47 CFR 65.700 - Determining the maximum allowable rate of return.
2010-10-01
... CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Maximum Allowable Rates of Return § 65.700 Determining the maximum allowable rate of return. (a) The maximum allowable rate of return for any exchange carrier's earnings on any access service category shall...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
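The 1/4-of-ultimate rule quoted in the § 230.24 snippet can be sketched numerically. This is an illustrative calculation only (the material strength below is hypothetical, not from the regulation):

```python
def max_allowable_stress(ultimate_tensile_strength_psi):
    """Per the 1/4-of-ultimate rule quoted from 49 CFR 230.24(a):
    the stress on a steam locomotive boiler component may not
    exceed one quarter of the material's ultimate tensile strength."""
    return ultimate_tensile_strength_psi / 4.0

# Example: a boiler plate with a (hypothetical) 60,000 psi ultimate strength
print(max_allowable_stress(60000))  # -> 15000.0
```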
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Scientific substantiation of the maximum allowable concentration of fluopicolide in water

Pelo I.M.
2014-03-01
Research was carried out to substantiate the maximum allowable concentration of fluopicolide in the water of reservoirs. Methods: a laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The effects of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household use were determined, and its subthreshold concentration by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by each main hazard criterion was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion: smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification), 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.
Maximum Allowable Dynamic Load of Mobile Manipulators with Stability Consideration
Heidary H. R.
2015-09-01
A high payload-to-mass ratio is one of the advantages of mobile robot manipulators. In this paper, a general formula for finding the maximum allowable dynamic load (MADL) of a wheeled mobile robot is presented. Mobile manipulators operating in field environments will be required to manipulate large loads, and to perform such tasks on uneven terrain, which may drive the system toward dangerous tip-over instability. Therefore, the method is extended to find the MADL of mobile manipulators with stability taken into account. The Moment-Height Stability (MHS) criterion is used as an index of system stability. The full dynamic model of the wheeled mobile base and mounted manipulator is considered, including the non-holonomic constraint dynamics. A method for determining the maximum allowable load is then described, subject to actuator constraints and with the stability limitation imposed as an additional constraint. The actuator torque constraint is applied using the speed-torque characteristic curve of a typical DC motor. To verify the effectiveness of the presented algorithm, several simulation studies of a two-link planar manipulator mounted on a mobile base are presented and the results are discussed.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Increase in maximum allowable working pressure. 52.01-55... POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When the maximum allowable working pressure of a boiler has been established, an increase in the pressure...
COMPASS' new magnet is placed inside the experiment, which will allow for maximum acceptance
Maximilien Brice
2005-01-01
A new magnet at CERN will give COMPASS (Common Muon Proton Apparatus for Structure and Spectroscopy) maximum acceptance. Thanks to the 5-tonne, 2.5 m long magnet, which arrived last December, many more events are expected compared with previous data-taking.
2012-09-13
... Paperwork Reduction Act (44 U.S.C. 3501 et seq.); Is certified as not having a significant economic impact... into the new Missouri rule include: --10 CSR 10-2.040, Maximum Allowable Emission of Particulate Matter from Fuel Burning Equipment Used for Indirect Heating, for the Kansas City Metropolitan Area; --10 CSR...
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
2010-10-01
... operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating... design pressure of the weakest element in the segment, determined in accordance with subparts C and D of... K of this part, if any variable necessary to determine the design pressure under the design...
2010-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum and minimum allowable operating...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
2010-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage,...
Impact of Maximum Allowable Cost on CO2 Storage Capacity in Saline Formations.
Mathias, Simon A; Gluyas, Jon G; Goldthorpe, Ward H; Mackay, Eric J
2015-11-17
Injecting CO2 into deep saline formations represents an important component of many greenhouse-gas-reduction strategies for the future. A number of authors have posed concern over the thousands of injection wells likely to be needed. However, a more important criterion than the number of wells is whether the total cost of storing the CO2 is market-bearable. Previous studies have sought to determine the number of injection wells required to achieve a specified storage target. Here an alternative methodology is presented whereby we specify a maximum allowable cost (MAC) per ton of CO2 stored, a priori, and determine the corresponding potential operational storage capacity. The methodology takes advantage of an analytical solution for pressure build-up during CO2 injection into a cylindrical saline formation, accounting for two-phase flow, brine evaporation, and salt precipitation around the injection well. The methodology is applied to 375 saline formations from the U.K. Continental Shelf. Parameter uncertainty is propagated using Monte Carlo simulation with 10 000 realizations for each formation. The results show that MAC affects both the magnitude and spatial distribution of potential operational storage capacity on a national scale. Different storage prospects can appear more or less attractive depending on the MAC scenario considered. It is also shown that, under high well-injection rate scenarios with relatively low cost, there is adequate operational storage capacity for the equivalent of 40 years of U.K. CO2 emissions.
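The Monte Carlo step of the MAC methodology can be sketched as below. This is a toy stand-in under stated assumptions: the real study propagates uncertainty through an analytical two-phase pressure build-up solution, whereas here the "capacity" is a made-up proxy (permeability × thickness) and the uniform parameter ranges are invented for illustration:

```python
import random

def storage_capacity_mc(perm_range_mD, thick_range_m, n_real=10_000, seed=0):
    """Toy Monte Carlo propagation in the spirit of the study: sample
    uncertain formation parameters n_real times and return the median
    of a stand-in capacity estimate. Parameter distributions (uniform)
    and the capacity proxy are assumptions for illustration only."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_real):
        k = rng.uniform(*perm_range_mD)   # permeability, mD
        h = rng.uniform(*thick_range_m)   # formation thickness, m
        samples.append(k * h * 1e-3)      # proxy capacity, arbitrary units
    samples.sort()
    return samples[len(samples) // 2]     # median over realizations

median_capacity = storage_capacity_mc((10, 100), (20, 200))
```

Repeating this per formation (the study uses 375 of them) yields a capacity distribution for each, which can then be screened against a maximum allowable cost scenario.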
Iammarino, Marco; Di Taranto, Aurelia; Muscarella, Marilena
2012-02-01
Sulphiting agents are commonly used food additives. They are not allowed in fresh meat preparations. In this work, 2250 fresh meat samples were analysed to establish the maximum concentration of sulphites that can be considered "natural" and therefore be admitted in fresh meat preparations. The analyses were carried out by an optimised Monier-Williams method, and positive samples were confirmed by ion chromatography. Sulphite concentrations higher than the screening method LOQ (10.0 mg·kg⁻¹) were found in 100 samples. Concentrations higher than 76.6 mg·kg⁻¹, attributable to sulphiting agent addition, were registered in 40 samples. Concentrations lower than 41.3 mg·kg⁻¹ were registered in 60 samples. Taking into account the distribution of sulphite concentrations obtained, it is plausible to estimate a maximum allowable limit of 40.0 mg·kg⁻¹ (expressed as SO₂). Below this value the samples can be considered "compliant".
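The paper's reasoning, choosing a limit just above the highest concentration still attributable to natural origin, can be sketched as follows. The thresholds (LOQ 10.0, natural cluster below 41.3 mg/kg) come from the abstract; the round-to-nearest-10 rule is an assumption added for illustration:

```python
def natural_limit(concentrations_mg_kg, loq=10.0, natural_ceiling=41.3):
    """Toy illustration: treat values below the LOQ as non-detects,
    keep only concentrations within the 'natural' cluster reported in
    the abstract, and round the maximum to the nearest 10 mg/kg."""
    natural = [c for c in concentrations_mg_kg if loq <= c <= natural_ceiling]
    if not natural:
        return None
    return round(max(natural), -1)  # nearest 10 mg/kg

# Hypothetical samples: three 'natural' values and two with added sulphites
print(natural_limit([12.5, 33.0, 41.3, 80.2, 150.0]))  # -> 40.0
```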
Lihui Guo
2015-01-01
With the increasing penetration of wind power, the randomness and volatility of wind power output have a growing impact on the safety and steady operation of the power system. To address the uncertainty of wind speed and load demand, this paper applies box-set robust optimization theory to determine the maximum allowable installed capacity of a wind farm, subject to node voltage and line capacity constraints. Optimization duality theory is used to simplify the model and convert the uncertain quantities in the constraints into certain ones. For the case of multiple wind farms, a bilevel optimization model for calculating penetration capacity is proposed. Results on the IEEE 30-bus system show that the proposed robust optimization model is correct and effective, and indicate that the fluctuation ranges of wind speed and load, and the importance of the grid connection points of the wind farms and load points, affect the allowable wind farm capacity.
Evaluation of maximum allowable temperature inside basket of dry storage module for CANDU spent fuel
Lee, Kyung Ho; Yoon, Jeong Hyoun; Chae, Kyoung Myoung; Choi, Byung Il; Lee, Heung Young; Song, Myung Jae [Nuclear Environment Technology Institute, Taejon (Korea, Republic of); Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)
2002-10-01
This study provides a maximum allowable fuel temperature through a preliminary evaluation of the UO2 weight gain that may occur on a failed (breached-sheathing) element of a fuel bundle. Intact bundles would not be affected, as their UO2 would not be in contact with the air in the fuel storage basket. The analysis is made for the MACSTOR/KN-400 operated in Wolsong ambient air temperature conditions. The design basis fuel is a 6-year-cooled fuel bundle that, on average, has reached a burnup of 7,800 MWd/MTU. The fuel bundle considered for analysis is assumed to have a high burnup of 12,000 MWd/MTU and to be located in a hot basket. The MACSTOR/KN-400 has the same air circuit as the MACSTOR, and the circuit requires a slightly higher temperature difference to reject the increased heat load. The maximum temperature of a high-burnup bundle stored in the new MACSTOR/KN-400 is expected to be about 9 °C higher than the fuel temperature of the MACSTOR at an equivalent constant ambient temperature. This temperature increase will in turn raise the estimated UO2 weight gain from 0.06% (MACSTOR under Wolsong conditions) to 0.13% for the MACSTOR/KN-400. Compared with the acceptable UO2 weight gain of 0.6%, a very comfortable safety factor of 4 to 5 is thus maintained for the new module against unacceptable stresses in the fuel sheathing. On the basis of the UO2 weight gain, the maximum allowable fuel temperature was shown to be 164 °C.
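The quoted safety factor of "4 to 5" follows directly from the two weight-gain figures in the abstract (0.6% acceptable vs. 0.13% expected), as a quick check shows:

```python
acceptable_gain = 0.6   # % UO2 weight gain considered acceptable (from the study)
expected_gain = 0.13    # % estimated for the MACSTOR/KN-400

# Ratio of acceptable to expected weight gain
safety_factor = acceptable_gain / expected_gain
print(round(safety_factor, 1))  # -> 4.6, consistent with the quoted "4 to 5"
```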
The Maximum Free Magnetic Energy Allowed in a Solar Active Region
Moore, Ronald L.; Falconer, David A.
2009-01-01
Two whole-active-region magnetic quantities that can be measured from a line-of-sight magnetogram are ^L WL_SG, a gauge of the total free energy in an active region's magnetic field, and ^L Θ, a measure of the active region's total magnetic flux. From these two quantities measured from 1865 SOHO/MDI magnetograms that tracked 44 sunspot active regions across the 0.5 R_Sun central disk, together with each active region's observed production of CMEs, X flares, and M flares, Falconer et al. (2009, ApJ, submitted) found that (1) active regions have a maximum attainable free magnetic energy that increases with the magnetic size ^L Θ of the active region, (2) in (log ^L WL_SG, log ^L Θ) space, CME/flare-productive active regions are concentrated in a straight-line main sequence along which the free magnetic energy is near its upper limit, and (3) X and M flares are restricted to large active regions. Here, from (a) these results, (b) the observation that even the greatest X flares produce at most only subtle changes in active-region magnetograms, and (c) measurements from MSFC vector magnetograms and from MDI line-of-sight magnetograms showing that practically all sunspot active regions have nearly the same area-averaged magnetic field strength, B̄ = Θ/A ≈ 300 G, where Θ is the active region's total photospheric flux of field stronger than 100 G and A is the area of that flux, we infer that (1) the maximum allowed ratio of an active region's free magnetic energy to its potential-field energy is 1, and (2) any one CME/flare eruption releases no more than a small fraction (less than 10%) of the active region's free magnetic energy. This work was funded by NASA's Heliophysics Division and NSF's Division of Atmospheric Sciences.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Estimation of Maximum Allowable PV Connection to LV Residential Power Networks
Demirok, Erhan; Sera, Dezso; Teodorescu, Remus
2011-01-01
Maximum photovoltaic (PV) hosting capacity of low-voltage (LV) power networks is mainly restricted either by the thermal limits of network components or by grid voltage quality degraded by high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar potential, which can be recovered by reinforcing the distribution transformer or by using solar inverters with new grid-support features. This study presents a methodology for estimating the maximum PV hosting capacity, including an IEC 60076-7 based thermal model of the distribution transformer. Part of a real distribution network in the Braedstrup suburban area of Denmark is used in simulation as a case study. Furthermore, various solutions (thermally upgraded insulation paper in transformers, reactive power services from solar inverters, etc.) are applied to the network under investigation to examine the achievable PV penetration level, and the key findings are discussed.
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge of the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results concern guarantees on the existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
The tropical lapse rate steepened during the Last Glacial Maximum.
Loomis, Shannon E; Russell, James M; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F Alayne; Kelly, Meredith A
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate change is uncertain because of poor constraints on high-elevation temperature during past climate states. We present a 25,000-year temperature reconstruction from Mount Kenya, East Africa, which demonstrates that cooling during the Last Glacial Maximum was amplified with elevation and hence that the lapse rate was significantly steeper than today. Comparison of our data with paleoclimate simulations indicates that state-of-the-art models underestimate this lapse-rate change. Consequently, future high-elevation tropical warming may be even greater than predicted.
Maximum orbit plane change with heat-transfer-rate considerations
Lee, J. Y.; Hull, D. G.
1990-01-01
Two aerodynamic maneuvers are considered for maximizing the plane change of a circular orbit: gliding flight with a maximum thrust segment to regain lost energy (aeroglide) and constant altitude cruise with the thrust being used to cancel the drag and maintain a high energy level (aerocruise). In both cases, the stagnation heating rate is limited. For aeroglide, the controls are the angle of attack, the bank angle, the time at which the burn begins, and the length of the burn. For aerocruise, the maneuver is divided into three segments: descent, cruise, and ascent. During descent the thrust is zero, and the controls are the angle of attack and the bank angle. During cruise, the only control is the assumed-constant angle of attack. During ascent, a maximum thrust segment is used to restore lost energy, and the controls are the angle of attack and bank angle. The optimization problems are solved with a nonlinear programming code known as GRG2. Numerical results for the Maneuverable Re-entry Research Vehicle with a heating-rate limit of 100 Btu/ft^2-s show that aerocruise gives a maximum plane change of 2 deg, which is only 1 deg larger than that of aeroglide. On the other hand, even though aerocruise requires two thrust levels, the cruise characteristics of constant altitude, velocity, thrust, and angle of attack are easy to control.
Eun-Chan Kim
2016-06-01
The International Convention for the Control and Management of Ships' Ballast Water and Sediments was adopted by the IMO (International Maritime Organization) on 13 February 2004. Fifty-seven ballast water management systems have been granted basic approval of their active substances by the IMO, of which thirty-seven have been granted final approval. This paper studies the maximum allowable dosage of active substances produced by IMO-approved ballast water management systems that use electrolysis. The allowable dosage of active substances for electrolysis systems is expressed as TRO (Total Residual Oxidant). The maximum allowable TRO dosage is a very important factor for electrolysis-based ballast water management systems, because the system is controlled by the TRO value and IMO approvals are given on the basis of the maximum allowable TRO dosage for the treatment and discharge of ballast water. However, the approved maximum allowable TRO concentrations differ widely among management systems, ranging from 1 to 15 ppm. These discrepancies may depend on whether a filter is used, differences in the specifications of the electrolysis module, the kind and number of tested organisms, differences in water quality, etc. Ship owners are responsible for satisfying the performance standard of the IMO convention in the ports of each country, and therefore need to review carefully whether a given ballast water management system can satisfy that standard.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T , we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T , is independent of genome size, a relationship which is observed across broad groups of real organisms.
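The paper's key result, that the optimal per-genome mutation rate is k/T regardless of genome size, can be illustrated with a short sketch (the numeric values below are arbitrary examples, not from the paper):

```python
def optimal_per_site_rate(k, T, L):
    """Illustration of the stated result: the optimal per-genome
    mutation rate is U = k/T (k spin flips per environmental period
    of T generations), independent of genome size L. The implied
    per-site rate therefore scales as 1/L."""
    U = k / T          # optimal mutations per genome per generation
    return U / L       # implied per-site mutation rate

# Doubling the genome halves the per-site rate but leaves U = k/T fixed
r1 = optimal_per_site_rate(k=2, T=10, L=1000)
r2 = optimal_per_site_rate(k=2, T=10, L=2000)
print(r1, r2)
```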
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activity from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (βa) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on βa is found to be within estimation at the 90% level of confidence, and the relative prediction error is less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% level of confidence.
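A prediction of this kind amounts to regressing the cycle maximum on the early rising rate. The sketch below uses ordinary least squares as a stand-in for the paper's correlation analysis; the (rising rate, maximum) pairs are made up for illustration and are not the paper's data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x. Returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical (rising rate beta_a, solar maximum Rmax) pairs
beta = [2.0, 3.5, 5.0, 6.5]
rmax = [60.0, 90.0, 120.0, 150.0]
a, b = fit_line(beta, rmax)
print(a + b * 4.0)  # predicted Rmax for a rising rate of 4.0 -> 100.0
```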
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (ṀO2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR in fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10-fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent within species (intraspecific variation). Because MMR sets the upper bound of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict the effects of climate change on fish populations. Here, the various techniques used to elicit and measure MMR in fish species with contrasting lifestyles are outlined, and the relevance of MMR to the ecology, fitness and climate-change resilience of fishes is discussed.
Dobranich, D.
1987-08-01
Calculations were performed to determine the mass of a space-based platform as a function of the maximum-allowed operating temperature of the electrical equipment within the platform payload. Two computer programs were used in conjunction to perform these calculations. The first program was used to determine the mass of the platform reactor, shield, and power conversion system. The second program was used to determine the mass of the main and secondary radiators of the platform. The main radiator removes the waste heat associated with the power conversion system and the secondary radiator removes the waste heat associated with the platform payload. These calculations were performed for both Brayton and Rankine cycle platforms with two different types of payload cooling systems: a pumped-loop system (a heat exchanger with a liquid coolant) and a refrigerator system. The results indicate that increases in the maximum-allowed payload temperature offer significant platform mass savings for both the Brayton and Rankine cycle platforms with either the pumped-loop or refrigerator payload cooling systems. Therefore, with respect to platform mass, the development of high temperature electrical equipment would be advantageous. 3 refs., 24 figs., 7 tabs.
Morrison, Glenn; Shaughnessy, Richard; Shu, Shi
2011-02-01
A Monte Carlo analysis of indoor ozone levels in four cities was applied to provide guidance to regulatory agencies on setting maximum ozone emission rates from consumer appliances. Measured distributions of air exchange rates, ozone decay rates and outdoor ozone levels at monitoring stations were combined with a steady-state indoor air quality model, resulting in emission rate distributions (mg h⁻¹) as a function of the percentage of building hours protected from exceeding a target maximum indoor concentration of 20 ppb. Whole-year, summer and winter results for Elizabeth, NJ, Houston, TX, Windsor, ON, and Los Angeles, CA exhibited strong regional differences, primarily due to differences in air exchange rates. Infiltration of ambient ozone at higher average air exchange rates significantly reduces allowable emission rates, even though air exchange also dilutes emissions from appliances. For Houston, TX and Windsor, ON, which have lower average residential air exchange rates, emission rates ranged from -1.1 to 2.3 mg h⁻¹ for scenarios that protect 80% or more of building hours from experiencing ozone concentrations greater than 20 ppb in summer. For Los Angeles, CA and Elizabeth, NJ, with higher air exchange rates, only negative emission rates were allowable to provide the same level of protection. For the 80th percentile residence, we estimate that an 8-h average limit concentration of 20 ppb would be exceeded, even in the absence of an indoor ozone source, on 40 or more days per year in any of the cities analyzed. The negative emission rates emerging from the analysis suggest that only a zero-emission rate standard is prudent for Los Angeles, Elizabeth, NJ and other regions with higher summertime air exchange rates. For regions such as Houston with lower summertime air exchange rates, the higher allowable emission rates would likely increase occupant exposure to the undesirable products of ozone reactions, thus reinforcing the need for a zero-emission rate standard.
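The steady-state mass balance behind this kind of analysis can be sketched as follows. This is a minimal illustration, not the authors' Monte Carlo model: the room volume, penetration factor, and ppb-to-µg/m³ conversion are assumptions chosen for the sketch.

```python
def max_emission_rate(ach, decay, c_out_ppb, c_target_ppb=20.0,
                      volume_m3=300.0, penetration=1.0):
    """Maximum allowable indoor ozone emission rate (mg/h).

    Steady-state mass balance:
        ach * P * C_out + E / V = (ach + decay) * C_in
    solved for the emission rate E that holds C_in at the target.
    ach and decay are in 1/h; concentrations are given in ppb.
    A negative result means infiltration alone already exceeds the target.
    """
    UG_PER_M3_PER_PPB = 1.96  # ozone at ~25 degC and 1 atm (assumed conditions)
    c_target = c_target_ppb * UG_PER_M3_PER_PPB
    c_out = c_out_ppb * UG_PER_M3_PER_PPB
    e_ug_per_h = volume_m3 * ((ach + decay) * c_target
                              - ach * penetration * c_out)
    return e_ug_per_h / 1000.0  # ug/h -> mg/h
```

At high air exchange and high outdoor ozone the allowable rate goes negative, mirroring the abstract's conclusion that only a zero-emission standard is prudent in such regions.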
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10^4-10^7 years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and as ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy- and topography-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling plutons of finite thickness to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and the maximum rate of explosion pressure rise according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds. Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m³, where its value is 7.95 bar. The fastest pressure rise was also observed at a concentration of 450 g/m³, with a value of 68 bar/s.
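The reported peak rate can be volume-normalized via the cubic law used by EN 14034-2. The test chamber volume is not stated in the record, so the 20 L (0.02 m³) sphere below is an assumption for illustration only.

```python
def deflagration_index(dp_dt_max_bar_s, chamber_volume_m3):
    """Cubic-law deflagration index K_St = (dp/dt)_max * V**(1/3), in bar*m/s.

    Normalizing the maximum rate of pressure rise by the cube root of the
    chamber volume makes results from different vessel sizes comparable.
    """
    return dp_dt_max_bar_s * chamber_volume_m3 ** (1.0 / 3.0)

# With the reported 68 bar/s and an assumed 20 L test sphere:
k_st = deflagration_index(68.0, 0.02)
```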
The tropical lapse rate steepened during the Last Glacial Maximum
Loomis, S.E.; Russell, J.M.; Verschuren, D.; Morrill, C.; De Cort, G.; Sinninghe Damsté, J.S.; Olago, D.; Eggermont, H.; Street-Perrott, F.A.; Kelly, M.A.
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate
5 CFR 591.104 - Higher initial maximum uniform allowance rate.
2010-01-01
... specific items required for the basic uniform and the average total uniform cost for the affected employees... requirement. (e) So that OPM can evaluate agencies' use of this authority and provide the Congress and others... initial year a new style or type of minimum basic uniform is required for a category of employees, an...
Nitric-glycolic flowsheet testing for maximum hydrogen generation rate
Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is developing for implementation a flowsheet with a new reductant to replace formic acid. Glycolic acid has been tested over the past several years and found to effectively replace the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the chemical generation of hydrogen and ammonia, allows purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective adjustment of the SRAT/SME rheology, and is favorable with respect to melter flammability. The objective of this work was to perform DWPF Chemical Process Cell (CPC) testing at conditions that would bound the catalytic hydrogen production for the nitric-glycolic flowsheet.
A Maximum Information Rate Quaternion Filter for Spacecraft Attitude Estimation
Reijneveld, J.; Maas, A.; Choukroun, D.; Kuiper, J.M.
2011-01-01
Building on previous works, this paper introduces a novel continuous-time stochastic optimal linear quaternion estimator under the assumptions of rate gyro measurements and of vector observations of the attitude. A quaternion observation model, whose observation matrix is rank-degenerate, is reduced
78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans
2013-03-04
... have removed the term. Comment: Don't remove the ``average agricultural loan customer'' definition. The... the following methods: Federal eRulemaking Portal: Go to http://www.regulations.gov . Follow the.... Comment: FSA should let the market dictate what interest rate lenders charge guaranteed borrowers, rather...
Daniel, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Rudisill, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-07-17
As part of the Spent Nuclear Fuel (SNF) processing campaign, H-Canyon is planning to begin dissolving High Flux Isotope Reactor (HFIR) fuel in late FY17 or early FY18. Each HFIR fuel core contains inner and outer fuel elements which were fabricated from uranium oxide (U_{3}O_{8}) dispersed in a continuous Al phase using traditional powder metallurgy techniques. Fuels fabricated in this manner, like other SNFs processed in H-Canyon, dissolve by the same general mechanisms with similar gas generation rates and the production of H_{2}. The HFIR fuel cores will be dissolved using a flowsheet developed by the Savannah River National Laboratory (SRNL) in either the 6.4D or 6.1D dissolver using a unique insert. Multiple cores will be charged to the same dissolver solution, maximizing the concentration of dissolved Al. The recovered U will be down-blended into low-enriched U for subsequent use as commercial reactor fuel. During the development of the HFIR fuel dissolution flowsheet, the cycle time for the initial core was estimated at 28 to 40 h. Once the cycle is complete, H-Canyon personnel will open the dissolver and probe the HFIR insert wells to determine the height of any fuel fragments which did not dissolve. Before the next core can be charged to the dissolver, an analysis of the potential for H_{2} gas generation must show that the combined surface area of the fuel fragments and the subsequent core will not generate H_{2} concentrations in the dissolver offgas that exceed 60% of the lower flammability limit (LFL) of H_{2} at 200 °C. The objective of this study is to identify the maximum fuel fragment height, as a function of the Al concentration in the dissolving solution, which will provide criteria for charging successive HFIR cores to an H-Canyon dissolver.
Why does steady-state magnetic reconnection have a maximum local rate of order 0.1?
Liu, Yi-Hsin; Guo, F; Daughton, W; Li, H; Cassak, P A; Shay, M A
2016-01-01
Simulations suggest collisionless steady-state magnetic reconnection of Harris-type current sheets proceeds with a rate of order 0.1, independent of dissipation mechanism. We argue this long-standing puzzle is a result of constraints at the magnetohydrodynamic (MHD) scale. We perform a scaling analysis of the reconnection rate as a function of the opening angle made by the upstream magnetic fields, finding a maximum reconnection rate close to 0.2. The predictions compare favorably to particle-in-cell simulations of relativistic electron-positron and non-relativistic electron-proton reconnection. The fact that simulated reconnection rates are close to the predicted maximum suggests reconnection proceeds near the most efficient state allowed at the MHD-scale. The rate near the maximum is relatively insensitive to the opening angle, potentially explaining why reconnection has a similar fast rate in differing models.
9 CFR 381.68 - Maximum inspection rates-New turkey inspection system.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Maximum inspection rates-New turkey inspection system. 381.68 Section 381.68 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... Procedures § 381.68 Maximum inspection rates—New turkey inspection system. (a) The maximum inspection...
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
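A regression with the slope fixed at 0.75 reduces to estimating a group-specific intercept as the mean residual in log-log space. The sketch below illustrates that step only; it is not the authors' code, and the input data are placeholders.

```python
import math

def fixed_slope_intercept(masses, growth_rates, slope=0.75):
    """Least-squares intercept of log10(rate) vs. log10(mass) with a fixed slope.

    With the slope pinned, the LS intercept is simply the mean of
    log10(rate) - slope * log10(mass) over the group's data points.
    """
    residuals = [math.log10(r) - slope * math.log10(m)
                 for m, r in zip(masses, growth_rates)]
    return sum(residuals) / len(residuals)
```

Comparing fitted intercepts between groups then gives their rate ratio at any common body mass, which is how offsets like the ~10-100x endotherm/ectotherm differences can be expressed.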
Esfandiar, Habib; KoraYem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)
2015-09-15
In this study, the researchers try to examine nonlinear dynamic analysis and determine Dynamic load carrying capacity (DLCC) in flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT) considering the effects of shear and rotational inertia. To get rid of the risk of shear locking, a new procedure is presented based on mixed finite element formulation. In the method proposed, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of manipulators will be done by taking into account small and large deformation models and using extended Hamilton method. System motion equations are obtained by using nonlinear relationship between displacements-strain and 2nd PiolaKirchoff stress tensor. In addition, a comprehensive formulation will be developed to calculate DLCC of the flexible manipulators during the path determined considering the constraints end effector accuracy, maximum torque in motors and maximum stress in manipulators. Simulation studies are conducted to evaluate the efficiency of the method proposed taking two-link flexible and fixed base manipulators for linear and circular paths into consideration. Experimental results are also provided to validate the theoretical model. The findings represent the efficiency and appropriate performance of the method proposed.
Kalafut, Bennett; Visscher, Koen
2008-10-01
Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
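For the simplest case the abstract generalizes from, a one-step pathway with exponentially distributed dwell times, the maximum-likelihood rate constant has a closed form. This is a minimal sketch of that base case, not the T7 RNA polymerase analysis itself.

```python
def mle_exponential_rate(dwell_times):
    """ML rate constant for exponentially distributed dwell times.

    Maximizing L(k) = prod_i k * exp(-k * t_i) gives k_hat = n / sum(t_i).
    Multi-step pathways convolve several exponentials, so identifying their
    rate constants requires fitting the full dwell-time distribution, as
    described in the abstract.
    """
    return len(dwell_times) / sum(dwell_times)
```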
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, K S; Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions that quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states are discussed, provided the protocols consist of steps that map Bell-diagonal states to Bell-diagonal states and finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result, a necessary and a sufficient condition on asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of the maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
2011-08-09
... From the Federal Register Online via the Government Publishing Office GENERAL SERVICES ADMINISTRATION Federal Travel Regulation (FTR); Relocation Allowances--Standard Mileage Rate for Moving Purposes AGENCY: Office of Governmentwide Policy, General Services Administration (GSA). ACTION: Notice of...
Statistical properties of the maximum Lyapunov exponent calculated via the divergence rate method.
Franchi, Matteo; Ricci, Leonardo
2014-12-01
The embedding of a time series provides a basic tool to analyze dynamical properties of the underlying chaotic system. To this purpose, the choice of the embedding dimension and lag is crucial. Although several methods have been devised to tackle the issue of the optimal setting of these parameters, a conclusive criterion to make the most appropriate choice is still lacking. An accepted procedure to rank different embedding methods relies on the evaluation of the maximum Lyapunov exponent (MLE) out of embedded time series that are generated by chaotic systems with explicit analytic representation. The MLE is evaluated as the local divergence rate of nearby trajectories. Given a system, embedding methods are ranked according to how close such MLE values are to the true MLE. This is provided by the so-called standard method in a way that exploits the mathematical description of the system and does not require embedding. In this paper we study the dependence of the finite-time MLE evaluated via the divergence rate method on the embedding dimension and lag in the case of time series generated by four systems that are widely used as references in the scientific literature. We develop a completely automatic algorithm that provides the divergence rate and its statistical uncertainty. We show that the uncertainty can provide useful information about the optimal choice of the embedding parameters. In addition, our approach allows us to find which systems provide suitable benchmarks for the comparison and ranking of different embedding methods.
Daniel L. Rabosky
2006-01-01
Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
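The simplest rate-constant model such likelihood comparisons start from is the pure-birth (Yule) process, whose ML speciation rate has a closed form. The sketch below works under that model only; it is not LASER's R implementation.

```python
def yule_mle(branching_times):
    """ML speciation rate for a pure-birth (Yule) reconstructed tree.

    branching_times: ages of the n-1 branching events of a crown tree,
    in time before present (present = 0), crown age included.  With i
    lineages alive during an interval, the waiting time to the next
    event is exponential with rate i * lambda, so the MLE is
    (number of events after the crown split) / (total lineage-time).
    """
    times = sorted(branching_times, reverse=True) + [0.0]  # append present
    n_lineages = 2
    exposure = 0.0  # total lineage-time, sum of i * interval duration
    for older, younger in zip(times, times[1:]):
        exposure += n_lineages * (older - younger)
        n_lineages += 1
    return (len(branching_times) - 1) / exposure
```

A rate-variable model would instead split the tree into epochs with separate rates and compare likelihoods against this constant-rate fit.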
13 CFR 107.845 - Maximum rate of amortization on Loans and Debt Securities.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum rate of amortization on... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES Financing of Small Businesses by Licensees Structuring... rate of amortization on Loans and Debt Securities. The principal of any Loan (or the loan portion...
The Scaling of Maximum and Basal Metabolic Rates of Mammals and Birds
Barbosa, L A; Silva, J K L; Barbosa, Lauro A.; Garcia, Guilherme J. M.; Silva, Jafferson K. L. da
2004-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as $M^{6/7}$, maximum heart rate as $M^{-1/7}$, and muscular capillary density as $M^{-1/7}$, in agreement with data.
17 CFR 148.7 - Rulemaking on maximum rates for attorney fees.
2010-04-01
... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Rulemaking on maximum rates for attorney fees. 148.7 Section 148.7 Commodity and Securities Exchanges COMMODITY FUTURES TRADING... increase in the cost of living or by special circumstances (such as limited availability of...
The 220-age equation does not predict maximum heart rate in children and adolescents
Verschuren, Olaf; Maltais, Desiree B.; Takken, Tim
2011-01-01
Our primary purpose was to provide maximum heart rate (HR(max)) values for ambulatory children with cerebral palsy (CP). The secondary purpose was to determine the effects of age, sex, ambulatory ability, height, and weight on HR(max). In 362 ambulatory children and adolescents with CP (213 males an
Maximum initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability
Dell, Z. R.; Pandian, A.; Bhowmick, A. K.; Swisher, N. C.; Stanic, M.; Stellingwerf, R. F.; Abarzhi, S. I.
2017-09-01
We focus on the classical problem of the dependence of the initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability (RMI) on the initial conditions, by developing a novel empirical model and by employing rigorous theories and Smoothed Particle Hydrodynamics simulations to describe the simulation data with statistical confidence in a broad parameter regime. For given values of the shock strength, fluid density ratio, and wavelength of the initial perturbation of the fluid interface, we find the maximum value of the RMI initial growth-rate, the corresponding amplitude scale of the initial perturbation, and the maximum fraction of interfacial energy. This amplitude scale is independent of the shock strength and density ratio and is a characteristic quantity of RMI dynamics. We find that the ratio of the initial and linear growth-rates of RMI decays exponentially with the initial perturbation amplitude, in excellent agreement with available data.
Effects of electric field on the maximum electro-spinning rate of silk fibroin solutions.
Park, Bo Kyung; Um, In Chul
2017-02-01
Owing to the excellent cyto-compatibility of silk fibroin (SF) and the simple fabrication of nano-fibrous webs, electro-spun SF webs have attracted much research attention in numerous biomedical fields. Because the production rate of electro-spun webs depends strongly on the electro-spinning rate used, increasing the electro-spinning rate is of practical importance. In the present study, to improve the electro-spinning rate of SF solutions, various electric fields were applied during electro-spinning of SF, and their effects on the maximum electro-spinning rate of the SF solution, as well as on the diameters and molecular conformations of the electro-spun SF fibers, were examined. As the electric field was increased, the maximum electro-spinning rate of the SF solution also increased. The maximum electro-spinning rate of a 13% SF solution could be increased 12× by increasing the electric field from 0.5 kV/cm (0.25 mL/h) to 2.5 kV/cm (3.0 mL/h). The dependence of the fiber diameter on the electric field was not significant for less-concentrated SF solutions (7-9% SF). On the other hand, at higher SF concentrations the electric field had a greater effect on the resulting fiber diameter. The electric field had a minimal effect on the molecular conformation and crystallinity index of the electro-spun SF webs.
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-07-30
In rechargeable wireless sensor networks (R-WSNs), it is critical that sensors operate at very low duty cycles in order to achieve the maximum data collection rate, because energy is only sporadically available. A sensor has to stay in a dormant state most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on the application scenario. Since a high, ideally maximum, data collection rate is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper bound on the data generation rate by formulating its maximization as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology control scheme is adopted to improve the network's performance. Through extensive simulations and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.
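The rate-maximization can be written as a small linear program. The sketch below uses scipy's centralized LP solver on a hypothetical two-sensor network; the topology, per-packet energy costs, and harvest budgets are all invented for illustration, and the paper's actual method is a distributed subgradient solver of the dual, not this.

```python
from scipy.optimize import linprog

# Variables x = [r1, r2, f_1s, f_12, f_2s]: data generation rates at nodes
# 1 and 2, and flows on edges 1->sink, 1->2, 2->sink.  Maximize r1 + r2.
c = [-1.0, -1.0, 0.0, 0.0, 0.0]          # linprog minimizes, so negate

# Flow conservation:  r1 = f_1s + f_12   and   r2 + f_12 = f_2s
A_eq = [[1.0, 0.0, -1.0, -1.0, 0.0],
        [0.0, 1.0, 0.0, 1.0, -1.0]]
b_eq = [0.0, 0.0]

# Per-cycle energy budgets (hypothetical transmit/receive costs per packet):
# node 1: 1.0*(f_1s + f_12) <= 10   node 2: 0.5*f_12 + 1.0*f_2s <= 6
A_ub = [[0.0, 0.0, 1.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 0.5, 1.0]]
b_ub = [10.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 5, method="highs")
max_total_rate = -res.fun  # 16.0 here: relaying via node 2 only wastes energy
```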
On the rate of convergence of the maximum likelihood estimator of a K-monotone density
Gao, FuChang; Wellner, Jon A.
2009-01-01
Bounds for the bracketing entropy of the classes of bounded K-monotone functions on [0, A] are obtained under both the Hellinger distance and the LP(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a K-monotone density.
Arkharov, A. M.; Dontsova, E. S.; Lavrov, N. A.; Romanovskii, V. R.
2014-04-01
Maximum allowable (ultimate) currents stably passing through a YBa2Cu3O7 superconducting current-carrying element are determined as a function of the silver (or copper) coating thickness, external magnetic field induction, and cooling conditions. It is found that if a magnetic system based on yttrium ceramics is cooled by a cryogenic coolant, the currents causing instabilities (instability onset currents) are almost independent of the coating thickness. If, however, liquid helium is used as the cooling agent, the ultimate current grows monotonically with the thickness of the stabilizing copper coating. It is shown that, depending on cooling conditions, the stable values of the current and electric field strength preceding the onset of instability may be either higher or lower than the a priori chosen critical parameters of the superconductor. These features should be taken into account in selecting the stable operating current of YBa2Cu3O7 superconducting windings.
Konstandinos G. Raptis
2012-01-01
The purpose of this study is the consideration of loading and contact problems encountered in rotating machine elements, especially toothed gears. The latter are among the most commonly used mechanical components for rotary motion and power transmission. This fact underlines the need for improved reliability and enhanced service life, which require precise and clear knowledge of the stress field at the gear tooth. This study investigates the maximum allowable stresses occurring during spur gear tooth meshing, computed using Niemann's formulas at the Highest Point of Single Tooth Contact (HPSTC). Gear material, module, power rating and number of teeth are considered as variable parameters. Furthermore, the maximum allowable stresses for maximum power transmission conditions are considered, keeping the other parameters constant. After applying Niemann's formulas to both loading cases, the derived results are compared to the respective estimates of the Finite Element Method (FEM) using ANSYS software. The comparison shows that deviations between the two methods remain low for both loading cases, independently of the applied power (either random or maximum) and the respective tangential load.
Daigger, Glen T; Siczka, John S; Smith, Thomas F; Frank, David A; McCorquodale, J A
The performance characteristics of relatively shallow (3.3 and 3.7 m sidewater depth in 30.5 m diameter) activated sludge secondary clarifiers were extensively evaluated during a 2-year testing program at the City of Akron Water Reclamation Facility (WRF), Ohio, USA. Testing included hydraulic and solids loading stress tests, and measurement of sludge characteristics (zone settling velocity (ZSV), dispersed and flocculated total suspended solids), and the results were used to calibrate computational fluid dynamic (CFD) models of the various clarifiers tested. The results demonstrated that good performance could be sustained at surface overflow rates in excess of 3 m/h, as long as the clarifier influent mixed liquor suspended solids (MLSS) concentration was controlled to below critical values. The limiting solids loading rate (SLR) was significantly lower than the value predicted by conventional solids flux analysis based on the measured ZSV/MLSS relationship. CFD analysis suggested that this resulted because mixed liquor entering the clarifier was being directed into the settled sludge blanket, diluting it and also creating a 'thin' concentration sludge blanket that overlays the thicker concentration sludge blanket typically expected. These results indicate the need to determine the allowable SLR for shallow clarifiers using approaches other than traditional solids flux analysis. A combination of actual testing and CFD analyses are demonstrated here to be effective in doing so.
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the interval between R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter, and apply the fast Fourier transform (FFT) for the realization of the correlation technique, such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
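The correlation step of the pipeline described above (detrending, absolute-value rectification, FFT-based correlation, peak search over plausible R-R lags) can be sketched as follows. This is a minimal illustration, not the authors' FPGA implementation; the function name and the heart-rate search range are assumptions:

```python
import numpy as np

def estimate_heart_rate(ecg, fs, hr_range=(40, 200)):
    """Estimate heart rate (bpm) from the dominant R-R periodicity.

    Detrend, rectify (simple nonlinear absolute operation) to suppress
    noise, then locate the peak of the FFT-based autocorrelation within
    a plausible R-R lag window.
    """
    x = np.abs(ecg - ecg.mean())                 # rectified, detrended signal
    n = 2 ** int(np.ceil(np.log2(2 * len(x))))   # zero-pad to avoid circular wrap-around
    spec = np.fft.rfft(x, n)
    acf = np.fft.irfft(spec * np.conj(spec))[: len(x)]  # Wiener-Khinchin theorem
    lo = int(fs * 60 / hr_range[1])              # shortest allowed R-R lag (samples)
    hi = int(fs * 60 / hr_range[0])              # longest allowed R-R lag (samples)
    lag = lo + int(np.argmax(acf[lo:hi]))        # dominant R-R interval in samples
    return 60.0 * fs / lag
```

On a clean periodic spike train this recovers the pacing rate exactly; on real dry-electrode data it would follow the baseline-removal and quality-indicator stages described in the abstract.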
Seymour, Roger S
2010-09-01
The effect of size of inflorescences, flowers and cones on maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (in μmol s⁻¹) varies with spadix mass (M, g) according to a power law in 15 species of Araceae. Thermal conductance (C, mW °C⁻¹) for spadices scales according to C = 18.5M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices with high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s⁻¹ g⁻¹ in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of large size in thermogenic flowers.
Minimally allowed ββ0ν rates from approximate flavor symmetries
Jenkins, James [Los Alamos National Laboratory]
2008-01-01
Neutrinoless double beta decay (ββ0ν) is the only realistic probe of Majorana neutrinos. In the standard scenario, dominated by light neutrino exchange, the process amplitude is proportional to m_ee, the e-e element of the Majorana mass matrix. This is expected to hold true for small ββ0ν rates (Γ_ββ0ν), even in the presence of new physics. Naively, current data allow for vanishing m_ee, but this should be protected by an appropriate flavor symmetry. All such symmetries lead to mass matrices inconsistent with oscillation phenomenology. Hence, Majorana neutrinos imply nonzero Γ_ββ0ν. I perform a spurion analysis to break all possible abelian symmetries that guarantee Γ_ββ0ν = 0 and search for minimally allowed m_ee values. Specifically, I survey 259 broken structures to yield m_ee values and current phenomenological constraints under a variety of scenarios. This analysis also extracts predictions for both neutrino oscillation parameters and kinematic quantities. Assuming reasonable tuning levels, I find that m_ee > 4 × 10⁻⁶ eV at 99% confidence. Bounds below this value would indicate the Dirac neutrino nature or the existence of new light (eV-MeV scale) degrees of freedom that can potentially be probed elsewhere. This limit can be raised by improvements in neutrino parameter measurements, particularly of the reactor mixing angle, depending on the best-fit parameter values. Such improvements will also significantly constrain the available model space and aid in future constructions.
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
The specific growth rate for P. aeruginosa and four mutator strains mutT, mutY, mutM and mutY–mutM is estimated by a suggested Maximum Likelihood (ML) method which takes the autocorrelation of the observations into account. For each bacteria strain, six wells of optical density (OD) measurements … The model that best describes the data is one taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model including the covariance structure. The ML method is shown to be a strong tool as it enables …
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate what this reporting rate is, by comparison of recoveries of rings offering a monetary reward, to ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
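The core reward-band estimator can be illustrated under the usual assumption that reward rings are reported with probability one, so the reporting rate is the ratio of ordinary-ring to reward-ring recovery rates. A minimal sketch (names are illustrative; the paper's full model additionally stratifies by time and geography and tests hypotheses by likelihood ratio):

```python
def reporting_rate(n_reward, rec_reward, n_control, rec_control):
    """MLE of the ring-reporting rate lambda.

    Assumes reward rings are reported with probability 1, so the
    ordinary-ring recovery rate f_c estimates lambda * f_r, where f_r
    is the recovery rate of reward rings. All names are illustrative.
    """
    f_r = rec_reward / n_reward      # recovery rate, reward rings
    f_c = rec_control / n_control    # recovery rate, ordinary rings
    return f_c / f_r                 # lambda-hat = f_c / f_r
```

For example, if 20 of 100 reward rings but only 100 of 1000 ordinary rings are recovered, the estimated reporting rate is 0.5.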
Determination of zero-coupon and spot rates from treasury data by maximum entropy methods
Gzyl, Henryk; Mayoral, Silvia
2016-08-01
An interesting and important inverse problem in finance consists of determining spot rates or prices of zero-coupon bonds when the only information available is the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method to treat such problems, which neither imposes an analytic form on the rates or bond prices nor assumes a model for the (random) evolution of the yields. The procedure consists of transforming the determination of the zero-coupon bond prices into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Michael D. Hare
2014-09-01
Full Text Available A field trial in northeast Thailand during 2011–2013 compared the establishment and growth of 2 Panicum maximum cultivars, Mombasa and Tanzania, sown at seeding rates of 2, 4, 6, 8, 10 and 12 kg/ha. In the first 3 months of establishment, higher sowing rates produced significantly more DM than sowing at 2 kg/ha, but thereafter there were no significant differences in total DM production between sowing rates of 2–12 kg/ha. Lower sowing rates produced fewer tillers/m2 than higher sowing rates, but these fewer tillers were significantly heavier than the more numerous smaller tillers produced by higher sowing rates. Mombasa produced 23% more DM than Tanzania in successive wet seasons (7,060 vs. 5,712 kg DM/ha from 16 June to 1 November 2011; and 16,433 vs. 13,350 kg DM/ha from 25 April to 24 October 2012). Both cultivars produced similar DM yields in the dry seasons (November–April), averaging 2,000 kg DM/ha in the first dry season and 1,750 kg DM/ha in the second dry season. Mombasa produced taller tillers (104 vs. 82 cm), longer leaves (60 vs. 47 cm), wider leaves (2 vs. 1.8 cm) and heavier tillers (1 vs. 0.7 g) than Tanzania, but fewer tillers/m2 (260 vs. 304). If farmers improve soil preparation and place more emphasis on sowing techniques, there is potential to dramatically reduce seed costs.
Keywords: Guinea grass, tillering, forage production, seeding rates, Thailand.
DOI: 10.17138/TGFT(2)246-253
Evangelia Karagianni
2016-04-01
Full Text Available By utilizing meteorological data such as relative humidity, temperature, pressure, rain rate and precipitation duration at eight (8) stations in the Aegean Archipelago over six recent years (2007–2012), the effect of the weather on electromagnetic (EM) wave propagation is studied. The EM wave propagation characteristics depend on atmospheric refractivity, and consequently on rain rate, which vary randomly in time and space. Therefore the statistics of radio refractivity, rain rate and related propagation effects are of main interest. This work investigates the maximum value of rain rate in monthly rainfall records for a 5 min interval, comparing it with different values of integration time as well as different percentages of time. The main goal is to determine the attenuation level for microwave links based on local rainfall data for various sites in Greece (L-zone), namely the Aegean Archipelago, with a view to improved accuracy as compared with more generic zone data available. A measurement of rain attenuation for a link in the S-band has been carried out and the data compared with predictions based on the standard ITU-R method.
Phylogenetic prediction of the maximum per capita rate of population growth.
Fagan, William F; Pearson, Yanthe E; Larsen, Elise A; Lynch, Heather J; Turner, Jessica B; Staver, Hilary; Noble, Andrew E; Bewick, Sharon; Goldberg, Emma E
2013-07-22
The maximum per capita rate of population growth, r, is a central measure of population biology. However, researchers can only directly calculate r when adequate time series, life tables and similar datasets are available. We instead view r as an evolvable, synthetic life-history trait and use comparative phylogenetic approaches to predict r for poorly known species. Combining molecular phylogenies, life-history trait data and stochastic macroevolutionary models, we predicted r for mammals of the Caniformia and Cervidae. Cross-validation analyses demonstrated that, even with sparse life-history data, comparative methods estimated r well and outperformed models based on body mass. Values of r predicted via comparative methods were in strong rank agreement with observed values and reduced mean prediction errors by approximately 68 per cent compared with two null models. We demonstrate the utility of our method by estimating r for 102 extant species in these mammal groups with unknown life-history traits.
Alvah C. Stahlnecker IV
2008-12-01
Full Text Available A percentage of either measured or predicted maximum heart rate is commonly used to prescribe and measure exercise intensity. However, maximum heart rate in athletes may be greater during competition or training than during laboratory exercise testing. Thus, the aim of the present investigation was to determine if endurance-trained runners train and compete at or above laboratory measures of 'maximum' heart rate. Maximum heart rates were measured utilising a treadmill graded exercise test (GXT) in a laboratory setting using 10 female and 10 male National Collegiate Athletic Association (NCAA) division 2 cross-country and distance event track athletes. Maximum training and competition heart rates were measured during a high-intensity interval training day (TR HR) and during competition (COMP HR) at an NCAA meet. TR HR (207 ± 5.0 b·min-1; means ± SEM) and COMP HR (206 ± 4 b·min-1) were significantly (p < 0.05) higher than maximum heart rates obtained during the GXT (194 ± 2 b·min-1). The heart rate at the ventilatory threshold measured in the laboratory occurred at 83.3 ± 2.5% of the heart rate at VO2max, with no differences between the men and women. However, the heart rate at the ventilatory threshold measured in the laboratory was only 77% of the maximal COMP HR or TR HR. In order to optimize training-induced adaptation, training intensity for NCAA division 2 distance event runners should not be based on laboratory assessment of maximum heart rate, but instead on maximum heart rate obtained either during training or during competition.
On the probability of exceeding allowable leak rates through degraded steam generator tubes
Cizelj, L.; Sorsek, I. [Jozef Stefan Institute, Ljubljana (Slovenia); Riesch-Oppermann, H. [Forschungszentrum Karlsruhe (Germany)
1997-02-01
This paper discusses some possible ways of predicting the behavior of the total leak rate through damaged steam generator tubes. This failure mode is of special concern in cases where most through-wall defects may remain in operation. A particular example is the application of the alternate (bobbin coil voltage) plugging criterion to Outside Diameter Stress Corrosion Cracking at the tube support plate intersections. It is the authors' aim to discuss some possible modeling options that could be applied to solve the problem formulated as: estimate the probability that the sum of all individual leak rates through degraded tubes exceeds a predefined acceptable value. The probabilistic approach aims at a reliable and computationally bearable estimate of the failure probability. A closed-form solution is given for the special case of exponentially distributed individual leak rates. Also, some possibilities for the use of computationally efficient First and Second Order Reliability Methods (FORM and SORM) are discussed. The first numerical example compares the results of the approximate methods with the closed-form results; SORM in particular shows acceptable agreement. The second numerical example considers a realistic case, the NPP in Krško, Slovenia.
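A closed form of the kind mentioned for exponentially distributed individual leak rates exists because a sum of n i.i.d. exponential variables follows an Erlang distribution. The sketch below reproduces the textbook Erlang survival function, not necessarily the paper's exact formulation:

```python
from math import exp

def prob_total_leak_exceeds(n, mean_leak, allowable):
    """P(sum of n i.i.d. exponential leak rates > allowable).

    The sum is Erlang(n, 1/mean_leak); its survival function is
        P = exp(-x) * sum_{k=0}^{n-1} x**k / k!,   x = allowable / mean_leak.
    Terms are accumulated iteratively to avoid computing factorials.
    """
    x = allowable / mean_leak
    term, total = 1.0, 0.0
    for k in range(n):
        total += term          # add x**k / k!
        term *= x / (k + 1)    # advance to x**(k+1) / (k+1)!
    return exp(-x) * total
```

For a single tube with mean leak rate 2 units and an allowable total of 2 units, this gives e⁻¹ ≈ 0.368; the probability grows with the number of leaking tubes at fixed allowable value.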
Benício, Kadja; Dias, Fernando A. L.; Gualdi, Lucien P.; Aliverti, Andrea; Resqueti, Vanessa R.; Fregonezi, Guilherme A. F.
2015-01-01
OBJECTIVE: To assess the influence of diaphragmatic activation control (diaphC) on Sniff Nasal Inspiratory Pressure (SNIP) and the Maximum Relaxation Rate of inspiratory muscles (MRR) in healthy subjects. METHOD: Twenty subjects [9 male; age: 23 (SD=2.9) years; BMI: 23.8 (SD=3) kg/m2; FEV1/FVC: 0.9 (SD=0.1)] performed 5 sniff maneuvers at two different moments: with or without instruction on diaphC. Before the first maneuver, a brief explanation was given to the subjects on how to perform the sniff test. For the sniff test with diaphC, subjects were instructed to perform intense diaphragm activation. The best SNIP and MRR values were used for analysis. MRR was calculated as the first derivative of pressure over time (dP/dtmax) normalized by the peak pressure (SNIP) from the same maneuver. RESULTS: SNIP values were significantly different between maneuvers with and without diaphC [without diaphC: -100 (SD=27.1) cmH2O; with diaphC: -72.8 (SD=22.3) cmH2O; p<0.0001], while normalized MRR values were not statistically different [without diaphC: -9.7 (SD=2.6); with diaphC: -8.9 (SD=1.5); p=0.19]. Without diaphC, 40% of the sample did not reach the appropriate sniff criteria found in the literature. CONCLUSION: Diaphragmatic control performed during the SNIP test influences the obtained inspiratory pressure, which is lower when diaphC is performed. However, there was no influence on normalized MRR. PMID:26578254
Werner, Stefanie [Umweltbundesamt, Dessau-Rosslau (Germany). Fachgebiet II 2.3
2011-05-15
When offshore wind farms are constructed, every single pile is hammered into the sediment by a hydraulic hammer. Noise levels at Horns Reef wind farm were in the range of 235 dB. The noise may cause damage to the auditory system of marine mammals. The Federal Environmental Office therefore recommends the definition of maximum permissible noise levels. Further, care should be taken that no marine mammals are found in the immediate vicinity of the construction site. (AKB)
Optimum poultry litter rates for maximum profit vs. yield in cotton production
Cotton lint yield responds well to increasing rates of poultry litter fertilization, but little is known of how optimum rates for yield compare with optimum rates for profit. The objectives of this study were to analyze cotton lint yield response to poultry litter application rates, determine and co...
(no author listed)
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimate (MQLE) is obtained in QLNM. In an important case, this rate is O(n^(-1/2)(log log n)^(1/2)), which is just the rate in the LIL of partial sums for i.i.d. variables, and thus cannot be improved.
On the maximum rate of change in sunspot number growth and the size of the sunspot cycle
Wilson, Robert M.
1990-01-01
Statistically significant correlations exist between the size (maximum amplitude) of the sunspot cycle and, especially, the maximum value of the rate of rise during the ascending portion of the sunspot cycle, where the rate of rise is computed either as the difference in the month-to-month smoothed sunspot number values or as the 'average rate of growth' in smoothed sunspot number from sunspot minimum. Based on the observed values of these quantities (equal to 10.6 and 4.63, respectively) as of early 1989, it is inferred that cycle 22's maximum amplitude will be about 175 + or - 30 or 185 + or - 10, respectively, where the error bars represent approximately twice the average error found during cycles 10-21 from the two fits.
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
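The moment-matching gradient underlying the learning dynamics discussed above can be illustrated at toy scale, where model expectations are computed exactly by enumerating all states instead of Gibbs sampling. This is a plain (unrectified) gradient-ascent sketch under assumed conventions, not the authors' rectified algorithm:

```python
import itertools
import numpy as np

def fit_maxent_ising(data, n_steps=2000, lr=0.1):
    """Fit fields h and couplings J of a small Ising model (spins +/-1)
    by ascent on the log-likelihood. The gradient of a maximum entropy
    model is (data moments - model moments); here model moments are
    exact, which is feasible only for a handful of spins.

    Returns h, J and the final model moments (means m, correlations c).
    """
    n = data.shape[1]
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    h, J = np.zeros(n), np.zeros((n, n))
    m_data = data.mean(0)                      # empirical means <s_i>
    c_data = (data.T @ data) / len(data)       # empirical correlations <s_i s_j>
    for _ in range(n_steps):
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max())                # unnormalized Boltzmann weights
        p /= p.sum()
        m = p @ states                         # model means
        c = states.T @ (states * p[:, None])   # model correlations
        h += lr * (m_data - m)                 # moment-matching updates
        J += lr * (c_data - c)
        np.fill_diagonal(J, 0.0)               # s_i^2 = 1 carries no information
    return h, J, m, c
```

Because the log-likelihood is concave in (h, J), this plain ascent converges; the paper's point is that convergence can be slow along ill-conditioned directions, which the rectified dynamics addresses.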
Ambarita, Himsar; Kishinami, Koki; Daimaruya, Mashashi; Tokura, Ikuo; Kawai, Hideki; Suzuki, Jun; Kobiyama, Mashayosi; Ginting, Armansyah
The present paper is a study of the optimum plate-to-plate spacing for maximum heat transfer rate in a flat-plate type heat exchanger. The heat exchanger consists of a number of parallel flat plates. The working fluids flow under the same operational conditions, either fixed pressure head or fixed fan power input. Parallel and counter flow directions of the working fluids were considered. While the volume of the heat exchanger is kept constant, the plate number was varied. Hence, the spacing between plates as well as the heat transfer rate vary, and there exists a maximum heat transfer rate. The objective of this paper is to seek the optimum plate-to-plate spacing for maximum heat transfer rate. In order to solve the problem, analytical and numerical solutions have been carried out. In the analytical solution, correlations for the optimum plate-to-plate spacing as a function of non-dimensional parameters were developed. Furthermore, a numerical simulation was carried out to evaluate the correlations. The results show that the optimum plate-to-plate spacing for a counter flow heat exchanger is smaller than that for parallel flow, while the maximum heat transfer rate for a counter flow heat exchanger is larger than that for parallel flow.
Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.
2014-01-01
and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate...
7 CFR 1.187 - Rulemaking on maximum rates for attorney fees.
2010-01-01
... the types of proceedings in which the rate should be used. It also should explain fully the reasons... certain types of proceedings), the Department may adopt regulations providing that attorney fees may be awarded at a rate higher than $125 per hour in some or all of the types of proceedings covered by...
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
(no author listed)
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989.
[2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447.
[3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502.
[4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368.
[5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232.
[6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45.
[7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974.
[8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
McCarthy, C M; Taylor, M A; Dennis, M W
1987-01-01
Mycobacterium avium is a human pathogen which may cause either chronic or disseminated disease and the organism exhibits a slow rate of growth. This study provides information on the growth rate of the organism in chronically infected mice and its maximal growth rate in vitro. M. avium was grown in continuous culture, limited for nitrogen with 0.5 mM ammonium chloride and dilution rates that ranged from 0.054 to 0.153 h-1. The steady-state concentration of ammonia nitrogen and M. avium cells for each dilution rate were determined. The bacterial saturation constant for growth-limiting ammonia was 0.29 mM (4 micrograms nitrogen/ml) and, from this, the maximal growth rate for M. avium was estimated to be 0.206 h-1 or a doubling time of 3.4 h. BALB/c mice were infected intravenously with 3 x 10(6) colony-forming units and a chronic infection resulted, typical of virulent M. avium strains. During a period of 3 months, the number of mycobacteria remained constant in the lungs, but increased 30-fold and 8,900-fold, respectively, in the spleen and mesenteric lymph nodes. The latter increase appeared to be due to proliferation in situ. The generation time of M. avium in the mesenteric lymph nodes was estimated to be 7 days.
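The chemostat arithmetic in the abstract above follows the standard Monod model: at steady state the dilution rate D equals the specific growth rate, mu = mu_max * S / (K_s + S), which can be inverted for mu_max; the doubling time is ln 2 / mu. A minimal sketch consistent with the reported values (0.206 h⁻¹ and a 3.4 h doubling time), with illustrative function names:

```python
from math import log

def mu_max_from_chemostat(dilution_rate, s_residual, k_s):
    """Invert the Monod relation D = mu_max * S / (K_s + S),
    valid at chemostat steady state where mu = D."""
    return dilution_rate * (k_s + s_residual) / s_residual

def doubling_time(mu):
    """Doubling time for exponential growth at specific rate mu (1/h)."""
    return log(2) / mu
```

For example, at a dilution rate of 0.1 h⁻¹ with residual substrate equal to K_s, the inferred mu_max is 0.2 h⁻¹; and doubling_time(0.206) reproduces the abstract's 3.4 h figure.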
Quinn, T Alexander; Kohl, Peter
2016-12-01
Mechanical stimulation (MS) represents a readily available, non-invasive means of pacing the asystolic or bradycardic heart in patients, but benefits of MS at higher heart rates are unclear. Our aim was to assess the maximum rate and sustainability of excitation by MS vs. electrical stimulation (ES) in the isolated heart under normal physiological conditions. Trains of local MS or ES at rates exceeding intrinsic sinus rhythm (overdrive pacing; lowest pacing rates 2.5±0.5 Hz) were applied to the same mid-left ventricular free-wall site on the epicardium of Langendorff-perfused rabbit hearts. Stimulation rates were progressively increased, with a recovery period of normal sinus rhythm between each stimulation period. Trains of MS caused repeated focal ventricular excitation from the site of stimulation. The maximum rate at which MS achieved 1:1 capture was lower than during ES (4.2±0.2 vs. 5.9±0.2 Hz, respectively). At all overdrive pacing rates for which repetitive MS was possible, 1:1 capture was reversibly lost after a finite number of cycles, even though same-site capture by ES remained possible. The number of MS cycles until loss of capture decreased with rising stimulation rate. If interspersed with ES, the number of MS to failure of capture was lower than for MS only. In this study, we demonstrate that the maximum pacing rate at which MS can be sustained is lower than that for same-site ES in isolated heart, and that, in contrast to ES, the sustainability of successful 1:1 capture by MS is limited. The mechanism(s) of differences in MS vs. ES pacing ability, potentially important for emergency heart rhythm management, are currently unknown, thus warranting further investigation. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
Yun, Dongfang
2016-01-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by probing the sharpness of estimates on the growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical, critical and supercritical regime. First, we obtain estimates on these rates of growth and then show that these estimates are sharp up to numerical prefactors. In particular, we conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. In addition, nontrivial be...
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, and some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as specified in the LIL for i.i.d. partial sums and thus cannot be improved.
Riisgård, Hans Ulrik; Larsen, Poul Scheel; Pleissner, Daniel
2014-01-01
rate (F, l h⁻¹), W (g), and L (mm) as described by the equations F_W = aW^b and F_L = cL^d, respectively. This is done by using available and new experimental laboratory data on M. edulis obtained by members of the same research team using different methods and controlled diets of cultivated algal cells…
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high OLR, from 3.7 to 12.9 kg-VS m⁻³ day⁻¹, for 225 days. Periods without organic loading were arranged between the loading periods. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m⁻³ day⁻¹ was achieved with a high VS reduction (91.8%) and high methane yield (455 mL g-VS⁻¹). The cell density increased in the periods without organic loading, reaching 10.9×10¹⁰ cells mL⁻¹ on day 187, around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e^(0.1679x), r² = 0.996, P < 0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m⁻³ day⁻¹ was obtained for mesophilic single-stage wet anaerobic digestion able to maintain stable operation with high methane yield and VS reduction.
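A correlation of the form y = a·e^(bx), as reported above for OLR vs. saturated TSS, is conventionally obtained by linear least squares on log y. A minimal sketch of that standard log-linear fit (not necessarily the authors' exact procedure):

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by ordinary least squares on log(y).

    Valid when all y > 0; returns the coefficients (a, b).
    """
    b, log_a = np.polyfit(np.asarray(x), np.log(np.asarray(y)), 1)
    return np.exp(log_a), b
```

Applied to points generated from the abstract's reported curve (a = 17.3, b = 0.1679), the fit recovers both coefficients to numerical precision.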
Validity of heart rate based nomogram for estimation of maximum oxygen uptake in Indian population.
Kumar, S Krishna; Khare, P; Jaryal, A K; Talwar, A
2012-01-01
Maximal oxygen uptake (VO2max) during a graded maximal exercise test is the objective method to assess cardiorespiratory fitness. Maximal oxygen uptake testing is limited to only a few laboratories as it requires trained personnel and strenuous effort by the subject. At the population level, submaximal tests have been developed to derive VO2max indirectly, based on heart rate based nomograms, or it can be calculated using anthropometric measures. These heart rate based predicted standards have been developed for western populations and are used routinely to predict VO2max in the Indian population. In the present study VO2max was directly measured by a maximal exercise test using a bicycle ergometer and was compared with VO2max derived from recovery heart rate in the Queen's College step test (QCST) (PVO2max I) and with VO2max derived from the Wasserman equation based on anthropometric parameters and age (PVO2max II) in a well defined age group of healthy male adults from New Delhi. The values of directly measured VO2max showed no significant correlation either with the VO2max estimated by QCST or with the VO2max predicted by the Wasserman equation. The Bland and Altman limits-of-agreement approach revealed that the limits of agreement between directly measured VO2max and PVO2max I or PVO2max II were large, indicating inapplicability of the prediction equations of western populations to the population under study. Thus it is evident that there is an urgent need to develop a nomogram for the Indian population, perhaps even for different ethnic sub-populations in the country.
Longitudinal Examination of Age-Predicted Symptom-Limited Exercise Maximum Heart Rate
Zhu, Na; Suarez, Jose; Sidney, Steve; Sternfeld, Barbara; Schreiner, Pamela J.; Carnethon, Mercedes R.; Lewis, Cora E.; Crow, Richard S.; Bouchard, Claude; Haskell, William; Jacobs, David R.
2010-01-01
Purpose: To estimate the association of age with maximal heart rate (MHR). Methods: Data were obtained in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants were black and white men and women aged 18-30 in 1985-86 (year 0). A symptom-limited maximal graded exercise test was completed at years 0, 7, and 20 by 4969, 2583, and 2870 participants, respectively. After exclusions, 9622 eligible tests remained. Results: In all 9622 tests, estimated MHR (eMHR, beats/minute) had a quadratic relation to age in the age range 18 to 50 years, eMHR = 179 + 0.29*age - 0.011*age². The age-MHR association was approximately linear in the restricted age ranges of consecutive tests. In 2215 people who completed both the year 0 and year 7 tests (age range 18 to 37), eMHR = 189 - 0.35*age; and in 1574 people who completed both the year 7 and year 20 tests (age range 25 to 50), eMHR = 199 - 0.63*age. In the lowest baseline BMI quartile, the rate of decline was 0.20 beats/minute/year between years 0-7 and 0.51 beats/minute/year between years 7-20, while in the highest baseline BMI quartile there was a linear rate of decline of approximately 0.7 beats/minute/year over the full age range of 18 to 50 years. Conclusion: Clinicians making exercise prescriptions should be aware that the loss of symptom-limited MHR is much slower in young adulthood and more pronounced in later adulthood. In particular, MHR loss is very slow in those with the lowest BMI below age 40. PMID:20639723
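The three fitted eMHR equations quoted in this abstract are directly computable. A small sketch (coefficients taken from the abstract; function names are my own):

```python
def emhr_quadratic(age):
    """Cross-sectional CARDIA fit over ages 18-50 (beats/minute):
    eMHR = 179 + 0.29*age - 0.011*age^2."""
    return 179 + 0.29 * age - 0.011 * age ** 2

def emhr_linear_y0_y7(age):
    """Linear fit for ages 18-37 (participants with year 0 and year 7 tests)."""
    return 189 - 0.35 * age

def emhr_linear_y7_y20(age):
    """Linear fit for ages 25-50 (participants with year 7 and year 20 tests)."""
    return 199 - 0.63 * age
```

Note that the quadratic fit's slope, 0.29 - 0.022*age, turns negative just past age 13, so within the studied range (18-50) eMHR always declines with age, and faster at older ages, consistent with the linear fits' slopes of -0.35 and -0.63.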
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and with no variance or correlation implemented. It is shown that the model...... are used for parameter estimation. The data is log-transformed such that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model...... that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded...
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Snelling, Edward P; Seymour, Roger S; Matthews, Philip G D; Runciman, Sue; White, Craig R
2011-10-01
The hemimetabolous migratory locust Locusta migratoria progresses through five instars to the adult, increasing in size from 0.02 to 0.95 g, a 45-fold change. Hopping locomotion occurs at all life stages and is supported by aerobic metabolism and provision of oxygen through the tracheal system. This allometric study investigates the effect of body mass (Mb) on oxygen consumption rate (MO2, μmol h⁻¹) to establish resting metabolic rate (MRO2), maximum metabolic rate during hopping (MMO2) and maximum metabolic rate of the hopping muscles (MMO2,hop) in first instar, third instar, fifth instar and adult locusts. Oxygen consumption rates increased throughout development according to the allometric equations MRO2 = 30.1Mb^(0.83±0.02), MMO2 = 155Mb^(1.01±0.02), MMO2,hop = 120Mb^(1.07±0.02) and, if adults are excluded, MMO2,juv = 136Mb^(0.97±0.02) and MMO2,juv,hop = 103Mb^(1.02±0.02). Increasing body mass by 20-45% with attached weights did not increase mass-specific MMO2 significantly at any life stage, although mean mass-specific hopping MO2 was slightly higher (ca. 8%) when juvenile data were pooled. The allometric exponents for all measures of metabolic rate are much greater than 0.75, and therefore do not support West, Brown and Enquist's optimised fractal network model, which predicts that metabolism scales with a 3/4-power exponent owing to limitations in the rate at which resources can be transported within the body.
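The allometric equations in this abstract can be evaluated for any body mass in the studied range. A brief sketch (coefficient pairs copied from the abstract's fitted equations; the factorial-scope calculation at the end is my own illustration, not a result reported in the abstract):

```python
def locust_mo2(mb, a, b):
    """Allometric oxygen consumption rate MO2 = a * Mb^b (umol h^-1),
    with body mass Mb in grams."""
    return a * mb ** b

# (coefficient a, exponent b) pairs from the abstract's fitted equations:
RESTING = (30.1, 0.83)    # MRO2, resting metabolic rate
HOPPING = (155, 1.01)     # MMO2, maximum metabolic rate during hopping
HOP_MUSCLE = (120, 1.07)  # MMO2,hop, maximum rate of the hopping muscles

# Illustrative factorial aerobic scope (MMO2 / MRO2) for a 0.95 g adult:
adult_scope = locust_mo2(0.95, *HOPPING) / locust_mo2(0.95, *RESTING)
```

Because the hopping exponent (1.01) exceeds the resting exponent (0.83), this ratio grows with body mass across development.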
Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier
2011-10-01
Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
Kruse, Marcelo Lapa; Kruse, José Cláudio Lupi; Leiria, Tiago Luiz Luz; Pires, Leonardo Martins; Gensas, Caroline Saltz; Gomes, Daniel Garcia; Boris, Douglas; Mantovani, Augusto; Lima, Gustavo Glotz de
2014-12-01
Occurrences of asymptomatic atrial fibrillation (AF) are common. It is important to identify AF because it increases morbidity and mortality. 24-hour Holter has been used to detect paroxysmal AF (PAF). The objective of this study was to investigate the relationship between occurrence of PAF in 24-hour Holter and the symptoms of the population studied. Cross-sectional study conducted at a cardiology hospital. 11,321 consecutive 24-hour Holter tests performed at a referral service were analyzed. Patients with pacemakers or with AF throughout the recording were excluded. There were 75 tests (0.67%) with PAF. The mean age was 67 ± 13 years and 45% were female. The heart rate (HR) over the 24 hours was a minimum of 45 ± 8 bpm, mean of 74 ± 17 bpm and maximum of 151 ± 32 bpm. Among the tests showing PAF, only 26% had symptoms. The only factor tested that showed a correlation with symptomatic AF was maximum HR (165 ± 34 versus 147 ± 30 bpm) (P = 0.03). Use of beta blockers had a protective effect against occurrence of PAF symptoms (odds ratio: 0.24, P = 0.031). PAF is a rare event in 24-hour Holter. The maximum HR during the 24 hours was the only factor correlated with symptomatic AF, and use of beta blockers had a protective effect against AF symptom occurrence.
Karia Ritesh M
2012-04-01
Full Text Available Objective: The objective of this study was to study the effect of smoking on Peak Expiratory Flow Rate and Maximum Voluntary Ventilation in apparently healthy tobacco smokers and non-smokers, and to compare the results of both groups to assess the effects of smoking. Method: The present study was carried out with the computerized Pulmonary Function Test software 'Spiro Excel' on 50 non-smokers and 50 smokers. Smokers were divided into three groups. A full series of tests takes 4 to 5 minutes. Tests were compared between the smoker and non-smoker groups by the unpaired t test. Statistical significance was indicated by a 'p' value < 0.05. Results: The actual values of Peak Expiratory Flow Rate and Maximum Voluntary Ventilation are significantly lower in all smoker groups than in non-smokers. The difference in actual mean values increases as the degree of smoking increases. [National J of Med Res 2012; 2(2): 191-193]
Siegler, Jason C; Marshall, Paul W M; Raftry, Sean; Brooks, Cristy; Dowswell, Ben; Romero, Rick; Green, Simon
2013-12-01
The purpose of this investigation was to assess the influence of sodium bicarbonate supplementation on maximal force production, rate of force development (RFD), and muscle recruitment during repeated bouts of high-intensity cycling. Ten male and female (n = 10) subjects completed two fixed-cadence, high-intensity cycling trials. Each trial consisted of a series of 30-s efforts at 120% peak power output (maximum graded test) that were interspersed with 30-s recovery periods until task failure. Prior to each trial, subjects consumed 0.3 g/kg sodium bicarbonate (ALK) or placebo (PLA). Maximal voluntary contractions were performed immediately after each 30-s effort. Maximal force (F max) was calculated as the greatest force recorded over a 25-ms period throughout the entire contraction duration while maximal RFD (RFD max) was calculated as the greatest 10-ms average slope throughout that same contraction. F max declined similarly in both the ALK and PLA conditions, with baseline values (ALK: 1,226 ± 393 N; PLA: 1,222 ± 369 N) declining nearly 295 ± 54 N [95% confidence interval (CI) = 84-508 N; P force vs. maximum rate of force development during a whole body fatiguing task.
Larson, Eric D.; St. Clair, Joshua R.; Sumner, Whitney A.; Bannister, Roger A.; Proenza, Cathy
2013-01-01
An inexorable decline in maximum heart rate (mHR) progressively limits human aerobic capacity with advancing age. This decrease in mHR results from an age-dependent reduction in “intrinsic heart rate” (iHR), which is measured during autonomic blockade. The reduced iHR indicates, by definition, that pacemaker function of the sinoatrial node is compromised during aging. However, little is known about the properties of pacemaker myocytes in the aged sinoatrial node. Here, we show that depressed excitability of individual sinoatrial node myocytes (SAMs) contributes to reductions in heart rate with advancing age. We found that age-dependent declines in mHR and iHR in ECG recordings from mice were paralleled by declines in spontaneous action potential (AP) firing rates (FRs) in patch-clamp recordings from acutely isolated SAMs. The slower FR of aged SAMs resulted from changes in the AP waveform that were limited to hyperpolarization of the maximum diastolic potential and slowing of the early part of the diastolic depolarization. These AP waveform changes were associated with cellular hypertrophy, reduced current densities for L- and T-type Ca2+ currents and the “funny current” (If), and a hyperpolarizing shift in the voltage dependence of If. The age-dependent reduction in sinoatrial node function was not associated with changes in β-adrenergic responsiveness, which was preserved during aging for heart rate, SAM FR, L- and T-type Ca2+ currents, and If. Our results indicate that depressed excitability of individual SAMs due to altered ion channel activity contributes to the decline in mHR, and thus aerobic capacity, during normal aging. PMID:24128759
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
Rezaeian Mahdi
2015-01-01
Full Text Available Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples of Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatile, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.
Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N
2014-06-01
Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipox(max)) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg.m⁻²) were separated into two groups with either a low or a high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipox(max) and maximum lipid oxidation rate (MLOR) were determined during a submaximum incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipox(max) and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO(2max) vs 54 ± 4% VO(2max), and 4.8 ± 0.6 mg min⁻¹ kg FFM⁻¹ vs 8.4 ± 0.9 mg min⁻¹ kg FFM⁻¹, respectively). Thus, in normal-weight women, a predominantly abdominal fat mass distribution compared with a predominantly peripheral fat mass distribution is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by the lower Lipox(max) and MLOR. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
diCenzo, George C; Sharthiya, Harsh; Nanda, Anish; Zamani, Maryam; Finan, Turlough M
2017-09-15
Maintenance of cellular phosphate homeostasis is essential for cellular life. The PhoU protein has emerged as a key regulator of this process in bacteria, and it is suggested to modulate phosphate import by PstSCAB and control activation of the phosphate limitation response by the PhoR-PhoB two-component system. However, a proper understanding of PhoU has remained elusive due to numerous complications of mutating phoU, including loss of viability and the genetic instability of the mutants. Here, we developed two sets of strains of Sinorhizobium meliloti that overcame these limitations and allowed a more detailed and comprehensive analysis of the biological and molecular activities of PhoU. The data showed that phoU cannot be deleted in the presence of phosphate unless PstSCAB is inactivated also. However, phoU deletions were readily recovered in phosphate-free media, and characterization of these mutants revealed that addition of phosphate to the environment resulted in toxic levels of PstSCAB-mediated phosphate accumulation. Phosphate uptake experiments indicated that PhoU significantly decreased the PstSCAB transport rate specifically in phosphate-replete cells but not in phosphate-starved cells and that PhoU could rapidly respond to elevated environmental phosphate concentrations and decrease the PstSCAB transport rate. Site-directed mutagenesis results suggested that the ability of PhoU to respond to phosphate levels was independent of the conformation of the PstSCAB transporter. Additionally, PhoU-PhoU and PhoU-PhoR interactions were detected using a bacterial two-hybrid screen. We propose that PhoU modulates PstSCAB and PhoR-PhoB in response to local, internal fluctuations in phosphate concentrations resulting from PstSCAB-mediated phosphate import. IMPORTANCE: Correct maintenance of cellular phosphate homeostasis is critical in all kingdoms of life and in bacteria involves the PhoU protein. This work provides novel insights into the role of the Sinorhizobium
Jorge Cuadrado Reyes
2011-05-01
Full Text Available Abstract: This research developed an algorithm for calculating the maximum heart rate (max. HR) for players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3 years) from a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to help find a max. HR prediction equation from the max. HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure max. HR in real game situations, avoiding non-specific analytical tests and therefore laboratory testing. Key words: workout control, functional evaluation, prediction equation.
Ma, Jingxing; Mungoni, Lucy Jubeki; Verstraete, Willy; Carballa, Marta
2009-07-01
The maximum propionic acid (HPr) removal rate (R(HPr)) was investigated in two lab-scale Upflow Anaerobic Sludge Bed (UASB) reactors. Two feeding strategies were applied, modifying the hydraulic retention time (HRT) in the UASB(HRT) and the influent HPr concentration in the UASB(HPr), respectively. The experiment was divided into three main phases: phase 1, influent with only HPr; phase 2, HPr with macro-nutrient supplementation; and phase 3, HPr with macro- and micro-nutrient supplementation. During phase 1, the maximum R(HPr) achieved was less than 3 g HPr-COD L⁻¹ d⁻¹ in both reactors. However, the subsequent supplementation of macro- and micro-nutrients during phases 2 and 3 allowed the R(HPr) to increase up to 18.1 and 32.8 g HPr-COD L⁻¹ d⁻¹, respectively, corresponding to an HRT of 0.5 h in the UASB(HRT) and an influent HPr concentration of 10.5 g HPr-COD L⁻¹ in the UASB(HPr). Therefore, the high operational capacity of these reactor systems, specifically converting HPr at high throughput and high influent HPr levels, was demonstrated. Moreover, the presence of macro- and micro-nutrients is clearly essential for stable and high HPr removal in anaerobic digestion.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
2010-07-01
... PREPARING TOMORROW'S TEACHERS TO USE TECHNOLOGY § 614.6 What is the maximum indirect cost rate for all... requirements; or (3) Charged by the grantee to another Federal award. (Authority: 20 U.S.C. 6832)...
Poryazov, V. A.; Krainov, A. Yu.
2016-05-01
A physicomathematical model of combustion of a metallized composite solid propellant based on ammonium perchlorate has been presented. The model takes account of the thermal effect of decomposition of a condensed phase (c phase), convection, diffusion, the exothermal chemical reaction in a gas phase, the heating and combustion of aluminum particles in the gas flow, and the velocity lag of the particles behind the gas. The influence of the granulometric composition of aluminum particles escaping from the combustion surface on the linear rate of combustion has been investigated. It has been shown that information not only on the kinetics of chemical reactions in the gas phase, but also on the granulometric composition of aluminum particles escaping from the surface of the c phase into the gas, is of importance for determination of the linear rate of combustion.
Sada, H
1978-10-01
Effects of phentolamine (13.3, 26.5 and 53.0 μM), alprenolol (3.5, 7.0 and 17.5 μM) and prenylamine (2.4, 4.8 and 11.9 μM) on the transmembrane potential were studied in isolated guinea-pig papillary muscles superfused with Tyrode's solution. 1. Phentolamine, alprenolol and prenylamine reduced the maximum rate of rise of the action potential (V̇max) dose-dependently. Higher concentrations of phentolamine and prenylamine caused a loss of plateau in a majority of the preparations. Resting potential was not altered by any of the drugs. Readmission of drug-free Tyrode's solution reversed the changes induced by 13.3 μM phentolamine and all concentrations of alprenolol almost completely, but those induced by higher concentrations of phentolamine and all concentrations of prenylamine only slightly. 2. V̇max at steady state was increased at lower driving frequencies (0.5 and 0.25 Hz) and decreased at higher ones (2-5 Hz) in comparison with that at 1 Hz. Such changes were all exaggerated by the above drugs, particularly by prenylamine. 3. Prenylamine and, to a lesser degree, phentolamine and alprenolol dose-dependently delayed the recovery process of V̇max in premature responses. 4. V̇max in the first response after interruption of stimulation recovered toward the predrug value in the presence of all three drugs. The time constants of the recovery process ranged between 10.5 and 15.0 s for phentolamine and between 4.5 and 15.5 s for alprenolol; the time constant of the main component was estimated to be approximately 2 s for the recovery process with prenylamine. 5. On the basis of the model recently proposed by Hondeghem and Katzung (1977), it is suggested that the drug molecules associate with open sodium channels and dissociate slowly from closed channels, and that the inactivation parameter in the drug-associated channels is shifted in the hyperpolarizing direction.
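The recovery of V̇max toward its predrug value, characterized in this abstract by time constants, is commonly modeled as a mono-exponential process. The sketch below assumes that functional form purely for illustration; it is not the authors' fitted model, and the example values are hypothetical:

```python
import math

def vmax_recovery(t, tau, v_pre, v_drug):
    """Hypothetical mono-exponential recovery of Vmax after interruption
    of stimulation: Vmax(t) = v_pre - (v_pre - v_drug) * exp(-t / tau),
    where tau is the recovery time constant (seconds), v_drug the depressed
    value at t = 0, and v_pre the predrug value approached as t grows."""
    return v_pre - (v_pre - v_drug) * math.exp(-t / tau)

# With tau = 12 s (within the range reported for phentolamine) and
# hypothetical Vmax values of 120 (depressed) and 200 (predrug) V/s,
# recovery is ~63% complete after one time constant:
v_after_tau = vmax_recovery(12.0, 12.0, 200.0, 120.0)
```

Under this form, prenylamine's ~2 s main time constant implies near-complete V̇max recovery within about 10 s of rest, whereas the 10.5-15 s constants of phentolamine imply recovery over a minute or so.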
Mazhar A. Memon
2016-04-01
Full Text Available ABSTRACT Objective: To evaluate the correlation between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This is a cross-sectional study conducted at a university hospital. Sixty-seven adult male patients > 50 years of age were enrolled in the study after signing an informed consent. Qmax and voided volume were recorded on the uroflowmetry graph and at the same time VPSS was assessed. The education level was assessed in various defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1 ± 10.1 years (median 68). The mean voided volume on uroflowmetry was 268 ± 160 mL (median 208) and the mean Qmax was 9.6 ± 4.96 mL/sec (median 9.0). The mean VPSS score was 11.4 ± 2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative (Pearson's) correlation between VPSS and Qmax (r = -0.848, p < 0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis done for the independent variables showed no significant correlation between the VPSS and independent factors including age (p = 0.27), LOE (p = 0.941) and V.V (p = 0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with limited educational background can complete the VPSS without assistance.
Blok, Chris; Jackson, Brian E; Guo, Xianfeng; de Visser, Pieter H B; Marcelis, Leo F M
2017-01-01
Growing on rooting media other than soils in situ -i.e., substrate-based growing- allows for higher yields than soil-based growing as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields as transport rates of water and nutrients in water surpass those in substrate, even though the transport of oxygen may be more complex. Transport rates can only limit growth when they are below a rate corresponding to maximum plant uptake. Our first objective was to compare Chrysanthemum growth performance for three water-based growing systems with different irrigation. We compared; multi-point irrigation into a pond (DeepFlow); one-point irrigation resulting in a thin film of running water (NutrientFlow) and multi-point irrigation as droplets through air (Aeroponic). Second objective was to compare press pots as propagation medium with nutrient solution as propagation medium. The comparison included DeepFlow water-rooted cuttings with either the stem 1 cm into the nutrient solution or with the stem 1 cm above the nutrient solution. Measurements included fresh weight, dry weight, length, water supply, nutrient supply, and oxygen levels. To account for differences in radiation sum received, crop performance was evaluated with Radiation Use Efficiency (RUE) expressed as dry weight over sum of Photosynthetically Active Radiation. The reference, DeepFlow with substrate-based propagation, showed the highest RUE, even while the oxygen supply provided by irrigation was potentially growth limiting. DeepFlow with water-based propagation showed 15-17% lower RUEs than the reference. NutrientFlow showed 8% lower RUE than the reference, in combination with potentially limiting irrigation supply of nutrients and oxygen. Aeroponic showed RUE levels similar to the reference and Aeroponic had non-limiting irrigation supply of water, nutrients, and oxygen. Water-based propagation affected the subsequent
Human Resources Division
2001-01-01
HR Division wishes to clarify to members of the personnel that the allowance for a dependent child continues to be paid during all training courses ('stages'), apprenticeships, 'contrats de qualification', sandwich courses or other courses of similar nature. Any payment received for these training courses, including apprenticeships, is however deducted from the amount reimbursable as school fees. HR Division would also like to draw the attention of members of the personnel to the fact that any contract of employment will lead to the suppression of the child allowance and of the right to reimbursement of school fees.
34 CFR 694.9 - What is the maximum indirect cost rate for an agency of a State or local government?
2010-07-01
... for an agency of a State or local government? Notwithstanding 34 CFR 75.560-75.562 and 34 CFR 80.22, the maximum indirect cost rate that an agency of a State or local government receiving funds under... a State or local government? 694.9 Section 694.9 Education Regulations of the Offices of...
Lee, Sang-Yong; Ortega, Antonio
2000-04-01
We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored for the given memory size and require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control, in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select online bit allocation for each image in a randomly given set of images provides near-constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, we show that this algorithm removes the suboptimal behavior.
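The 'buffer-like' memory constraint can be illustrated with a toy allocation rule. This sketch is only a simplified stand-in for the authors' algorithm (it omits the adaptive reference and the minimax criterion): each arriving image is granted an equal share of the memory not yet reserved for the remaining images.

```python
def online_bit_budget(used, images_done, total_images, memory_size):
    """One step of an illustrative 'buffer-like' rate-control rule
    (not the authors' algorithm): grant the current image an equal
    share of the memory still available for the remaining images."""
    remaining = total_images - images_done
    return (memory_size - used) / remaining

# Example: 100 units of memory budgeted over 4 images. If each image
# then consumes exactly its grant, every image receives 25 units and
# the memory is never overrun.
used = 0.0
grants = []
for i in range(4):
    g = online_bit_budget(used, i, 4, 100.0)
    grants.append(g)
    used += g
```

In a real encoder the actual bits consumed would deviate from the grant, and the rule automatically redistributes the surplus or deficit over the images still to come, which is exactly the buffer-like behavior the abstract describes.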
Steffensen, John Fleng
2010-01-01
John Fleng Steffensen and Anders Drud Jordan. Aquaculture 2010 - San Diego - Physiological Insights Towards Improving Fish Culture. Hypoxia is an increasing problem in near-coastal areas and estuaries. Hypoxia can also be a problem in aquaculture systems with a high degree of water recirculation... and protein synthesis leading to the deposition and turnover of tissue components. Oxygen consumption of juvenile cod was measured with computerized intermittent respirometry at 10 °C. A specially designed Plexiglas respirometer with a "chimney" at either end allowed feeding without disturbing and stressing...
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Young chicken and squab slaughter... INSPECTION REGULATIONS Operating Procedures § 381.67 Young chicken and squab slaughter inspection rate... inspector per minute under the traditional inspection procedure for the different young chicken and...
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm, which splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality when an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
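The splitting step itself is simple to illustrate. The sketch below is a minimal random-splitting variant (the published tool also offers tree-based splitting); the function name and signature are hypothetical, not FastMG's API.

```python
import random

def split_alignment(seq_ids, max_size, seed=0):
    """Partition a list of sequence identifiers into non-overlapping
    sub-alignments of at most `max_size` sequences each.  Random shuffling
    makes the sub-alignments unbiased samples of the full alignment."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    ids = list(seq_ids)
    rng.shuffle(ids)
    return [ids[i:i + max_size] for i in range(0, len(ids), max_size)]
```

Rate estimation is then run on each sub-alignment independently (the expensive tree-building step scales much better on many small alignments than on one large one), and the per-split counts are pooled into a single matrix.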
Schiefelbein, Sarah; Fröhlich, Alexander; John, Gernot T; Beutler, Falco; Wittmann, Christoph; Becker, Judith
2013-08-01
Dissolved oxygen plays an essential role in aerobic cultivation, especially because of its low solubility. Under unfavorable conditions of mixing and vessel geometry it can become limiting. This, however, is difficult to predict, so choosing an optimal experimental set-up is challenging. To overcome this, we developed a method that allows a robust prediction of the dissolved oxygen concentration during aerobic growth. It integrates newly established mathematical correlations for determining the volumetric gas-liquid mass transfer coefficient (kLa) in disposable shake-flasks from the filling volume, the vessel size, and the agitation speed. Tested on the industrial production organism Corynebacterium glutamicum, this enabled a reliable design of culture conditions and allowed prediction of the maximum possible cell concentration without oxygen limitation.
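The underlying mass balance is simple: growth stays oxygen-unlimited as long as the transfer capacity kLa·C* covers the demand qO2·X. A hedged sketch of that limit (the paper's fitted shake-flask correlations for kLa are not reproduced here; the parameter values in the test are illustrative):

```python
def max_cell_density(kla_per_h, c_star_mg_l, qo2_mg_per_g_h):
    """Maximum biomass concentration (g/L) sustainable without oxygen
    limitation: at the limit, oxygen transfer kLa * C* equals oxygen
    demand qO2 * X, so X_max = kLa * C* / qO2."""
    return kla_per_h * c_star_mg_l / qo2_mg_per_g_h
```

For example, kLa = 100 1/h, an oxygen saturation of 7.5 mg/L, and a specific uptake rate of 250 mg O2/(g·h) give a maximum of 3 g/L; above that density the culture becomes oxygen-limited.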
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2012-01-01
We analyze the relationship between maximum cluster mass, M_max, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2) and star formation rate (Sigma_SFR) in the flocculent galaxy M33, using published gas data and a catalog of more than 600 young star clusters in its disk. By comparing the radial distributions of gas and most massive cluster masses, we find that M_max is proportional to Sigma_gas^4.7, M_max is proportional to Sigma_H2^1.3, and M_max is proportional to Sigma_SFR^1.0. We rule out that these correlations result from sample size; hence, the change of the maximum cluster mass must be due to physical causes.
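Scalings such as "M_max proportional to Sigma_gas^4.7" are typically recovered as slopes in log-log space. A minimal least-squares sketch (illustrative, not the authors' fitting code):

```python
import math

def powerlaw_exponent(x, y):
    """Least-squares slope of log(y) against log(x), i.e. the exponent b
    in y = a * x**b.  All inputs must be positive."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

Applied to binned radial profiles of Sigma_gas and the most massive cluster in each bin, the returned slope is the exponent of the power-law scaling.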
Lemaire, R.; Menanteau, S.
2016-01-01
This paper deals with the thorough characterization of a new experimental test bench designed to study the devolatilization and oxidation of pulverized fuel particles in a wide range of operating conditions. This lab-scale facility is composed of a fuel feeding system, the functioning of which has been optimized by computational fluid dynamics. It delivers a constant, time-independent mass flow rate of fuel particles, which are pneumatically transported to the central injector of a hybrid McKenna burner using a carrier gas stream that can be inert or oxidant depending on the targeted application. A premixed propane/air laminar flat flame stabilized on the porous part of the burner generates the hot gases ensuring the heating of the central coal/carrier-gas jet with a thermal gradient similar to those found in industrial combustors (>10^5 K/s). In the present work, results obtained from numerical simulations performed a priori to characterize the velocity and temperature fields in the reaction chamber have been analyzed and compared with experimental measurements carried out by coupling particle image velocimetry, thermocouple, and two-color pyrometry measurements so as to validate the order of magnitude of the heating rate delivered by the new test bench. Finally, the main features of the flat flame reactor we developed are discussed with respect to those of another laboratory-scale system designed to study coal devolatilization at a high heating rate.
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Dieho, K; van den Bogert, B; Henderson, G; Bannink, A; Ramiro-Garcia, J; Smidt, H; Dijkstra, J
2017-04-01
Changes in rumen microbiota and in situ degradation kinetics were studied in 12 rumen-cannulated Holstein Friesian dairy cows during the dry period and early lactation. The effect of a rapid (RAP) or gradual (GRAD) postpartum (pp) rate of increase of concentrate allowance was also investigated. Cows were fed for ad libitum intake and had free access to a mixed ration consisting of chopped wheat straw (dry period only), grass silage, corn silage, and soybean meal. Treatment consisted of either a rapid (1.0 kg of dry matter/d; n = 6) or gradual (0.25 kg of dry matter/d; n = 6) increase of concentrate allowance (up to 10.9 kg of dry matter/d), starting at 4 d pp. In whole rumen contents, bacterial community composition was assessed using samples from 50, 30, and 10 d antepartum (ap) and 3, 9, 16, 30, 44, 60, and 80 d pp, and protozoal and archaeal community composition using samples from 10 d ap and 16 and 44 d pp. Intake of fermentable organic matter, starch, and sugar was temporarily greater in RAP than in GRAD at 16 d pp. Bacterial community richness was higher during the dry period than during lactation. A rapid increase in concentrate allowance decreased bacterial community richness at 9 and 16 d pp compared with a gradual increase, whereas from 30 d pp onward the richness of RAP and GRAD was similar. In general, the relative abundances of Bacteroidales and Aeromonadales were greater, and those of Clostridiales, Fibrobacterales, and Spirochaetales smaller, during lactation than during the dry period. An interaction between treatment and sampling day was observed for some bacterial community members and most of the protozoal and archaeal community members. Transition to lactation increased the relative abundance of Epidinium and Entodinium but reduced the relative abundance of Ostracodinium. Archaea from the genus Methanobrevibacter dominated during both the dry period and lactation. However, during lactation the abundance of the
Lovell, Dale I; Cuneo, Ross; Gass, Greg C
2010-06-01
This study examined the effect of strength training (ST) and short-term detraining on maximum force and rate of force development (RFD) in previously sedentary, healthy older men. Twenty-four older men (70-80 years) were randomly assigned to a ST group (n = 12) or a control (C) group (n = 12). Training consisted of three sets of six to ten repetitions on an incline squat at 70-90% of one repetition maximum, three times per week for 16 weeks, followed by 4 weeks of detraining. Regional muscle mass was assessed before and after training by dual-energy X-ray absorptiometry. Training increased RFD, maximum bilateral isometric force, force in 500 ms, upper leg muscle mass, and strength above pre-training values (14, 25, 22, 7, and 90%, respectively; P < 0.05). These results indicate that ST increases the maximum force and RFD of older men. However, older individuals may lose some neuromuscular performance after a period of short-term detraining, and resistance exercise should be performed on a regular basis to maintain training adaptations.
Malekifarsani, A; Skachek, M A
2009-10-01
shown that the concentrations of the following radionuclides are limited by solubility and precipitate around the waste and buffer: U, Np, Ra, Sm, Zr, Se, Tc, and Pd. The sensitivity of maximum release rates to precipitation shows that nuclides such as Cs-135, Nb-94, Nb-93m, Zr-93, Sn-126, Th-230, Pu-240, Pu-242, Pu-239, Cm-245, Am-243, U-233, Ac-227, Pb-210, Pa-231, and Th-229 change very little when precipitation in the buffer material is eliminated, whereas nuclides such as Se-79, Tc-99, Pd-107, Th-232, U-236, U-233, Ra-226, Np-237, U-235, U-234, and U-238 show substantial changes in maximum release rate compared with the case that takes precipitation into account. In the sensitivity analysis for stable isotopes (according to the inventory table), only some nuclides are present with their stable isotopes in the vitrified waste, and the calculation shows that the release rates of Pd-107 and Se-79 increase greatly when stable isotopes are eliminated. The sensitivity of maximum release rates to retardation by sorption shows that nuclides such as Pu-240, Pu-241, Pu-239, Cm-245, Am-241, Cm-246, and Am-243 increase at some times when retardation in the buffer material is eliminated, while U-235, U-233, and U-236 decrease slightly because their parents are short-lived and are released from the EBS before decaying to their daughters. If the characteristic time taken for a nuclide to diffuse across the buffer exceeds its half-life, then the release rate of that nuclide from the EBS will be attenuated by radioactive decay. Thus, retardation of the diffusion process due to sorption tends to reduce the release rates of short-lived nuclides more effectively than those of long-lived ones. For example, the release rates of Pu-240, Cm-246 and Am-241, which are relatively short-lived and strongly sorbing, are very small
Luis Eduardo Cruz-Martínez
2014-10-01
Background. Formulas to predict maximum heart rate (MHR) have been used for many years in different populations. Objective. To verify the significance and association of the Tanaka and 220-age formulas when compared with real maximum heart rate. Materials and methods. 30 subjects (22 men, 8 women) between 18 and 30 years of age were evaluated on a cycle ergometer, and their real MHR values were statistically compared with the values from formulas currently used to predict MHR. Results. The results demonstrate that neither Tanaka (p=0.0026) nor 220-age (p=0.000003) predicts real MHR, nor does a linear association exist between them. Conclusions. Because these formulas overestimate real MHR, we suggest a correction of 6 bpm to the final result; this value is the median of the difference between the Tanaka value and the real MHR. Neither Tanaka (r=0.272) nor 220-age (r=0.276) is an adequate predictor of MHR during exercise at the elevation of Bogotá in subjects 18 to 30 years of age, although further study with a larger sample size is suggested.
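For reference, the two prediction formulas compared in the study, plus the suggested 6 bpm downward correction, can be written directly. The function names are ours, and the correction is the study's altitude-specific suggestion, not a general recommendation:

```python
def mhr_220(age):
    """Classic '220 minus age' estimate of maximum heart rate (bpm)."""
    return 220 - age

def mhr_tanaka(age):
    """Tanaka et al. (2001) estimate: HRmax = 208 - 0.7 * age (bpm)."""
    return 208 - 0.7 * age

def mhr_corrected(age):
    """Tanaka estimate minus the 6 bpm median overestimation reported by
    the study at Bogota's elevation (illustrative, per the abstract)."""
    return mhr_tanaka(age) - 6
```

For a 25-year-old, the two formulas give 195 and 190.5 bpm respectively; the corrected value is 184.5 bpm.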
Shaw, A; Takács, I; Pagilla, K R; Murthy, S
2013-10-15
The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology and use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models proved better for describing denitrification kinetics than assuming a fixed K(NO3) at low nitrate concentrations. The empirical model is more utilitarian, whereas the model based on Fick's law has a fundamental basis that enables the intrinsic K(NO3) to be estimated. In this study, data from 56 denitrification rate tests were analyzed, and it was found that the extant K(NO3) varied between 0.07 mgN/L and 1.47 mgN/L (5th and 95th percentiles, respectively) with an average of 0.47 mgN/L. In contrast, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mgN/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
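The empirical linear model amounts to replacing the fixed K(NO3) in the Monod expression with one that grows with the maximum rate. A sketch, where the intercept and slope values are illustrative placeholders rather than the study's fitted parameters:

```python
def denitrification_rate(no3_mg_l, r_max, k_intrinsic=0.01, alpha=0.05):
    """Monod rate with a rate-dependent half-saturation coefficient:
    K_extant = K_intrinsic + alpha * r_max (the empirical linear model;
    alpha is a hypothetical slope).  Concentrations in mgN/L."""
    k_extant = k_intrinsic + alpha * r_max
    return r_max * no3_mg_l / (k_extant + no3_mg_l)
```

The defining property of a half-saturation coefficient still holds: at a nitrate concentration equal to K_extant, the rate is exactly half of r_max.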
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology.
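The core calculations of the exercise, converting an oxygen-decline slope into a mass-specific uptake rate and deriving aerobic scope, can be sketched as follows. The volume correction assumes fish density of roughly 1 kg/L, and the function names are ours:

```python
def mo2(slope_mg_o2_per_l_h, respirometer_vol_l, fish_mass_kg):
    """Mass-specific oxygen consumption rate (mg O2 / kg / h) from the
    rate of oxygen decline during a closed phase of intermittent-flow
    respirometry.  The fish displaces roughly its mass in litres of water,
    so that volume is subtracted from the chamber volume."""
    effective_vol = respirometer_vol_l - fish_mass_kg
    return slope_mg_o2_per_l_h * effective_vol / fish_mass_kg

def aerobic_scope(smr, mmr):
    """Aerobic metabolic scope: the window between standard (SMR) and
    maximum (MMR) metabolic rate, in the same units as the inputs."""
    return mmr - smr
```

In the teaching exercise, SMR is taken from quiescent measurements and MMR from measurements immediately after exhaustive exercise; hypoxia lowers MMR while SMR stays roughly constant, so the computed scope shrinks.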
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2013-01-01
We analyze the relationship between maximum cluster mass and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2), neutral gas (Sigma_HI) and star formation rate (Sigma_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. We find for clusters older than 25 Myr that M_3rd, the median of the 5 most massive clusters, is proportional to Sigma_HI^0.4. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^0.6 and to Sigma_gas^0.5; there is no correlation with either Sigma_H2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but M_3rd is proportional to Sigma_g...
Kuster, Nils; Cristol, Jean-Paul; Cavalier, Etienne; Bargnoux, Anne-Sophie; Halimi, Jean-Michel; Froissart, Marc; Piéroni, Laurence; Delanaye, Pierre
2014-01-20
The National Kidney Disease Education Program group demonstrated that the MDRD equation is sensitive to creatinine measurement error, particularly at higher glomerular filtration rates. Thus, MDRD-based eGFR above 60 mL/min/1.73 m² should not be reported numerically. However, little is known about the impact of analytical error on CKD-EPI-based estimates. This study aimed to assess the impact of the analytical characteristics (bias and imprecision) of 12 enzymatic and 4 compensated Jaffe previously characterized creatinine assays on MDRD and CKD-EPI eGFR. In a simulation study, the impact of analytical error was assessed on a hospital population of 24,084 patients. The ability of each assay to correctly classify patients according to chronic kidney disease (CKD) stages was evaluated. For eGFR between 60 and 90 mL/min/1.73 m², both equations were sensitive to analytical error. Compensated Jaffe assays displayed high bias in this range and led to poorer sensitivity/specificity for classification according to CKD stages than enzymatic assays. Compared with the MDRD equation, the CKD-EPI equation decreases the impact of analytical error in creatinine measurement above 90 mL/min/1.73 m². Compensated Jaffe creatinine assays lead to important errors in eGFR and should be avoided. Accurate enzymatic assays allow estimation of eGFR up to 90 mL/min/1.73 m² with the MDRD equation and 120 mL/min/1.73 m² with the CKD-EPI equation.
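For context, the CKD-EPI 2009 creatinine equation referred to above has a simple closed form. This sketch uses the published coefficients as we recall them and is for illustration only; it should be checked against the original publication before any real use, clinical or otherwise:

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """CKD-EPI (2009) creatinine-based eGFR, mL/min/1.73 m^2.
    Coefficients as published: kappa and alpha are sex-specific."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha      # below-kappa branch
            * max(ratio, 1.0) ** -1.209     # above-kappa branch
            * 0.993 ** age)                 # age decline
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

The piecewise min/max structure is what makes CKD-EPI less sensitive than MDRD to creatinine error at low creatinine (high eGFR): the below-kappa exponent is much smaller in magnitude than MDRD's single exponent.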
Method to Determine Maximum Allowable Sinterable Silver Interconnect Size
Wereszczak, A. A.; Modugno, M. C.; Waters, S. B.; DeVoto, D. J.; Paret, P. P.
2016-05-01
The use of sintered-silver for large-area interconnection is attractive for some large-area bonding applications in power electronics, such as the bonding of metal-clad, electrically-insulating substrates to heat sinks. Arrays of different pad sizes and pad shapes have been considered for such large-area bonding; however, rather than arbitrarily choosing their size, it is desirable to use the largest size possible at which the onset of interconnect delamination does not occur. If that is achieved, then sintered-silver's high thermal and electrical conductivities can be fully exploited. Toward this end, a simple and inexpensive proof test is described to identify the largest achievable interconnect size with sinterable silver. The method's objective is to purposely initiate failure or delamination. Copper and invar (a ferrous-nickel alloy whose coefficient of thermal expansion (CTE) is similar to that of silicon or silicon carbide) disks were used in this study, and sinterable silver was used to bond them. As a consequence of the method's execution, delamination occurred in some samples during cooling from the 250 °C sintering and bonding temperature to room temperature, and in others from thermal cycling. These occurrences and their interpretations highlight the method's utility, and the results described herein are used to speculate how sintered-silver bonding will work with other material combinations.
ASSESSMENT OF MAXIMUM ALLOWABLE FISCAL BURDEN ON UKRAINE NATIONAL ECONOMY
M. Aleksandrova
2014-03-01
The article reviews the reproductive aspect of the relationship between fiscal burden and the propensity for economic development, under certain assumptions about the relationship between these variables. The analysis is based on a fairly simple dynamic model in which the share of income that goes to the development of production is assumed constant. The computed optimal fiscal burden for the economic development of the country is 19.29% of GDP. The estimated tax burden and its dynamics were then compared with those of developed countries. The prospects of the proposed approach for predicting the development of the national economy are analyzed.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
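The least-biased distribution under a linear constraint is the exponential-family solution of the maximum entropy principle: p_i proportional to exp(-lam * x_i), with the multiplier lam fixed by the constraint. A generic numerical sketch (bisection on the Lagrange multiplier; not specific to any drug-discovery application):

```python
import math

def maxent_distribution(values, mean_target, tol=1e-10):
    """Maximum-entropy distribution over discrete `values` with a
    prescribed mean: p_i = exp(-lam * x_i) / Z, lam found by bisection.
    Assumes min(values) < mean_target < max(values)."""
    def mean_for(lam):
        w = [math.exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > mean_target:   # mean decreases as lam grows
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the constraint already matches the unconstrained mean, lam converges to zero and the result is the uniform distribution, the least-biased choice with no information at all.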
Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc
2015-09-01
This paper presents two new concepts for discrimination of signals of different complexity. The first focused initially on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance. This led to the search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy that encompasses the 1-order similarity entropy. To improve the statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found that it was possible to discriminate time series of different complexity such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
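Similarity (sample) entropy with an explicit pattern size m, the quantity tuned in the first paradigm, can be sketched as follows. This simplified version counts all template pairs under a Chebyshev distance and is for illustration only, not the authors' n-order fuzzy variant:

```python
import math

def sample_entropy(series, m, r):
    """Simplified sample entropy: -log of the ratio between the number of
    template pairs matching at pattern size m+1 and at size m, with
    tolerance r under the Chebyshev (max-abs) distance."""
    def count_matches(length):
        templates = [series[i:i + length]
                     for i in range(len(series) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b = count_matches(m)      # matches at pattern size m
    a = count_matches(m + 1)  # matches at pattern size m + 1
    if a == 0 or b == 0:
        return float('inf')   # no matches: entropy undefined/maximal
    return -math.log(a / b)
```

Scanning m (instead of the conventional tolerance r) and keeping the value that maximizes the entropy is the pattern-size optimization the paper proposes; highly regular series keep most of their matches when the pattern grows by one sample, so their entropy stays low.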
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility covers approximately 17,409 acres (74 square miles) and is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south-central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Pérez-Prieto, L A; Peyraud, J L; Delagarde, R
2011-07-01
Feed costs in dairy production systems may be decreased by extending the grazing season to periods such as autumn, when grazing low-mass pastures is highly probable. The aim of this autumn study was to determine the effect of corn silage supplementation [0 vs. 8 kg of dry matter (DM) of a 7:1 mixture of corn silage and soybean meal] on pasture intake (PI), milk production, and grazing behavior of dairy cows grazing low-mass ryegrass pastures at 2 daily pasture allowances (PA; low PA = 18 vs. high PA = 30 kg of DM/cow above 2.5 cm). Twelve multiparous Holstein cows were used in a 4 × 4 Latin square design with 14-d periods. Pre-grazing pasture mass and pre-grazing plate-meter pasture height averaged 1.8 t of DM/ha (above 2.5 cm) and 6.3 cm, respectively. The quality of the offered pasture (above 2.5 cm) was low because of dry conditions before and during the experiment (crude protein = 11.5% of DM; net energy for lactation = 5.15 MJ/kg of DM; organic matter digestibility = 61.9%). The interaction between PA and supplementation level was significant for PI but not for milk production. Supplementation decreased PI from 11.6 to 7.6 kg of DM/d at low PA and from 13.1 to 7.3 kg of DM/d at high PA. The substitution rate was, therefore, lower at low than at high PA (0.51 vs. 0.75). Pasture intake increased with increasing PA in unsupplemented treatments and was not affected by PA in supplemented treatments. Milk production averaged 13.5 kg/d and was greater at high than at low PA (+1.4 kg/d) and in supplemented than unsupplemented treatments (+5.2 kg/d). Milk fat concentration averaged 4.39% and was similar between treatments. Milk protein concentration increased from 3.37 to 3.51% from unsupplemented to supplemented treatments and did not vary according to PA. Grazing behavior parameters were affected only by supplementation. On average, daily grazing time decreased (539 vs. 436 min) and daily ruminating time increased (388 vs. 486 min) from 0 to 8 kg of supplement DM. The PI
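The substitution rates quoted above follow directly from the reported intakes; the small gaps from the reported 0.51 and 0.75 presumably come from unrounded intake values.

```python
def substitution_rate(pi_unsupplemented, pi_supplemented, supplement_dm):
    """Substitution rate: kg of pasture DM given up per kg of supplement
    DM eaten."""
    return (pi_unsupplemented - pi_supplemented) / supplement_dm

# Values reported in the abstract (8 kg DM of supplement):
low_pa = substitution_rate(11.6, 7.6, 8.0)   # -> 0.50 (abstract: 0.51)
high_pa = substitution_rate(13.1, 7.3, 8.0)  # -> 0.725 (abstract: 0.75)
```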
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and on the heart rates of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. We used SPSS version 18 software with descriptive statistics, analysis of variance (ANOVA), and the t-test for data analysis. The ANOVA showed a significant difference in mean MAWL across lifting frequencies (p = 0.02). Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.01). There was a significant difference in mean heart rate across lifting frequencies (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.004). However, there was no significant difference in mean MAWL or mean heart rate across lifting heights (p > 0.05). The t-test showed a significant difference in mean MAWL and mean heart rate between the two box sizes (p < 0.001). Based on the results of
Kundu Antara
2013-01-01
The present paper deals with an economic order quantity (EOQ) model of an inventory problem with an alternating demand rate: (i) for a certain period, the demand rate is a nonlinear function of the instantaneous inventory level; (ii) for the rest of the cycle, the demand rate is time dependent. The time at which the demand rate changes may be deterministic or uncertain. The deterioration rate of the item is time dependent. The holding cost and shortage cost are taken as linear functions of time. The total cost function per unit time is obtained. Finally, the model is solved using a gradient-based non-linear optimization technique (LINGO) and is illustrated by a numerical example.
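The paper's full model (level-dependent then time-dependent demand, time-varying deterioration, linear holding and shortage costs) is solved with LINGO; as a minimal sketch of the same cost-minimization idea, the classic constant-demand EOQ special case can be minimized numerically and checked against its closed form. All parameter values below are hypothetical, not taken from the paper.

```python
import math

def eoq_total_cost(q, demand, order_cost, hold_cost):
    """Total cost per unit time for the classic EOQ special case:
    ordering cost + holding cost (constant demand, no shortages)."""
    return order_cost * demand / q + hold_cost * q / 2.0

# Hypothetical parameters: annual demand, cost per order, holding cost per unit-year.
D, K, h = 1200.0, 50.0, 2.0

# Closed-form optimum sqrt(2DK/h) and a coarse numerical check by grid search.
q_star = math.sqrt(2.0 * D * K / h)
q_grid = min((q / 10.0 for q in range(1, 10001)),
             key=lambda q: eoq_total_cost(q, D, K, h))

print(round(q_star, 2), q_grid)
```

The grid minimum lands within one step of the analytic optimum, illustrating the convexity that gradient-based solvers such as LINGO exploit on richer cost functions.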
Walker, Anthony P.; Quaife, Tristan; Van Bodegom, Peter M.; De Kauwe, Martin G.; Keenan, Trevor F.; Joiner, Joanna; Lomas, Mark R.; MacBean, Natasha; Xu, Chongang; Yang, Xiaojuan;
2017-01-01
The maximum photosynthetic carboxylation rate (Vcmax) is an influential plant trait with multiple scaling hypotheses, which is a source of uncertainty in predictive understanding of global gross primary production (GPP). Four trait-scaling hypotheses (plant functional type, nutrient limitation, environmental filtering, and plant plasticity) with nine specific implementations were used to predict global Vcmax distributions and their impact on global GPP in the Sheffield Dynamic Global Vegetation Model (SDGVM). Global GPP varied from 108.1 to 128.2 petagrams of carbon (PgC) per year, 65 percent of the range of a recent model intercomparison of global GPP. The variation in GPP propagated through to a 27 percent coefficient of variation in net biome productivity (NBP). All hypotheses produced global GPP that was highly correlated (r = 0.85-0.91) with three proxies of global GPP. Plant functional type-based nutrient limitation, underpinned by a core SDGVM hypothesis that plant nitrogen (N) status is inversely related to increasing costs of N acquisition with increasing soil carbon, adequately reproduced global GPP distributions. Further improvement could be achieved with accurate representation of water sensitivity and agriculture in SDGVM. Mismatch between environmental filtering (the most data-driven hypothesis) and GPP suggested that greater effort is needed to understand Vcmax variation in the field, particularly in northern latitudes.
Modal dispersion, pulse broadening and maximum transmission rate in GRIN optical fibers encompass a central dip in the core index profile
El-Diasty, Fouad; El-Hennawi, H. A.; El-Ghandoor, H.; Soliman, Mona A.
2013-12-01
Intermodal and intramodal dispersion are among the problems in graded-index multi-mode optical fibers (GRIN) used for LAN communication systems and for sensing applications. A central index dip (depression) in the core refractive-index profile may occur due to the CVD fabrication processes. The index dip may also be intentionally designed to broaden the fundamental mode field profile toward a plateau-like distribution, which has advantages for fiber-source connections, fiber amplifiers, and self-imaging applications. The effect of the core central index dip on the propagation parameters of a GRIN fiber, such as intermodal dispersion, intramodal dispersion, and root-mean-square pulse broadening, is investigated. Conventional methods usually study optical signal propagation in optical fibers in terms of mode characteristics and the number of modes; in this work, multiple-beam Fizeau interferometry is proposed as an alternative methodology, affording a radial approach to determine dispersion, pulse broadening, and maximum transmission rate in a GRIN optical fiber having a central index dip.
Su, Yu-min; Makinia, Jacek; Pagilla, Krishna R
2008-04-01
The autotrophic maximum specific growth rate constant, muA,max, is the critical parameter for design and performance of nitrifying activated sludge systems. In literature reviews (i.e., Henze et al., 1987; Metcalf and Eddy, 1991), a wide range of muA,max values have been reported (0.25 to 3.0 days(-1)); however, recent data from several wastewater treatment plants across North America revealed that the estimated muA,max values remained in the narrow range 0.85 to 1.05 days(-1). In this study, long-term operation of a laboratory-scale sequencing batch reactor system was investigated for estimating this coefficient according to the low food-to-microorganism ratio bioassay and simulation methods, as recommended in the Water Environment Research Foundation (Alexandria, Virginia) report (Melcer et al., 2003). The estimated muA,max values using steady-state model calculations for four operating periods ranged from 0.83 to 0.99 day(-1). The International Water Association (London, United Kingdom) Activated Sludge Model No. 1 (ASM1) dynamic model simulations revealed that a single value of muA,max (1.2 days(-1)) could be used, despite variations in the measured specific nitrification rates. However, the average muA,max was gradually decreasing during the activated sludge chlorination tests, until it reached the value of 0.48 day(-1) at the dose of 5 mg chlorine/(g mixed liquor suspended solids x d). Significant discrepancies between the predicted XA/YA ratios were observed. In some cases, the ASM1 predictions were approximately two times higher than the steady-state model predictions. This implies that estimating this ratio from a complex activated sludge model and using it in simple steady-state model calculations should be accepted with great caution and requires further investigation.
Gonzalez-Lopezlira, Rosa A. [On sabbatical leave from the Centro de Radioastronomia y Astrofisica, UNAM, Campus Morelia, Michoacan, C.P. 58089, Mexico. (Mexico); Pflamm-Altenburg, Jan; Kroupa, Pavel, E-mail: r.gonzalez@crya.unam.mx [Argelander Institut fuer Astronomie, Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We analyze the relationship between maximum cluster mass and surface densities of total gas (Σ_gas), molecular gas (Σ_H2), neutral gas (Σ_HI), and star formation rate (Σ_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Σ_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Σ_gas, Σ_H2, or Σ_SFR. For clusters younger than 10 Myr, M_3rd ∝ Σ_HI^(0.6±0.1) and M_3rd ∝ Σ_gas^(0.5±0.2); there is no correlation with either Σ_H2 or Σ_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Σ_gas^(3.8±0.3), M_3rd ∝ Σ_H2^(1.2±0.1), and M_3rd ∝ Σ_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represents the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet
Chiba Shigeru
2007-09-01
Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness by a subjective score and by the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness with repeated presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
Hubbard, S. M.; Coutts, D. S.; Matthews, W.; Guest, B.; Bain, H.
2015-12-01
In basins adjacent to continually active arcs, detrital zircon geochronology can be used to establish a high-resolution chronostratigraphic framework for deep-time strata. Large-n U-Pb geochronological datasets can yield a statistically significant signature from the youngest sub-population of detrital zircons, from which we deduce maximum depositional age (MDA). MDA can be determined through numerous methods; the method favored in this analysis is the mean age of three or more grain ages that overlap at 2σ error. Positive identification of the youngest detrital zircon population in a rock is the limiting factor on precision and resolution. The Campanian-Paleogene Nanaimo Group of B.C., Canada, was deposited in a forearc basin, outboard of the Coast Mountain Batholith. The record of a deep-water sediment-routing system is exhumed at Denman and Hornby islands; sandstone- and conglomerate-dominated strata compose a composite sedimentary unit 20 km across and 1.5 km thick, in strike section. Volcanic ashes are absent from the succession, which has been constrained biostratigraphically. Eleven detrital zircon samples were analyzed to define stratigraphic architecture and provide insight into sedimentation rates. Our dataset (n=3081) constrains the overall duration of channelization to ~18 Ma. A series of at least five distinct composite channel fills, 3-6 km wide and 400-600 m thick, is identified. The MDAs of these units are statistically distinct and constrained to better than 3% precision. Sedimentation rates among the channel fills increase upward, from 60-100 m/Ma to >500 m/Ma. This is likely linked to the tendency of a slope channel system to be dominated by sediment bypass early in its evolution, and later dominated by aggradation as large-scale levees develop. Channel processes were not continuous, with the longest hiatus ~6 Ma. The large-n detrital zircon dataset provides unprecedented insight into long-term sediment routing, evidence for which is
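The MDA criterion described in this abstract, the mean of three or more of the youngest grain ages whose 2σ uncertainties overlap, can be sketched as a small routine. This is a simplified reading of that criterion, not the authors' exact algorithm, and the grain ages below are invented for illustration.

```python
def youngest_overlapping_cluster(ages):
    """ages: list of (age_Ma, two_sigma_Ma) tuples.
    Return the mean age of the youngest grain plus all grains whose 2-sigma
    interval overlaps the youngest grain's interval, provided at least three
    grains qualify; otherwise None. A simplified MDA sketch."""
    ages = sorted(ages)
    a0, s0 = ages[0]
    # A grain overlaps the youngest if its lower bound lies below
    # the youngest grain's upper bound.
    cluster = [(a, s) for a, s in ages if (a - s) <= (a0 + s0)]
    if len(cluster) < 3:
        return None
    return sum(a for a, _ in cluster) / len(cluster)

# Hypothetical detrital-zircon ages (Ma) with 2-sigma errors.
grains = [(72.1, 1.0), (72.8, 1.2), (73.0, 0.9), (76.4, 1.1), (80.2, 1.5)]
print(round(youngest_overlapping_cluster(grains), 2))   # -> 72.63
```

With these made-up grains, the three youngest ages mutually overlap at 2σ and yield an MDA of about 72.6 Ma, while the two older grains are excluded.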
Burger, Joanna
2013-01-01
Studies of fish consumption focus on recreational or subsistence fishing, on awareness and adherence to advisories, consumption patterns, and contaminants in fish. Yet the general public obtains their fish from commercial sources. In this paper I examine fish consumption patterns of recreational fishermen in New Jersey to determine: 1) consumption rates for self-caught fish and for other fish, 2) meals consumed per year, 3) average meal size, and average daily intake of mercury, and 4) variations in these parameters for commonly-consumed fish, and different methods of computing intake. Over 300 people were interviewed at fishing sites and fishing clubs along the New Jersey shore. Consumption patterns of anglers varied by species of fish. From 2 to 90 % of the anglers ate the different fish species, and between 9 and 75 % gave fish away to family or friends. Self-caught fish made up 7 to 92 % of fish diets. On average, self-caught fish were eaten for only 2 to 6 months of the year, whereas other fish (commercial or restaurant) were eaten up to 10 months a year. Anglers consumed from 5 to 36 meals of different fish a year, which resulted in intake of mercury ranging from 0.01 to 0.22 ug/kg/day. Average intake of Mako shark, swordfish, and tuna (sushi, canned tuna, self-caught tuna) exceeded the U.S. Environmental Protection Agency's oral, chronic reference dose for mercury of 0.1 ug/kg/day. However, computing intake using consumption for the highest month results in average mercury intake exceeding the reference dose for striped bass and bluefish as well. These data, and the variability in consumption patterns, have implications for risk assessors, risk managers, and health professionals.
N. Alavizadeh
2017-01-01
Aims: Apelin is an adipokine secreted from adipose tissue that has protective effects against insulin resistance. The aim of this study was to investigate the effect of 8 weeks of aerobic exercise on levels of apelin and the insulin resistance index in sedentary men. Materials & Methods: In this semi-experimental study with a controlled-group pre/post-test design in 2015, 27 healthy sedentary men living in Mashhad City, Iran, were selected by convenience sampling. They were divided into two groups: an experimental group (n=14) and a control group (n=13). In the trained group, the volunteers participated in 8 weeks of aerobic exercise, 3 days/week (equivalent to 75-85% of maximum oxygen consumption), for 60 minutes per session. The research variables were assessed before and after the intervention in both groups. The collected data were analyzed in SPSS 20 using paired and independent-samples t-tests. Findings: The 8-week aerobic exercise significantly decreased weight, BMI, and levels of apelin, insulin, and the insulin resistance index, and increased maximum oxygen consumption in the sedentary men of the experimental group (p<0.05). Moreover, there were significant differences in levels of FBS, insulin, apelin, the insulin resistance index, and maximum oxygen consumption between the experimental and control groups (p<0.05). Conclusion: Eight weeks of aerobic exercise reduces apelin levels and the insulin resistance index in sedentary men.
陈嘉; 李拥军; 杨文萍
2009-01-01
Objective To study the effects of carbon disulfide exposure within the national maximum allowable concentration (MAC) on blood pressure and electrocardiogram, and associations with selected factors. Methods Workers in a chemical fiber factory were divided into two groups based on the type of work: a high exposure group (HEG) of 821 individuals and a low exposure group (LEG) of 259. The CS_2 concentration at the workplace was controlled under the national MAC. A set of 250 randomly selected people taking routine physical check-ups in the same period and hospital constituted the control group. The systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured on the arm, and the pulse pressure (PP) and mean arterial blood pressure (MABP) were calculated from SBP and DBP. The blood pressure data, along with the results of routine 12-lead electrocardiography taken at rest and records on gender, age, years of work, type of work, and concentrations of triglycerol, cholesterol, and glucose in blood, were compiled for analyses. Risk factors upon CS_2 exposure for the increase of blood pressure and occurrence of electrocardiogram abnormalities were identified and rationalized. Results A significant difference (P<0.01) in the average values of SBP, DBP, and MABP, and the corresponding abnormality incident rates, was found between HEG and LEG, and between HEG and the control group. For both HEG and LEG, the incident rate of DBP abnormality (high DBP) was nearly twice that of SBP. Type of work was the largest risk factor in both the high-SBP and high-DBP subgroups, with odds ratios (OR) of 2.086 and 2.331 respectively, and high CS_2 exposure carried more than double the risk of low exposure. In the incident rate of ECG abnormalities, both exposure groups differed significantly (P<0.01) from the control group. High SBP in LEG and high DBP in HEG were found to be significant risk factors (OR = 3.531 and 1.638 respectively), while blood glucose
Sander, Pia; Mouritsen, L; Andersen, J Thorup
2002-01-01
OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume as a part of a "minimal care" assessment programme for women with urinary incontinence in detecting clinical significant bladder emptying problems. MATERIAL AND METHOD...... female urinary incontinence. Thus, primary health care providers can assess women based on simple guidelines without expensive equipment for assessment of urine flow rate and residual urine....
Strasser, Barbara; Schwarz, Joachim; Haber, Paul; Schobersberger, Wolfgang
2011-12-01
The aim of this study was to establish reliable guide values for heart rate (HF) and blood pressure (RR) with reference to defined submaximal exertion, considering age, gender, and body mass. One hundred and eighteen healthy but untrained subjects (38 women, 80 men) were included in the study. For interpretation, data from 28 women and 59 men were ultimately used. We found gender differences for HF and RR. Further, we noted significant correlations between HF and age, as well as between RR and body mass, at all exercise levels. We established formulas for gender-specific calculation of reliable guide values for HF and RR at submaximal exercise levels.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We first derive minimum residual error rates when the stored data come from a uniform binary source. Second, we determine the minimum amo...
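For a uniform binary source, maximum-likelihood retrieval from a partial query reduces to returning the stored message closest (in Hamming distance over the observed positions) to the query; erased positions carry no information. The toy sketch below illustrates that principle only and is not the authors' construction.

```python
def ml_retrieve(memory, query):
    """Maximum-likelihood retrieval from a list of stored binary tuples.
    `query` uses None for erased positions. For a uniform source and a
    binary symmetric channel, the ML estimate is the stored message with
    minimum Hamming distance on the observed positions."""
    def dist(msg):
        return sum(1 for m, q in zip(msg, query) if q is not None and m != q)
    return min(memory, key=dist)

memory = [(0, 1, 1, 0), (1, 1, 0, 0), (1, 0, 1, 1)]
# Partial content: only two coordinates observed.
print(ml_retrieve(memory, (1, None, 0, None)))   # -> (1, 1, 0, 0)
```

With ties broken arbitrarily, this is exact ML decoding for the erasure/flip model described; practical associative memories replace the linear scan with structured storage.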
Blazevich, Anthony J; Horne, Sara; Cannavan, Dale
2008-01-01
This study examined the effects of slow-speed resistance training involving concentric (CON, n = 10) versus eccentric (ECC, n = 11) single-joint muscle contractions on contractile rate of force development (RFD) and neuromuscular activity (EMG), and its maintenance through detraining. Isokinetic knee extension training was performed 3 x week(-1) for 10 weeks. Maximal isometric strength (+11.2%) and RFD (measured from 0-30/50/100/200 ms, respectively; +10.5%-20.5%) increased after 10 weeks (P < 0.05) regardless of training mode. Peak EMG amplitude and rate of EMG rise were not significantly altered with training or detraining. Subjects with below-median normalized RFD (RFD/MVC) at 0 weeks significantly increased RFD after 5 and 10 weeks of training, which was associated with increased neuromuscular activity. Subjects who maintained their higher RFD after detraining...
Thornley, John H M; Parsons, Anthony J
2014-02-07
Treating resource allocation within plants, and between plants and associated organisms, is essential for plant, crop and ecosystem modelling. However, it is still an unresolved issue. It is also important to consider quantitatively when, and to what extent, a plant can profitably invest in a mycorrhizal association. A teleonomic model is used to address these issues. A six state-variable model giving exponential growth is constructed. This represents carbon (C), nitrogen (N) and phosphorus (P) substrates with structure in shoot, root and mycorrhiza. The shoot is responsible for uptake of substrate C, the root for substrates N and P, and the mycorrhiza also for substrates N and P. A teleonomic goal, maximizing proportional growth rate, is solved analytically for the allocation fractions. Expressions allocating new dry matter to shoot, root and mycorrhiza are derived which maximize growth rate. These demonstrate several key intuitive phenomena concerning resource sharing between plant components and associated mycorrhizae. For instance, if the root uptake rate for phosphorus is equal to that achievable by mycorrhiza, and without detriment to the root uptake rate for nitrogen, then this gives a faster-growing mycorrhiza-free plant. However, if root phosphorus uptake is below that achievable by mycorrhiza, then a mycorrhizal association may be a preferred strategy. The approach offers a methodology for introducing resource sharing between species into ecosystem models. Applying teleonomy may provide a valuable short-term means of modelling allocation, avoiding the circularity of empirical models, and circumventing the complexities and uncertainties inherent in mechanistic approaches. However, it is subjective and brings certain irreducible difficulties with it.
L. Ocola
2008-01-01
Post-disaster reconstruction management of urban areas requires timely information on ground response microzonation to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance), and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale I_{MS} as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the I_{MS} scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong-motion station at IGP's former headquarters since 1954. The procedure was applied to the 3 October 1974 Lima macroseismic intensity data at places with geotechnical data and predominant ground frequency information. The observed and computed peak acceleration values at nearby sites agree well.
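Because the intensity scale above is defined as a logarithm of energy flux, intensity ends up linearly related to the logarithm of peak ground acceleration, and the regression can be inverted to map an intensity rating to an acceleration estimate. The coefficients below are illustrative placeholders of the order seen in classic intensity-PGA regressions, not the constants derived in the paper.

```python
def pga_from_intensity(i_ms, a=0.014, b=0.30):
    """Estimate peak ground acceleration (cm/s^2) from a macroseismic
    intensity rating via a log-linear relation log10(PGA) = a + b*I.
    Coefficients a, b are hypothetical, for illustration only."""
    return 10.0 ** (a + b * i_ms)

# Intensity VIII on an MSK-type scale maps to a few hundred cm/s^2
# under these illustrative coefficients.
print(round(pga_from_intensity(8)))
```

The key structural point is the inversion of a log-linear fit: each unit of intensity multiplies the estimated acceleration by a fixed factor (here 10**b).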
Kemmler, Wolfgang; Schliffka, Rebecca; Mayhew, Jerry L; von Stengel, Simon
2010-07-01
We evaluated the effect of whole-body electromyostimulation (WB-EMS) during dynamic exercises over 14 weeks on anthropometric, physiological, and muscular parameters in postmenopausal women. Thirty women (64.5 +/- 5.5 years) with experience in physical training (>3 years) were randomly assigned either to a control group (CON, n = 15) that maintained their general training program (2 x 60 min/wk of endurance and dynamic strength exercise) or to an electromyostimulation group (WB-EMS, n = 15) that additionally performed 20-minute WB-EMS training (2 x 20 min per 10 d). Resting metabolic rate (RMR) determined from spirometry was selected to indicate muscle mass. In addition, body circumferences, subcutaneous skinfolds, strength, power, and dropout and adherence values were determined. Resting metabolic rate was maintained in WB-EMS (-0.1 +/- 4.8 kcal/h) and decreased in CON (-3.2 +/- 5.2 kcal/h, p = 0.038); although group differences were not significant (p = 0.095), there was a moderately strong effect size (ES = 0.62). Sum of skinfolds (28.6%) and waist circumference (22.3%) significantly decreased in WB-EMS, whereas both parameters increased in CON (1.4 and 0.1%, respectively; p = 0.001, ES = 1.37 and 1.64, respectively). Isometric strength changes of the trunk extensors and leg extensors differed significantly (p ≤ 0.006) between WB-EMS and CON (9.9% vs. -6.4%, ES = 1.53; 9.6% vs. -4.5%, ES = 1.43, respectively). In summary, adjunct WB-EMS training significantly exceeds the effect of isolated endurance and resistance type exercise on fitness and fatness parameters. Further, we conclude that for elderly subjects unable or unwilling to perform dynamic strength exercises, electromyostimulation may be a smooth alternative to maintain lean body mass, strength, and power.
Eduardo Marcel Fernandes Nascimento
2011-08-01
The objective of this study was to analyze the heart rate (HR) profile plotted against incremental workloads (IWL) during a treadmill test using three mathematical models [linear, linear with two segments (Lin2), and sigmoidal], and to determine the best model for identifying the HR threshold that could be used as a predictor of the ventilatory thresholds (VT1 and VT2). Twenty-two men underwent a treadmill incremental test (retest group: n=12) at an initial speed of 5.5 km.h-1, with increments of 0.5 km.h-1 at 1-min intervals until exhaustion. HR and gas exchange were continuously measured and subsequently converted to 5-s and 20-s averages, respectively. The best model was chosen based on the residual sum of squares and mean square error. The HR/IWL relationship was better fitted with the Lin2 model in both the test and retest groups (p < 0.05). During a treadmill incremental test, the HR/IWL relationship seems to be better fitted with a Lin2 model, which permits determination of the HR threshold that coincides with VT1.
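A two-segment ("broken-stick") linear model of the kind referred to as Lin2 can be fitted by scanning candidate breakpoints, fitting ordinary least squares on each side, and keeping the split that minimizes the total residual sum of squares; the chosen breakpoint plays the role of the HR threshold. The data below are synthetic, not from the study.

```python
def ols_rss(xs, ys):
    """Fit y = a + b*x by least squares; return the residual sum of squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def lin2_breakpoint(xs, ys):
    """Scan interior split points, fit a line to each side, and return the
    x at the split with minimum total RSS (a simple Lin2-style fit)."""
    best = min(range(2, len(xs) - 2),
               key=lambda k: ols_rss(xs[:k], ys[:k]) + ols_rss(xs[k:], ys[k:]))
    return xs[best]

# Synthetic HR (bpm) vs workload with a slope change at workload 6.
work = list(range(1, 12))
hr = [90 + 5 * w if w <= 6 else 120 + 10 * (w - 6) for w in work]
print(lin2_breakpoint(work, hr))
```

Because the two regimes meet at the breakpoint, splits on either side of it can tie on RSS; real HR data are noisy enough that the minimum is usually unique.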
Gomez-Paccard, Miriam; Osete, Maria Luisa; Chauvin, Annick; Pérez-Asensio, Manuel; Jimenez-Castillo, Pedro
2014-05-01
Available European data indicate that during the past 2500 years there have been periods of rapid geomagnetic intensity fluctuations interspersed with periods of little change. The challenge now is to precisely describe these rapid changes. Because precisely dated heated materials for a high-resolution description of past geomagnetic field intensity changes are difficult to obtain, new high-quality archeomagnetic data from archeological heated materials found in well-defined superposed stratigraphic units are particularly valuable. In this work we report the archeomagnetic study of several groups of ceramic fragments from southeastern Spain that belong to 14 superposed stratigraphic levels corresponding to a surface no bigger than 3 m by 7 m. Between four and eight ceramic fragments were selected per stratigraphic unit. The ages of the pottery fragments range from the second half of the 7th to the 11th centuries. The dates were established by three radiocarbon dates and by archeological/historical constraints, including typological comparisons and well-controlled stratigraphic constraints. Between two and four specimens per pottery fragment were studied. The classical Thellier and Thellier method, including pTRM checks and TRM anisotropy and cooling rate corrections, was used to estimate paleointensities at the specimen level. All accepted results correspond to well-defined single components of magnetization going toward the origin and to high-quality paleointensity determinations. From these experiments nine new high-quality mean intensities have been obtained. The new data provide an improved description of the sharp, abrupt intensity changes that took place in this region between the 7th and the 11th centuries. The results confirm that several rapid intensity changes (of about ~15-20 µT/century) took place in Western Europe during the recent history of the Earth.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
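The Mean Energy Model mentioned above is the standard worked instance of entropy maximization: among distributions with a prescribed mean of an "energy" function, the entropy maximizer is the Gibbs/Boltzmann family p_i ∝ exp(-β·E_i), with the multiplier β chosen to match the mean. A minimal sketch with made-up energy levels (the specific values are assumptions for illustration):

```python
import math

def gibbs(energies, beta):
    """Maximum-entropy distribution under a mean-energy constraint:
    p_i proportional to exp(-beta * E_i)."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)  # partition function
    return [wi / z for wi in w]

def solve_beta(energies, target_mean, lo=-50.0, hi=50.0):
    """Bisection on beta so the Gibbs mean energy matches target_mean.
    Mean energy is strictly decreasing in beta, so bisection applies."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        p = gibbs(energies, mid)
        mean = sum(pi * e for pi, e in zip(p, energies))
        lo, hi = (lo, mid) if mean < target_mean else (mid, hi)
    return mid

E = [0.0, 1.0, 2.0, 3.0]              # hypothetical energy levels
beta = solve_beta(E, target_mean=1.0)  # constrain mean energy to 1.0
p = gibbs(E, beta)
print([round(x, 3) for x in p])
```

Since the target mean (1.0) is below the uniform mean (1.5), the solved β is positive and the resulting probabilities decrease with energy, exactly the exponential-family shape the Maximum Entropy Principle predicts.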
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
42 CFR 50.504 - Allowable cost of drugs.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Allowable cost of drugs. 50.504 Section 50.504... APPLICABILITY Maximum Allowable Cost for Drugs § 50.504 Allowable cost of drugs. (a) The maximum amount which may be expended from program funds for the acquisition of any drug shall be the lowest of (1)...
邓春亮; 胡南辉
2012-01-01
We study the solution β̂n of the quasi-maximum likelihood equation for generalized linear models (GLMs) under a non-natural link function. Assuming λn → ∞ and some other mild regularity conditions, we prove the weak consistency of the solution β̂n and show that its rate of convergence to the true value β0 is Op(λn^(-1/2)), where λn (λ̄n) denotes the smallest (largest) eigenvalue of the matrix Sn = Σ_{i=1}^n Xi Xi'.
Assessing allowable take of migratory birds
Runge, M.C.; Sauer, J.R.; Avery, M.L.; Blackwell, B.F.; Koneff, M.D.
2009-01-01
Legal removal of migratory birds from the wild occurs for several reasons, including subsistence, sport harvest, damage control, and the pet trade. We argue that harvest theory provides the basis for assessing the impact of authorized take, advance a simplified rendering of harvest theory known as potential biological removal as a useful starting point for assessing take, and demonstrate this approach with a case study of depredation control of black vultures (Coragyps atratus) in Virginia, USA. Based on data from the North American Breeding Bird Survey and other sources, we estimated that the black vulture population in Virginia was 91,190 (95% credible interval = 44,520–212,100) in 2006. Using a simple population model and available estimates of life-history parameters, we estimated the intrinsic rate of growth (rmax) to be in the range 7–14%, with 10.6% a plausible point estimate. For a take program to seek an equilibrium population size on the conservative side of the yield curve, the rate of take needs to be less than that which achieves a maximum sustained yield (0.5 × rmax). Based on the point estimate for rmax and using the lower 60% credible interval for population size to account for uncertainty, these conditions would be met if the take of black vultures in Virginia in 2006 was <3,533 birds. Based on regular monitoring data, allowable harvest should be adjusted annually to reflect changes in population size. To initiate discussion about how this assessment framework could be related to the laws and regulations that govern authorization of such take, we suggest that the Migratory Bird Treaty Act requires only that take of native migratory birds be sustainable in the long-term, that is, sustained harvest rate should be
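The potential biological removal logic described above reduces to a one-line calculation: allowable take = Fo × rmax × N, with Fo = 0.5 keeping the rate of take below maximum sustained yield. A minimal sketch; the population figure below is illustrative, not a value reported in the paper:

```python
def allowable_take(n_est, r_max, f_obj=0.5):
    """Potential biological removal: a take rate of f_obj * r_max (f_obj = 0.5
    stays on the conservative side of the yield curve) applied to a
    conservative population estimate n_est."""
    return f_obj * r_max * n_est

# rmax point estimate from the abstract; n_est is a hypothetical conservative
# population bound chosen for illustration.
limit = allowable_take(n_est=66_660, r_max=0.106)
print(round(limit))
```

In practice n_est would be the lower 60% credible bound for population size, updated annually from monitoring data as the abstract recommends.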
房祥忠; 陈家鼎
2011-01-01
Nonhomogeneous Poisson processes with time-varying intensity functions are applied in many fields. For the exponential polynomial model, a widely used class of nonhomogeneous Poisson processes, we obtain the best convergence rate of the maximum likelihood estimate (MLE) of the parameters as the observation time tends to infinity.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
Simone Bittencourt
2006-06-01
Full Text Available For the implementation and operation of the Brazilian water resources policy, it is indispensable to use planning tools that take into account the effect of all activities or processes that cause or contribute to the degradation of water quality in a water body. To this end, the TMDL (total maximum daily load) process, developed by the United States Environmental Protection Agency (EPA), was applied for phosphorus in the drainage area contributing to the future Piraquara II reservoir, Piraquara river basin, Paraná, Brazil. The TMDL process determines the maximum load of a pollutant that a water body can receive without violating established water quality standards and allocates this load among point and nonpoint pollution sources. In the present study, the TMDL method was used with the objective of demonstrating that it is a useful tool in the water resources management process. Land-use scenarios were simulated by mathematical modeling until the total phosphorus concentration in the reservoir fell below the limit range for the occurrence of eutrophication, established in the study as 0.025 to 0.10 mg L-1. A simulation of current land use was carried out to predict the initial water quality condition of the water body; the resulting total phosphorus concentration in the reservoir did not meet the established standard. A second simulation was performed with the adoption of control measures, riparian forest restoration and no-till farming, to reduce the total phosphorus load exported from the basin. An improvement in the water quality of the reservoir was obtained, indicating that the adopted measures were sufficient to meet the established standard, which demonstrates the applicability of the method.
Extracting volatility signal using maximum a posteriori estimation
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
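The role of the double-exponential (Laplace) prior above can be seen in a scalar toy version of MAP estimation: with a Gaussian likelihood, a Laplace prior yields a soft-thresholding rule that shrinks small observations to zero while leaving large innovations (sharp jumps) almost untouched. A sketch under those simplified assumptions, not the paper's full HMM:

```python
def map_laplace(y, sigma2, b):
    """MAP estimate of x from y ~ N(x, sigma2) with a Laplace(0, b) prior on x.
    Maximizing exp(-(y-x)^2 / (2*sigma2) - |x|/b) gives soft thresholding
    with threshold t = sigma2 / b."""
    t = sigma2 / b
    if y > t:
        return y - t       # large positive observation: shrunk, not zeroed
    if y < -t:
        return y + t       # large negative observation: shrunk, not zeroed
    return 0.0             # small observation: set exactly to zero

print(map_laplace(2.0, 1.0, 1.0), map_laplace(0.5, 1.0, 1.0))
```

This bounded shrinkage is what allows the log-volatility path to exhibit sharp jumps and heavy-tailed marginal log-returns.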
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Valter Abrantes Pereira da Silva
2007-03-01
Full Text Available OBJECTIVE: This study sought to compare maximum heart rate (HRmax) values measured during a graded exercise test (GXT) with those calculated from prediction equations in Brazilian elderly women. METHODS: A maximal treadmill GXT in accordance with the modified Bruce protocol was used to obtain reference values for maximum heart rate (HRmax) in 93 elderly women (mean age 67.1 ± 5.16 years). Measured values were compared with those estimated from the "220 - age" and Tanaka et al formulas using repeated-measures ANOVA. Correlation and agreement between measured and estimated values were tested. Also evaluated was the correlation between measured HRmax and the volunteers' age. RESULTS: Results were as follows: (1) mean HRmax reached during the GXT was 145.5 ± 12.5 beats per minute (bpm); (2) both the "220 - age" and Tanaka et al (2001) equations significantly overestimated (p < 0.001) HRmax, by a mean difference of 7.4 and 15.5 bpm, respectively; (3)
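The two prediction equations compared here are simple linear rules: the classic "220 - age" and Tanaka et al. (2001), HRmax = 208 - 0.7 × age. Evaluated at the sample's mean age, they reproduce the reported overestimates relative to the measured mean of 145.5 bpm:

```python
def hrmax_classic(age):
    """Classic '220 - age' prediction (bpm)."""
    return 220.0 - age

def hrmax_tanaka(age):
    """Tanaka et al. (2001) prediction: 208 - 0.7 * age (bpm)."""
    return 208.0 - 0.7 * age

mean_age, measured = 67.1, 145.5          # sample mean age and measured mean HRmax
diff_classic = hrmax_classic(mean_age) - measured   # overestimate of '220 - age'
diff_tanaka = hrmax_tanaka(mean_age) - measured     # overestimate of Tanaka et al.
print(diff_classic, diff_tanaka)
```

The computed differences (about 7.4 and 15.5 bpm) match the mean differences reported in the abstract.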
Yuan-Hong Jiang
Full Text Available OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and voiding to storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). We found that the IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88, p<0.001). When IPSS-V/S>1 or >2 was factored into the equation instead of IPSS-T, PPV were 91.4% and 97.3%, respectively, and NPV were 54.8% and 49.8%, respectively. CONCLUSIONS: Combination of IPSS-T with TPV and Qmax increases the PPV of bladder outlet-related LUTD. Furthermore, including IPSS-V/S>1 or >2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S>1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
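The PPV and NPV figures reported above come straight from the 2×2 table of test result versus video-urodynamic diagnosis. A minimal sketch with illustrative counts (not the study's data):

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive values from a 2x2 diagnostic table:
    PPV = TP / (TP + FP), NPV = TN / (TN + FN)."""
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical counts for a cutoff such as IPSS-V/S > 1 against the
# video-urodynamic reference diagnosis.
ppv, npv = ppv_npv(tp=90, fp=10, tn=55, fn=45)
print(ppv, npv)
```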
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Allowable levels of take for the trade in Nearctic songbirds
Johnson, Fred A.; Walters, Matthew A.H.; Boomer, G. Scott
2012-01-01
The take of Nearctic songbirds for the caged-bird trade is an important cultural and economic activity in Mexico, but its sustainability has been questioned. We relied on the theta-logistic population model to explore options for setting allowable levels of take for 11 species of passerines that were subject to legal take in Mexico in 2010. Because estimates of population size necessary for making periodic adjustments to levels of take are not routinely available, we examined the conditions under which a constant level of take might contribute to population depletion (i.e., a population below its level of maximum net productivity). The chance of depleting a population is highest when levels of take are based on population sizes that happen to be much lower or higher than the level of maximum net productivity, when environmental variation is relatively high and serially correlated, and when the interval between estimation of population size is relatively long (≥5 years). To estimate demographic rates of songbirds involved in the Mexican trade we relied on published information and allometric relationships to develop probability distributions for key rates, and then sampled from those distributions to characterize the uncertainty in potential levels of take. Estimates of the intrinsic rate of growth (r) were highly variable, but median estimates were consistent with those expected for relatively short-lived, highly fecund species. Allowing for the possibility of nonlinear density dependence generally resulted in allowable levels of take that were lower than would have been the case under an assumption of linearity. Levels of take authorized by the Mexican government in 2010 for the 11 species we examined were small in comparison to relatively conservative allowable levels of take (i.e., those intended to achieve 50% of maximum sustainable yield). However, the actual levels of take in Mexico are unknown and almost certainly exceed the authorized take. Also, the take
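The theta-logistic model used above is a one-line recursion; holding take constant at a level below maximum sustained yield lets the population settle at an equilibrium on the conservative side of the yield curve, while too high a constant take drives depletion. A sketch with hypothetical parameters (for theta = 1 the model is plain logistic, with MSY = r·K/4):

```python
def theta_logistic_step(n, r, k, theta, take):
    """One year of theta-logistic growth with a constant level of take."""
    return max(n + r * n * (1.0 - (n / k) ** theta) - take, 0.0)

# Hypothetical parameters: r = 0.2, K = 10,000, theta = 1 gives MSY = 500;
# a constant take of 300 (< MSY) settles at a stable equilibrium above K/2.
n = 10_000.0
for _ in range(100):
    n = theta_logistic_step(n, r=0.2, k=10_000.0, theta=1.0, take=300.0)
print(n)
```

Rerunning with take above MSY (here, above 500) sends the population to zero, which is the depletion risk the abstract describes when take is set from an overestimated population size.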
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the maximum entropy requirement, the characteristics of the system, and the constraint conditions. MENT can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT is used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
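In its simplest setting, a finite support with a single moment constraint, the computation MENT performs is to find the exponential-family distribution p_i ∝ exp(-λx_i) whose mean matches the constraint. A self-contained sketch, solving for the multiplier by bisection (the dice example is purely illustrative):

```python
import math

def maxent_mean(values, target, lo=-5.0, hi=5.0, iters=100):
    """Maximum-entropy pmf on a finite support with a prescribed mean:
    p_i proportional to exp(-lam * x_i), with the Lagrange multiplier lam
    found by bisection (the mean is strictly decreasing in lam)."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target:   # mean too high: need larger lam
            lo = mid
        else:                        # mean too low: need smaller lam
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Classic illustration: constrain a die's mean to 4.5 instead of the uniform 3.5.
p = maxent_mean([1, 2, 3, 4, 5, 6], 4.5)
```

The result tilts probability toward the larger faces exactly as the maximum entropy solution requires.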
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
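The key quantity in the MCC framework above is the correntropy between predicted and true labels: a Gaussian-kernel similarity whose bounded influence means a single wildly wrong (outlying) label costs at most one kernel unit, unlike squared loss, which it would dominate. A minimal sketch of the measure itself, not the full predictor-learning algorithm:

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    """Empirical correntropy: mean Gaussian-kernel similarity between labels.
    Maximizing this (the Maximum Correntropy Criterion) down-weights
    outlying labels, since the kernel saturates for large errors."""
    return sum(math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))
               for a, b in zip(y_true, y_pred)) / len(y_true)

# A gross outlier in the second pair barely moves the criterion:
print(correntropy([1.0, 0.0], [1.0, 100.0]))
```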
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
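The MISO throughput claim can be checked numerically: with no CSI at the transmitter and equal power snr/n_tx per antenna with uncorrelated signals, the effective SNR is (snr/n_tx)·‖h‖², whose reduced variance raises the ergodic rate over a single antenna at the same total power. A Monte Carlo sketch under i.i.d. Rayleigh block fading (parameters are illustrative):

```python
import math
import random

random.seed(0)
n_tx, snr, trials = 4, 10.0, 20_000

r_miso = r_siso = 0.0
for _ in range(trials):
    # Rayleigh fading: each |h_k|^2 is exponential with unit mean.
    g = [random.expovariate(1.0) for _ in range(n_tx)]
    # Equal power snr/n_tx per antenna, uncorrelated signals.
    r_miso += math.log2(1.0 + (snr / n_tx) * sum(g))
    # One transmit antenna at the same total power.
    r_siso += math.log2(1.0 + snr * g[0])

r_miso /= trials
r_siso /= trials
print(r_miso, r_siso)
```

The MISO average rate exceeds the single-antenna rate because log2(1 + x) is concave and the summed channel gain concentrates around its mean.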
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
Full Text Available The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
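For Joule heating, the entropy production rate is just the dissipated electrical power divided by the absolute temperature at which the heat is released. This is the back-of-envelope form of the global EPR, under the simplifying assumption (ours, for illustration) that all heat leaves at a single bath temperature T:

```python
def joule_epr(voltage, current, temperature):
    """Entropy production rate of Joule heating: P / T = V * I / T, in W/K.
    Assumes all dissipated heat is released at one bath temperature."""
    return voltage * current / temperature

# Illustrative numbers: 10 V, 0.5 A, room temperature.
print(joule_epr(10.0, 0.5, 300.0))
```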
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
徐革锋; 尹家胜; 韩英; 刘洋; 牟振波
2014-01-01
This study examined the effects of water temperature on the metabolic characteristics and aerobic exercise capacity of juvenile Manchurian trout, Brachymystax lenok (Pallas). The resting metabolic rate (RMR), maximum metabolic rate (MMR), metabolic scope (MS) and critical swimming speed (UCrit) of juveniles were measured at different temperatures (4, 8, 12, 16, 20 °C). The results showed that both RMR and MMR increased significantly with increasing water temperature (P<0.05). Compared with the 4 °C group, RMR at 8, 12, 16 and 20 °C increased by 62%, 165%, 390% and 411%, respectively, and MMR increased by 3%, 34%, 111% and 115%, respectively. However, MS decreased with increasing water temperature, with the highest MS occurring at 4 °C. UCrit was significantly affected by water temperature (P<0.05), but its variation did not follow a clear pattern with temperature. In the aerobic exercise test, the MMR at each temperature level occurred at a swimming speed of 70% UCrit, probably due to the onset of anaerobic metabolism, which caused excessive creatine in the body and consequently hindered aerobic metabolism.
40 CFR 35.2025 - Allowance and advance of allowance.
2010-07-01
... facilities planning and design of the project and Step 7 agreements will include an allowance for facility planning in accordance with appendix B of this subpart. (b) Advance of allowance to potential grant... grant applicants for facilities planning and project design. (2) The State may request that the right to...
2010-10-01
...) of this section, additional design features, such as mechanical or composite crack arrestors and/or... surface of the plate/coil or pipe to identify imperfections that impair serviceability such as laminations... must be a hardness test, using Vickers (Hv10) hardness test method or equivalent test method, to...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
2010-10-01
... integrity of the coating using direct current voltage gradient (DCVG) or alternating current voltage... pipeline segment. (ii) To address interference currents, perform the following: (A) Conduct an interference survey to detect the presence and level of any electrical current that could impact external...
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
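The exact maximum likelihood benchmark that near-ML detectors approximate is an exhaustive search: choose the symbol block whose noiseless channel output is closest to the received samples in squared error. A brute-force sketch for a short BPSK block over a hypothetical two-tap ISI channel (practical detectors avoid this exponential search):

```python
import itertools
import random

def channel_output(s, h):
    """Linear convolution of symbol block s with channel taps h, trimmed to len(s)."""
    return [sum(h[k] * s[i - k] for k in range(len(h)) if 0 <= i - k < len(s))
            for i in range(len(s))]

def ml_detect(y, h, n):
    """Brute-force ML sequence detection over all 2^n BPSK blocks:
    minimize the squared error between y and each candidate's channel output."""
    best, best_cost = None, float("inf")
    for bits in itertools.product((-1.0, 1.0), repeat=n):
        z = channel_output(bits, h)
        cost = sum((yi - zi) ** 2 for yi, zi in zip(y, z))
        if cost < best_cost:
            best, best_cost = bits, cost
    return best

random.seed(1)
h = [1.0, 0.5]                                    # hypothetical ISI channel taps
s = tuple(random.choice((-1.0, 1.0)) for _ in range(8))
y = [zi + random.gauss(0.0, 0.05) for zi in channel_output(s, h)]
print(ml_detect(y, h, 8) == s)
```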
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Alixon David Reyes Rodríguez
2011-06-01
theoretical points of reference that responded to earlier scientific needs but are now insufficient. It has been observed in national and international conferences, seminars, research meetings, in our universities, and in different kinds of scientific gatherings that some obsolete assumptions are still being taught, which slows down progress in the Education Sciences and Sports Science. We recognize that some predictive formulas used to calculate the estimated maximum heart rate (EMHR) represented progress for Exercise Science and Exercise Physiology at some point; however, there are important aspects that should be considered. It is not that we dismiss them, but we intend to demonstrate and demystify the use of the traditional formula as almost the only calculation and measurement pattern for EMHR and to offer, from the perspective of other researchers, better possibilities of exercise dosage for certain populations with particular characteristics.
Maximum hydrogen production from genetically modified microalgae biomass
Vargas, Jose; Kava, Vanessa; Ordonez, Juan
A transient mathematical model for managing microalgae derived H2 production as a source of renewable energy is developed for a well stirred photobioreactor, PBR. The model allows for the determination of microalgae and H2 mass fractions produced by the PBR in time. A Michaelis-Menten expression is proposed for modeling the rate of H2 production, which introduces an expression to calculate the resulting effect on H2 production rate after genetically modifying the microalgae. The indirect biophotolysis process was used. Therefore, an opportunity was found to optimize the aerobic to anaerobic stages time ratio of the cycle for maximum H2 production rate, i.e., the process rhythm. A system thermodynamic optimization is conducted with the model equations to find accurately the optimal system operating rhythm for maximum H2 production rate, and how wild and genetically modified species compare to each other. The maxima found are sharp, showing up to a ~60% variation in hydrogen production rate within 2 days around the optimal rhythm, which highlights the importance of system operation in such condition. Therefore, the model is expected to be useful for design, control and optimization of H2 production. Brazilian National Council of Scientific and Technological Development, CNPq (project 482336/2012-9).
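The Michaelis-Menten form proposed above for the H2 production rate is r(s) = v_max·s / (k_m + s). In this kind of parameterization, a genetic modification that improves the pathway can be represented as rescaling v_max; the function and parameter names below are a hypothetical illustration of that structure, not the paper's actual model:

```python
def h2_rate(s, v_max, k_m):
    """Michaelis-Menten expression for the H2 production rate (arbitrary units):
    saturates at v_max for large substrate s, half-maximal at s = k_m.
    A genetic modification can be modeled as a factor multiplying v_max."""
    return v_max * s / (k_m + s)

# Half-saturation check and near-saturation behavior (illustrative values):
print(h2_rate(2.0, 1.0, 2.0), h2_rate(1e9, 1.0, 2.0))
```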
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
张戈
2015-01-01
We study the issue raised in Reference [3]. Under appropriate assumptions and other smoothness conditions, and using a simpler method, we prove the asymptotic existence of solutions of the quasi-likelihood equations in quasi-likelihood nonlinear models, and we establish the rate at which the solution converges to the true value.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
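The core of the MLE comparison described above is a Poisson likelihood evaluated for a background-only model and a background-plus-source model. The toy sketch below (made-up counts and models, not CSC data or the actual Sherpa API) shows the likelihood-ratio idea in one dimension:

```python
import math

def poisson_loglike(counts, model):
    # log L = sum_i [ n_i * log(m_i) - m_i - log(n_i!) ]
    return sum(n * math.log(m) - m - math.lgamma(n + 1)
               for n, m in zip(counts, model))

# Toy 1-D "region": flat background of 2 counts/bin, plus a possible source
# concentrated in the central bins (all numbers are illustrative).
counts = [2, 3, 1, 8, 12, 7, 2, 1, 3, 2]
bkg_only = [2.0] * 10
bkg_plus_src = [2.0, 2.0, 2.0, 5.0, 11.0, 5.0, 2.0, 2.0, 2.0, 2.0]

ll_bkg = poisson_loglike(counts, bkg_only)
ll_src = poisson_loglike(counts, bkg_plus_src)
# Likelihood-ratio style statistic: larger values favor the source hypothesis.
lr = 2.0 * (ll_src - ll_bkg)
```

In the real tool the source model is a Gaussian convolved with the per-observation Chandra PSF and fit simultaneously across the stack; the toy keeps only the two-hypothesis comparison.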
Vietnam recommended dietary allowances 2007.
Khan, Nguyen Cong; Hoan, Pham Van
2008-01-01
It has been well acknowledged that Vietnam is undergoing a nutrition transition. With a rapid change in the country's reform and economic growth, food supply at the macronutrient level has improved. Changes of the Vietnamese diet include significantly more foods of animal origin, and an increase of fat/oils, and ripe fruits. Consequently, nutritional problems in Vietnam now include not only malnutrition but also overweight/obesity, metabolic syndrome and other chronic diseases related to nutrition and lifestyles. The recognition of these shifts, which is also associated with morbidity and mortality, was a major factor in the need to review and update the Recommended Dietary Allowances (RDA) for the Vietnamese population. This revised RDA established an important science-based tool for evaluation of nutrition adequacy, for teaching, and for scientific communications within Vietnam. It is expected that the 2007 Vietnam RDA and its conversion to food-based dietary guidelines will facilitate education to the public, as well as the policy implementation of programs for prevention of non-communicable chronic diseases and addressing the double burden of both under and over nutrition.
Gervais, V.
2004-11-01
The subject of this report is the study and simulation of a model describing the infill of sedimentary basins on large scales in time and space. It simulates the evolution through time of the sediment layer in terms of geometry and rock properties. A parabolic equation is coupled to a hyperbolic equation by an input boundary condition at the top of the basin. The model also considers a unilaterality constraint on the erosion rate. In the first part of the report, the mathematical model is described and particular solutions are defined. The second part deals with the definition of numerical schemes and the simulation of the model. In the first chapter, finite volume numerical schemes are defined and studied. The Newton algorithm adapted to the unilateral constraint used to solve the schemes is given, followed by numerical results in terms of performance and accuracy. In the second chapter, a preconditioning strategy to solve the linear system by an iterative solver at each Newton iteration is defined, and numerical results are given. In the last part, a simplified model is considered in which a variable is decoupled from the other unknowns and satisfies a parabolic equation. A weak formulation is defined for the remaining coupled equations, for which the existence of a unique solution is obtained. The proof uses the convergence of a numerical scheme. (author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Calculation of Maximum Waste Heat and Recovery Rate of Liquid and Gas Fuels
丛永杰
2016-01-01
The consumption of various liquid oil and gas fuels is growing rapidly within China's energy structure. The flue gas discharged after these fuels are combusted is generally at 160℃~180℃; although its grade is low, this energy can be used as secondary energy. Liquid and gas fuels contain a large amount of hydrogen, so water vapor is a main ingredient of the flue gas. In this paper, the waste heat quantity and recovery rate of the flue gas of 0# light diesel oil and natural gas are calculated for cooling from 180℃ to 25℃ at 1 atm. Water vapor accounts for a large share of the low-temperature flue gas waste heat: about 55.08% for 0# light diesel and about 79.41% for natural gas. Moreover, the latent heat is about 3/4 of the vapor's heat. Therefore, recovering the latent heat of water vapor is of great significance for the recovery of low-temperature waste heat; effective recovery amounts to secondary use of primary energy and is consistent with the national energy-saving and emission-reduction policies of the 13th Five-Year Plan period.
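A hedged back-of-envelope version of this kind of calculation can be sketched for natural gas taken as pure methane with stoichiometric air. All property values below are rounded textbook numbers, not the paper's data, so the resulting vapor share (about 71%) comes out somewhat below the paper's 79.41%:

```python
# Stoichiometry per kg CH4: 2.25 kg H2O and 2.75 kg CO2 produced; 4.0 kg O2
# consumed, bringing along roughly 13.2 kg N2 from the air.  All values are
# rounded textbook figures used purely for illustration.
m_h2o, m_co2, m_n2 = 2.25, 2.75, 13.2        # kg per kg of fuel
cp_h2o, cp_co2, cp_n2 = 1.9, 0.9, 1.04       # kJ/(kg*K), rough mean values
latent = 2442.0                               # kJ/kg, condensation heat near 25 C
dT = 180.0 - 25.0                             # flue gas cooled from 180 C to 25 C

sensible = dT * (m_h2o * cp_h2o + m_co2 * cp_co2 + m_n2 * cp_n2)
latent_heat = m_h2o * latent
total = sensible + latent_heat
vapor_heat = m_h2o * cp_h2o * dT + latent_heat
vapor_share = vapor_heat / total   # fraction of recoverable heat in the vapor
```

Even with these crude numbers, the latent heat of the vapor dominates the low-temperature waste heat, which is the paper's central point.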
A finite horizon production model with variable production rates and constant demand rate
2002-01-01
In this paper we present a finite horizon single product single machine production problem. Demand rate and all the cost patterns do not change over time. However, end of horizon effects may require production rate adjustments at the beginning of each cycle. It is found that no such adjustments are required. The machine should be operated either at minimum speed (i.e. production rate = demand rate; shortage is not allowed), avoiding the buildup of any inventory, or at maximum s...
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
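A much simpler diagnostic than the paper's maximum-entropy test, shown here only to make the tail-exponent idea concrete, is the Hill estimator, which recovers the exponent of a Pareto tail from the upper order statistics. The sketch uses a deterministic "sample" built from exact Pareto quantiles:

```python
import math

# Deterministic Pareto "sample" via the inverse CDF on a uniform grid
# (x = u^(-1/alpha)), so the Hill estimate should recover alpha.
alpha = 1.5
n = 10000
xs = sorted(((i + 0.5) / n) ** (-1.0 / alpha) for i in range(n))

k = 500                 # number of upper order statistics used
top = xs[-k:]
x_k = xs[-k - 1]        # tail threshold: the (k+1)-th largest value
hill = k / sum(math.log(x / x_k) for x in top)
```

The paper's maximum-entropy approach goes further: it can identify the data-generating process even when it is neither lognormal nor Pareto, which a tail-only estimator like Hill's cannot.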
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
24 CFR 891.440 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... Project Management § 891.440 Adjustment of utility allowances. This section shall apply to projects funded... submit an analysis of any utility allowances applicable. Such data as changes in utility rates and...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
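The key trick in the abstract is approximating the nonlinear entropy objective with piecewise linear segments so the restoration can be posed as a linear program. The sketch below builds only that approximation for one entropy term and measures its error; the LP solve itself is omitted, and the segment count is an arbitrary illustrative choice:

```python
import math

def entropy_term(x):
    # The (nonlinear) entropy contribution of one variable.
    return -x * math.log(x) if x > 0 else 0.0

def piecewise_linear(x, knots):
    # Linear interpolation of entropy_term between consecutive knots,
    # i.e. the piecewise linear surrogate an LP can optimize.
    for a, b in zip(knots, knots[1:]):
        if a <= x <= b:
            t = (x - a) / (b - a)
            return (1 - t) * entropy_term(a) + t * entropy_term(b)
    raise ValueError("x outside knot range")

knots = [i / 20 for i in range(1, 21)]          # 0.05 ... 1.00
worst = max(abs(entropy_term(x) - piecewise_linear(x, knots))
            for x in [0.05 + 0.001 * j for j in range(901)])
# With 20 segments the surrogate tracks -x*log(x) closely on [0.05, 1].
```

As the abstract notes, the number of segments then dictates how tightly the variables must be bounded in the resulting linear program.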
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. ... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence ...
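A crude stand-in for the estimator described above (not the paper's actual ML derivation) is a grid search over candidate fundamentals, scoring each by the harmonic power summed across channels; the channels share f0 but have their own amplitudes and phases:

```python
import math

fs = 8000.0      # sample rate, Hz (illustrative)
n = 512          # samples per channel
true_f0 = 200.0  # shared fundamental frequency, Hz

def make_channel(amp, phase):
    # Two harmonics of the shared fundamental, channel-specific amp/phase.
    return [amp * (math.sin(2 * math.pi * true_f0 * t / fs + phase)
                   + 0.5 * math.sin(2 * math.pi * 2 * true_f0 * t / fs))
            for t in range(n)]

channels = [make_channel(1.0, 0.0), make_channel(0.3, 1.2)]

def harmonic_power(x, f0, harmonics=2):
    # Power of x projected onto sin/cos at each harmonic of f0.
    p = 0.0
    for h in range(1, harmonics + 1):
        c = sum(v * math.cos(2 * math.pi * h * f0 * t / fs)
                for t, v in enumerate(x))
        s = sum(v * math.sin(2 * math.pi * h * f0 * t / fs)
                for t, v in enumerate(x))
        p += c * c + s * s
    return p

candidates = [100.0 + 2.0 * k for k in range(151)]   # 100 .. 400 Hz grid
est_f0 = max(candidates,
             key=lambda f: sum(harmonic_power(x, f) for x in channels))
```

Summing the score across channels is what lets one channel with a poor signal-to-noise ratio be carried by the others, the main benefit the abstract highlights.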
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
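The fixed-point idea mentioned above can be sketched for the simplest case: a Gaussian line of amplitude A on a known flat background b, with Poisson likelihood. The ML stationarity condition is sum_i n_i*g_i/(b + A*g_i) = sum_i g_i, which the loop below iterates (the exact iteration used in CORA may differ; all numbers are illustrative and the "observed" counts are noise-free):

```python
import math

bins = range(-10, 11)
g = [math.exp(-0.5 * (x / 2.0) ** 2) for x in bins]   # Gaussian line profile
b = 1.0                                                # known flat background
true_A = 5.0
n_obs = [b + true_A * gi for gi in g]                  # noise-free counts

A = 1.0  # starting guess for the line amplitude
for _ in range(200):
    num = sum(ni * gi / (b + A * gi) for ni, gi in zip(n_obs, g))
    den = sum(g)
    A *= num / den   # fixed point at the ML amplitude
```

At the fixed point the multiplicative factor num/den equals 1, which is exactly the Poisson ML condition for the line flux.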
44 CFR 208.12 - Maximum Pay Rate Table.
2010-10-01
... reimbursement and Backfill, for the System Member's actual compensation or the actual compensation of the... OF HOMELAND SECURITY DISASTER ASSISTANCE NATIONAL URBAN SEARCH AND RESCUE RESPONSE SYSTEM General...
Entrainment and maximum vapour flow rate of trays
Van Sinderen, AH; Wijn, EF; Zanting, RWJ
This is a report on free entrainment measurements in a small (0.20 m x 0.20 m) air-water column. An adjustable weir controlled the liquid height on a test tray. Several sieve and valve trays were studied. The results were interpreted with a two- or three-layer model of the two-phase mixture on the
MAXIMUM PRODUCTION OF TRANSMISSION MESSAGES RATE FOR SERVICE DISCOVERY PROTOCOLS
Intisar Al-Mejibli
2011-12-01
Minimizing the number of dropped User Datagram Protocol (UDP) messages in a network is regarded as a challenge by researchers. This issue represents a serious problem for many protocols, particularly those that depend on sending messages as part of their strategy, such as service discovery protocols. This paper proposes and evaluates an algorithm to predict the minimum period of time required between two or more consecutive messages, and suggests minimum queue sizes for the routers, to manage the traffic and minimise the number of dropped messages caused by congestion, queue overflow, or both. The algorithm has been applied to the Universal Plug and Play (UPnP) protocol using the ns2 simulator. It was tested with the routers connected in two configurations, centralized and decentralized. The message length and the bandwidth of the links among the routers were taken into consideration. The results show an improvement in the number of dropped messages among the routers.
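The kind of calculation the abstract describes can be sketched in its simplest form: the serialization time of one message on the bottleneck link lower-bounds the safe gap between consecutive UDP messages, and a finite queue only absorbs a limited burst. The names and formulas below are illustrative, not the paper's exact algorithm:

```python
def min_gap_seconds(msg_bytes, bandwidth_bps):
    # Time the bottleneck link needs to serialize one message; sending
    # faster than this sustained rate must eventually drop messages.
    return msg_bytes * 8.0 / bandwidth_bps

def burst_capacity(queue_size, burst_len):
    # Messages surviving an instantaneous burst: the queue absorbs
    # queue_size of them, plus the one in service.
    return min(burst_len, queue_size + 1)

gap = min_gap_seconds(512, 1_000_000)   # 512-byte message on a 1 Mb/s link
```

For a 512-byte message on a 1 Mb/s link this gives a gap of about 4.1 ms; the paper's contribution is predicting such spacings and queue sizes across multi-router topologies.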
Veteran athletes exercise at higher maximum heart rates than are ...
questionnaire, a full medical examination and a routine sECG. Thereafter ... activities than during stress testing in the laboratory (P < 0.01). ... After the risks and procedures involved ... for the first time in either rehabilitation or sporting activities.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
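For reference, the maximum-force value c^4/4G that these papers propose to abolish has a definite numerical size, computed here from standard constants:

```python
# Numerical value of the conjectured relativistic maximum force/tension
# F_max = c^4 / (4G), using standard values of c and G.
c = 2.99792458e8        # speed of light, m/s (exact by definition)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2 (CODATA)
F_max = c ** 4 / (4 * G)   # about 3.0e43 newtons
```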
Mechanical Sun-Tracking Technique Implemented for Maximum ...
The solar panel is allowed to move from east to west and back and forth with a maximum allowable angle of 180°. Its movement is in only one axis. The prototype built carries the panel from eastward to westward, tracking the sun's movement from ...
Allowable carbon emissions for medium-to-high mitigation scenarios
Tachiiri, Kaoru; Hargreaves, Julia C.; Annan, James D.; Kawamiya, Michio [Research Inst. for Global Change, Japan Agency for Marine-Earth Science and Technology, Yokohama, (Japan)], e-mail: tachiiri@jamstec.go.jp; Huntingford, Chris [Centre for Ecology and Hydrology, Wallingford (United Kingdom)
2013-11-15
source to the atmosphere, although uncertainties on this are large. The parameters which most significantly affect the allowable emissions are aerosols and climate sensitivity, but some carbon-cycle related parameters (e.g. the maximum photosynthetic rate and the temperature dependence of vegetation respiration) also have significant effects. Parameter values are constrained by observation, and we found that the CO2 emission data had a significant effect in constraining climate sensitivity and the magnitude of aerosol radiative forcing.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
13 CFR 120.213 - What fixed interest rates may a Lender charge?
2010-01-01
... Lender charge? 120.213 Section 120.213 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION... have a reasonable fixed interest rate. SBA periodically publishes the maximum allowable rate in the... government determines the interest rate on direct loans. SBA publishes the rate periodically in the...
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument, $f$ it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies in the heart of several important identities in microeconomics, such as Roy's identity, Sheppard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
Sun, Jindong; Feng, Zhaozhong; Leakey, Andrew D B; Zhu, Xinguang; Bernacchi, Carl J; Ort, Donald R
2014-09-01
The responses of CO2 assimilation to [CO2] (A/Ci) were investigated at two developmental stages (R5 and R6) and in several soybean cultivars grown under two levels of CO2, the ambient level of 370 μbar versus the elevated level of 550 μbar. The A/Ci data were analyzed and compared by either the combined iterations or the separated iterations of the Rubisco-limited photosynthesis (Ac) and/or the RuBP-limited photosynthesis (Aj) using various curve-fitting methods: the linear 2-segment model; the non-rectangular hyperbola model; the rectangular hyperbola model; the constant rate of electron transport (J) method; and the variable J method. Inconsistency was found among the various methods for the estimation of the maximum rate of carboxylation (Vcmax), the mitochondrial respiration rate in the light (Rd), and mesophyll conductance (gm). The analysis showed that the inconsistency was due to inconsistent estimates of gm values, which decreased with an instantaneous increase in [CO2] and varied with the transition Ci cut-off between Rubisco-limited photosynthesis and RuBP-regeneration-limited photosynthesis, and due to over-parameterization of the non-linear curve fitting with gm included. We propose an alternate solution to A/Ci curve-fitting for estimates of Vcmax, Rd, Jmax and gm with the various A/Ci curve-fitting methods. The study indicated that down-regulation of photosynthetic capacity by elevated [CO2] and leaf aging was due partly to the decrease in the maximum rates of carboxylation and partly to the decrease in gm. Mesophyll conductance lowered photosynthetic capacity by 18% on average for the soybean plants studied.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
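The Kirchhoff index can be computed directly from the Laplacian spectrum via the standard identity Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu. The sketch below (generic code, not tied to the paper's extremal characterization) checks it on the 4-cycle, which is itself a cactus:

```python
import numpy as np

def kirchhoff_index(adj):
    # Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues.
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    mu = np.linalg.eigvalsh(lap)
    nonzero = mu[mu > 1e-9]
    return adj.shape[0] * np.sum(1.0 / nonzero)

# 4-cycle C4: Laplacian eigenvalues are 0, 2, 2, 4, so Kf = 4*(1/2+1/2+1/4) = 5,
# matching the pairwise resistance-distance sum.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
kf = kirchhoff_index(c4)
```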
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
46 CFR 154.421 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.421 Section 154.421 Shipping COAST... § 154.421 Allowable stress. The allowable stress for the integral tank structure must meet the American Bureau of Shipping's allowable stress for the vessel's hull published in “Rules for Building and Classing...
46 CFR 154.440 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.440 Section 154.440 Shipping COAST... Tank Type A § 154.440 Allowable stress. (a) The allowable stresses for an independent tank type A must... Commandant (CG-522). (b) A greater allowable stress than required in paragraph (a)(1) of this section may be...
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
38 CFR 21.9670 - Work-study allowance.
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Work-study allowance. 21... rate of pursuit of at least 75 percent may receive a work-study allowance in accordance with the...) VOCATIONAL REHABILITATION AND EDUCATION Post-9/11 GI Bill Payments-Educational Assistance § 21.9670...
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
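The regularizer's key quantity, the mutual information between the classification response and the true label, can be estimated from empirical frequencies for discrete responses. The sketch below (illustrative plug-in estimate, not the paper's entropy-estimation-based objective) contrasts a perfectly informative response with an independent one:

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    # Plug-in estimate of I(response; label) in nats from empirical counts.
    n = len(labels)
    p_r = Counter(responses)
    p_l = Counter(labels)
    p_rl = Counter(zip(responses, labels))
    mi = 0.0
    for (r, l), c in p_rl.items():
        p_joint = c / n
        # log of p(r,l) / (p(r) * p(l)), expressed via raw counts:
        mi += p_joint * math.log(c * n / (p_r[r] * p_l[l]))
    return mi

labels  = [0, 0, 0, 0, 1, 1, 1, 1]
perfect = [0, 0, 0, 0, 1, 1, 1, 1]   # responses identical to the labels
useless = [0, 1, 0, 1, 0, 1, 0, 1]   # responses independent of the labels
mi_perfect = mutual_information(perfect, labels)   # = log(2) nats
mi_useless = mutual_information(useless, labels)   # = 0
```

Maximizing this quantity during training is exactly the sense in which the learned response should "reduce the uncertainty" of the true label.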
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unité EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Modeling the Mass Action Dynamics of Metabolism with Fluctuation Theorems and Maximum Entropy
Cannon, William; Thomas, Dennis; Baxter, Douglas; Zucker, Jeremy; Goh, Garrett
The laws of thermodynamics dictate the behavior of biotic and abiotic systems. Simulation methods based on statistical thermodynamics can provide a fundamental understanding of how biological systems function and are coupled to their environment. While mass action kinetic simulations are based on solving ordinary differential equations using rate parameters, analogous thermodynamic simulations of mass action dynamics are based on modeling states using chemical potentials. The latter have the advantage that standard free energies of formation/reaction and metabolite levels are much easier to determine than rate parameters, allowing one to model across a large range of scales. Bridging theory and experiment, statistical thermodynamics simulations allow us to both predict activities of metabolites and enzymes and use experimental measurements of metabolites and proteins as input data. Even if metabolite levels are not available experimentally, it is shown that a maximum entropy assumption is quite reasonable and in many cases results in both the most energetically efficient process and the highest material flux.
Clean Air Markets - Allowances Query Wizard
U.S. Environmental Protection Agency — The Allowances Query Wizard is part of a suite of Clean Air Markets-related tools that are accessible at http://camddataandmaps.epa.gov/gdm/index.cfm. The Allowances...
Allowance Holdings and Transfers Data Inventory
U.S. Environmental Protection Agency — The Allowance Holdings and Transfers Data Inventory contains measured data on holdings and transactions of allowances under the NOx Budget Trading Program (NBP), a...
46 CFR 154.428 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.428 Section 154.428 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR... § 154.428 Allowable stress. The membrane tank and the supporting insulation must have allowable stresses...
46 CFR 154.447 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.447 Section 154.447 Shipping COAST... Tank Type B § 154.447 Allowable stress. (a) An independent tank type B designed from bodies of revolution must have allowable stresses 3 determined by the following formulae: 3 See Appendix B for stress...
32 CFR 584.7 - Basic allowance for quarters.
2010-07-01
... CUSTODY, AND PATERNITY § 584.7 Basic allowance for quarters. (a) Eligibility. (1) Soldiers entitled to...-dependents rate each month in support of their families. (See DODPM, part 3.) This is so even if a divorce... spouse or stepchildren after the divorce. BAQ at the “with dependents” rate is not authorized when...
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm
Megchelenbrink, Wout; Rossell, Sergio; Huynen, Martijn A.
2015-01-01
Motivation: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determine the space of allowable reaction rates. This space is often large and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed to select flux distributions that are in agreement with experimental “omics” data, or by incorporating experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome-scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more “flexible” metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used, that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Results: Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that i) indeed, most of the measured fluxes agree with a high adaptability of the network, ii) this result can be used to further reduce the space of feasible solutions, and iii) this reduced space improves the quantitative predictions made by FBA.
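The FBA step referenced above is a linear program: maximize an objective flux subject to steady-state stoichiometry S·v = 0 and capacity bounds. A minimal sketch on a hypothetical three-reaction toy network (uptake → A → B → biomass, not the genome-scale models used in the paper), assuming SciPy is available:

```python
# Toy flux balance analysis: maximize the "biomass" flux v3 subject to
# steady state S.v = 0 and an uptake capacity bound on v1.
from scipy.optimize import linprog

S = [[1, -1,  0],   # metabolite A: produced by v1, consumed by v2
     [0,  1, -1]]   # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, None), (0, None)]  # uptake v1 capped at 10 units

# linprog minimizes, so negate the biomass flux v3 to maximize it
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds,
              method="highs")
print(res.x)  # steady state forces v1 = v2 = v3; all limited by uptake
```

At genome scale the same structure holds, only with thousands of reactions and metabolites; methods like MMF then select among the many alternate optima this program typically has.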
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
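The unit conversion behind the quoted figure can be checked directly, assuming the nominal solar mass of 1.989 × 10^30 kg:

```python
# Convert the paper's maximum iron core mass to solar masses.
M_CORE_KG = 2.69e30          # maximum iron core mass from the abstract
M_SUN_KG = 1.989e30          # nominal solar mass
ratio = M_CORE_KG / M_SUN_KG
print(round(ratio, 2))       # → 1.35, matching the abstract
```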
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
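For scale, the textbook deterministic baseline for the bipartite case (Kuhn's augmenting-path algorithm, O(V·E); this is not the Glauber-dynamics sampler proposed in the abstract) can be sketched as:

```python
def max_bipartite_matching(adj, n_right):
    """Maximum cardinality matching in a bipartite graph via augmenting
    paths (Kuhn's algorithm). adj[u] lists the right-side neighbours of
    left vertex u; returns the size of a maximum matching."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its partner can be re-routed elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# 3x3 bipartite graph that admits a perfect matching
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))  # → 3
```

Faster algorithms such as Hopcroft-Karp and the randomized approaches discussed above improve on this simple scheme asymptotically.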
Finding All Allowed Edges in a Bipartite Graph
Tassa, Tamir
2011-01-01
We consider the problem of finding all allowed edges in a bipartite graph $G=(V,E)$, i.e., all edges that are included in some maximum matching. We show that given any maximum matching in the graph, it is possible to perform this computation in linear time $O(n+m)$ (where $n=|V|$ and $m=|E|$). Hence, the time complexity of finding all allowed edges reduces to that of finding a single maximum matching, which is $O(n^{1/2}m)$ [Hopcroft and Karp 1973], or $O((n/\log n)^{1/2}m)$ for dense graphs with $m=\Theta(n^2)$ [Alt et al. 1991]. This time complexity improves upon that of the best known algorithms for the problem, which is $O(nm)$ ([Costa 1994] for bipartite graphs, and [Carvalho and Cheriyan 2005] for general graphs). Other algorithms for solving that problem are randomized algorithms due to [Rabin and Vazirani 1989] and [Cheriyan 1997], the runtime of which is $\tilde{O}(n^{2.376})$. Our algorithm, apart from being deterministic, improves upon that time complexity for bipartite graphs when $m=O(n^r)$ and $...
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions.
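The constraint rule in its classic form (Jaynes' Brandeis dice, shown here as an illustrative sketch): the maximum-entropy distribution over die faces subject to a prescribed mean is exponential in the face value, with the Lagrange multiplier found numerically:

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """Maximum-entropy distribution over die faces 1..6 with a prescribed
    mean: p_i ∝ exp(-lam * i), lam found by bisection on the mean."""
    faces = range(1, 7)

    def mean(lam):
        w = [math.exp(-lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

    while hi - lo > tol:          # mean(lam) is decreasing in lam
        mid = (lo + hi) / 2
        if mean(mid) > target_mean:
            lo = mid              # need a larger multiplier
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

# With the unconstrained mean 3.5, MaxEnt recovers the uniform distribution
p = maxent_die(3.5)
print([round(x, 3) for x in p])
```

With a biased mean such as 4.5, the same rule yields a tilted (exponential-family) distribution, which is the standard example of how a moment constraint shapes the MaxEnt assignment.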
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though the objects of interest may be moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
20 CFR 617.47 - Moving allowance.
2010-04-01
... Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TRADE ADJUSTMENT ASSISTANCE... goods and personal effects of an individual and family, if any, shall not exceed the maximum number of... include the cost of insuring such goods and effects for their actual value or $10,000, whichever is...
FastTree 2 - approximately maximum-likelihood trees for large alignments.
Morgan N Price
Full Text Available BACKGROUND: We recently described FastTree, a tool for inferring phylogenies for alignments with up to hundreds of thousands of sequences. Here, we describe improvements to FastTree that improve its accuracy without sacrificing scalability. METHODOLOGY/PRINCIPAL FINDINGS: Where FastTree 1 used nearest-neighbor interchanges (NNIs) and the minimum-evolution criterion to improve the tree, FastTree 2 adds minimum-evolution subtree-pruning-regrafting (SPRs) and maximum-likelihood NNIs. FastTree 2 uses heuristics to restrict the search for better trees and estimates a rate of evolution for each site (the "CAT" approximation). Nevertheless, for both simulated and genuine alignments, FastTree 2 is slightly more accurate than a standard implementation of maximum-likelihood NNIs (PhyML 3 with default settings). Although FastTree 2 is not quite as accurate as methods that use maximum-likelihood SPRs, most of the splits that disagree are poorly supported, and for large alignments, FastTree 2 is 100-1,000 times faster. FastTree 2 inferred a topology and likelihood-based local support values for 237,882 distinct 16S ribosomal RNAs on a desktop computer in 22 hours and 5.8 gigabytes of memory. CONCLUSIONS/SIGNIFICANCE: FastTree 2 allows the inference of maximum-likelihood phylogenies for huge alignments. FastTree 2 is freely available at http://www.microbesonline.org/fasttree.
Ground movement at Somma-Vesuvius from Last Glacial Maximum
Marturano, Aldo; Aiello, Giuseppe; Barra, Diana; Fedele, Lorenzo; Morra, Vincenzo
2012-01-01
Detailed micropalaeontological and petrochemical analyses of rock samples from two boreholes drilled at the archaeological excavations of Herculaneum, ~ 7 km west of the Somma-Vesuvius crater, allowed reconstruction of the Late Quaternary palaeoenvironmental evolution of the site. The data provide clear evidence for ground uplift movements involving the studied area. The Holocenic sedimentary sequence on which the archaeological remains of Herculaneum rest has risen several meters at an average rate of ~ 4 mm/yr. The uplift has involved the western apron of the volcano and the Sebeto-Volla Plain, a populous area including the eastern suburbs of Naples. This is consistent with earlier evidence for similar uplift for the areas of Pompeii and the Sarno valley (SE of the volcano) and the Somma-Vesuvius eastern apron. An axisymmetric deep source of strain is considered responsible for the long-term uplift affecting the whole Somma-Vesuvius edifice. The deformation pattern can be modeled as a single pressure source, sited in the lower crust and surrounded by a shell of Maxwell viscoelastic medium, which experienced a pressure pulse that began at the Last Glacial Maximum.
Approximate Maximum Likelihood Commercial Bank Loan Management Model
Godwin N.O. Asemota
2009-01-01
Full Text Available Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some elements of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enabled us to adaptively control loan demand as well as fluctuating cash balances in the bank. However, this loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest this excess cash as loans to earn more assets without jeopardizing public confidence.
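The estimator family referenced above (approximate maximum likelihood with a forgetting factor) is closely related to recursive least squares, where older observations are exponentially discounted. A minimal stand-in with a fixed forgetting factor, for a hypothetical scalar model (the paper uses a variable forgetting factor and a bank-specific model):

```python
def rls_forgetting(xs, ys, lam=0.98, theta0=0.0, p0=1000.0):
    """Recursive least squares with forgetting factor lam for the scalar
    model y_t = theta * x_t + noise. Observations older than the current
    one are discounted by lam at every step, so the estimate can track
    slowly drifting parameters."""
    theta, p = theta0, p0
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)    # gain
        theta += k * (y - theta * x)     # innovation update
        p = (p - k * x * p) / lam        # covariance update
    return theta

# Noise-free data with true slope 2.0: the estimate converges to it
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * x for x in xs]
print(round(rls_forgetting(xs, ys), 3))  # → 2.0
```

Making lam itself adaptive (larger when the residuals are small, smaller when they grow) gives the variable-forgetting behavior the abstract describes.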
45 CFR 74.27 - Allowable costs.
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Allowable costs. 74.27 Section 74.27 Public..., AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 74.27 Allowable costs. (a) For each kind of recipient, there is a particular set of Federal principles...
28 CFR 100.11 - Allowable costs.
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Allowable costs. 100.11 Section 100.11 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) COST RECOVERY REGULATIONS, COMMUNICATIONS ASSISTANCE FOR LAW ENFORCEMENT ACT OF 1994 § 100.11 Allowable costs. (a) Costs that are eligible...
20 CFR 633.303 - Allowable costs.
2010-04-01
... occupation trained for and at not less than the wage specified in the agreement. (g) Travel costs. (1) The... to the overall administrative cost ceiling. (i) Allowances and reimbursements for board and advisory... grantee per quarter. (2) Allowances and loss of wages. Any individual or family member who is a member of...
75 FR 4098 - Utility Allowance Adjustments
2010-01-26
... URBAN DEVELOPMENT Utility Allowance Adjustments AGENCY: Office of the Chief Information Officer, HUD... are required to advise the Secretary of the need for and request of a new utility allowance for... whether the information will have practical utility; (2) Evaluate the accuracy of the agency's estimate...
44 CFR 13.22 - Allowable costs.
2010-10-01
... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 13.22 Allowable costs. (a... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each...
32 CFR 33.22 - Allowable costs.
2010-07-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... Post-Award Requirements Financial Administration § 33.22 Allowable costs. (a) Limitation on use of... allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind of...
36 CFR 1207.22 - Allowable costs.
2010-07-01
... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... GOVERNMENTS Post-Award Requirements Financial Administration § 1207.22 Allowable costs. (a) Limitation on use... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each...
34 CFR 74.27 - Allowable costs.
2010-07-01
... Procedures or uniform cost accounting standards that comply with cost principles acceptable to ED. (b) The... OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial... principles for determining allowable costs. Allowability of costs are determined in accordance with the...
5 CFR 180.104 - Allowable claims.
2010-01-01
... mobile homes may be allowed only in cases of collision, theft, or vandalism. (5) Money. Claims for money... claimant's supervisor. (4) Mobile homes. Claims may be allowed for damage to or loss of mobile homes and their contents under the provisions of § 180.104(c)(2). Claims for structural damage to mobile...
38 CFR 49.27 - Allowable costs.
2010-07-01
... ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 49.27 Allowable...-Profit Organizations.” The allowability of costs incurred by institutions of higher education...
20 CFR 435.27 - Allowable costs.
2010-04-01
... AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, OTHER NON-PROFIT ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 435.27 Allowable costs. For each kind... Organizations.” (c) Allowability of costs incurred by institutions of higher education is determined...
28 CFR 70.27 - Allowable costs.
2010-07-01
... AND AGREEMENTS (INCLUDING SUBAWARDS) WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 70.27 Allowable costs. (a... Organizations.” The allowability of costs incurred by institutions of higher education is determined...
15 CFR 14.27 - Allowable costs.
2010-01-01
... GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, OTHER NON-PROFIT, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 14.27 Allowable costs. For each kind of... Organizations.” The allowability of costs incurred by institutions of higher education is determined...
24 CFR 17.43 - Allowable claims.
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Allowable claims. 17.43 Section 17.43 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development..., superior authority. (6) Clothing and accessories. Claims may be allowed for damage to, or loss of, clothing...
29 CFR 1470.22 - Allowable costs.
2010-07-01
... to that circular 48 CFR part 31. Contract Cost Principles and Procedures, or uniform cost accounting... grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set of Federal principles for determining allowable costs. Allowable costs will be determined in accordance...
45 CFR 2541.220 - Allowable costs.
2010-10-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... the grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set of Federal principles for determining allowable costs. Allowable costs will be determined...
郇浩; 陶选如; 陶然; 程小康; 董朝; 李鹏飞
2014-01-01
In high-dynamic environments the received signal contains a large Doppler frequency and Doppler frequency rate-of-change, and traditional carrier tracking loops struggle to balance dynamic performance against tracking accuracy. To reach a compromise between the two, a fast maximum likelihood estimation method for the Doppler frequency rate-of-change is proposed in this paper, and the estimated value is used to aid the carrier tracking loop. First, it is pointed out that maximum likelihood estimation of the Doppler frequency and Doppler frequency rate-of-change is equivalent to the Fractional Fourier Transform (FrFT). Second, to avoid the large computational cost of a two-dimensional search over Doppler frequency and Doppler frequency rate-of-change, an estimation method combining instantaneous self-correlation with a segmental Discrete Fourier Transform (DFT) is proposed, and the resulting coarse estimate is used to narrow the search range. Finally, the estimated value is used in the carrier tracking loop to reduce the dynamic stress and improve the tracking accuracy. Theoretical analysis and computer simulation show that the search computation falls to 5.25 percent of the original amount at a Signal to Noise Ratio (SNR) of -30 dB, that the Root Mean Square Error (RMSE) of the tracked frequency is only 8.46 Hz/s, and that, compared with the traditional carrier tracking method, the tracking sensitivity is improved by more than 3 dB.
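The core idea can be sketched as a brute-force maximum-likelihood search: dechirp the signal with each candidate rate-of-change and keep the candidate with the largest coherent response. The paper's FrFT/segmental-DFT machinery is a fast, refined version of such a search; the signal parameters below are made up for illustration:

```python
import cmath
import math

def estimate_chirp_rate(samples, fs, rates):
    """Coarse maximum-likelihood search for a chirp rate (Doppler
    frequency rate-of-change, Hz/s): dechirp with each candidate rate and
    return the one whose dechirped signal sums most coherently."""
    best_rate, best_mag = None, -1.0
    for a in rates:
        s = sum(x * cmath.exp(-1j * math.pi * a * (n / fs) ** 2)
                for n, x in enumerate(samples))
        if abs(s) > best_mag:
            best_rate, best_mag = a, abs(s)
    return best_rate

# Synthetic noiseless chirp with a 400 Hz/s rate, sampled at 1 kHz
fs = 1000.0
true_rate = 400.0
sig = [cmath.exp(1j * math.pi * true_rate * (n / fs) ** 2)
       for n in range(1000)]
print(estimate_chirp_rate(sig, fs, [0, 200, 400, 600]))  # → 400
```

The instantaneous self-correlation and segmental DFT in the abstract serve to shrink the candidate grid before a refined search of this kind, which is where the quoted 5.25 percent computation figure comes from.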
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
Steven H. Waldrip
2017-02-01
Full Text Available We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
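The agreement of the posterior means can be seen in the simplest case: a Gaussian prior with identity-scaled covariance and a single hard linear constraint a·x = b, for which the updated mean is mu + Σaᵀ(aΣaᵀ)⁻¹(b − a·mu). A minimal sketch on a hypothetical three-pipe network with one conservation constraint:

```python
# Hypothetical 3-pipe junction: flows x1, x2 in, x3 out, conservation
# x1 + x2 - x3 = 0. Gaussian prior mean mu with covariance s2 * I; the
# constrained posterior mean (same for Bayesian conditioning and MaxEnt
# with a hard constraint) is mu + s2*a * (b - a.mu) / (s2 * a.a).
mu = [10.0, 5.0, 12.0]   # prior mean flows (violate conservation by 3)
a = [1.0, 1.0, -1.0]     # constraint coefficients: inflows minus outflow
b = 0.0                  # conservation target
s2 = 4.0                 # prior variance (cancels for an identity-scaled covariance)

mismatch = b - sum(ai * mi for ai, mi in zip(a, mu))
gain = [s2 * ai / (s2 * sum(aj * aj for aj in a)) for ai in a]
mu_post = [mi + gi * mismatch for mi, gi in zip(mu, gain)]
print(mu_post)  # each flow adjusted equally toward exact balance
```

The two methods then differ, as the abstract explains, in the posterior covariances rather than in these means.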
Keynes, family allowances and Keynesian economic policy
Pressman, Steven
2014-01-01
This paper provides a short history of family allowances and documents the fact that Keynes supported family allowances as early as the 1920s, continuing through the 1930s and early 1940s. Keynes saw this policy as a way to help households raise their children and also as a way to increase consumption without reducing business investment. The paper goes on to argue that a policy of family allowances is consistent with Keynesian economics. Finally, the paper uses the Luxembourg Income Study to...
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r² = 0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
40 CFR 30.27 - Allowable costs.
2010-07-01
... ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER... designated individuals with specialized skills who are paid at a daily or hourly rate. This rate does not...
Suenimeire Vieira
2012-12-01
Full Text Available INTRODUCTION: One of the benefits promoted by physical exercise appears to be improved modulation of the autonomic nervous system over the heart. However, the role of physical activity as a determinant of heart rate variability (HRV) is not well established. The aim of this study was therefore to verify whether resting heart rate and the maximum workload reached in an exercise test correlate with HRV indices in elderly men. METHODS: Eighteen elderly men aged 60 to 70 years were studied. The following assessments were made: (a) a maximum exercise test on a cycle ergometer, using the Balke protocol, to evaluate aerobic capacity; (b) recording of heart rate (HR) and R-R intervals for 15 minutes at rest in the supine position. The data were analyzed in the time domain, computing the RMSSD index, and in the frequency domain, computing the low-frequency (LF) and high-frequency (HF) indices and the LF/HF ratio. Pearson's correlation test (p < 0.05) was applied to verify whether the maximum workload reached in the exercise test was associated with the HRV indices. CONCLUSION: The time-domain and spectral heart rate variability indices studied are not indicators of the aerobic capacity level of elderly men evaluated on a cycle ergometer.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, and choose the analysis of the Ne IX triplet around 13.5 Å.
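The Poisson-likelihood fitting idea described in this abstract can be sketched in a few lines. This is a minimal illustration, not CORA itself: the spectrum, line parameters and amplitude grid below are invented for the example, and the fit is a simple grid search rather than CORA's fixed-point iteration.

```python
import numpy as np

def poisson_loglike(counts, model):
    # Poisson log-likelihood up to a data-only term (the ln n_i! term is
    # dropped, since it does not depend on the model parameters)
    return np.sum(counts * np.log(model) - model)

def fit_line_flux(x, counts, mu, sigma, background, amplitudes):
    # Grid search: evaluate the likelihood for each candidate line
    # amplitude and keep the best one
    best_a, best_ll = None, -np.inf
    for a in amplitudes:
        model = background + a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
        ll = poisson_loglike(counts, model)
        if ll > best_ll:
            best_a, best_ll = a, ll
    return best_a

# Simulated low-count spectrum: flat background plus one Gaussian line
rng = np.random.default_rng(0)
x = np.linspace(13.0, 14.0, 200)
true_model = 2.0 + 15.0 * np.exp(-((x - 13.5) ** 2) / (2 * 0.05 ** 2))
counts = rng.poisson(true_model)
a_hat = fit_line_flux(x, counts, mu=13.5, sigma=0.05, background=2.0,
                      amplitudes=np.linspace(5.0, 25.0, 201))
```

Because the noise is Poissonian, the recovered amplitude `a_hat` scatters around the true value 15 with an uncertainty set by the total line counts, which is the low-count regime this class of tool targets.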
Sign Patterns That Allow the Given Matrix
邵燕灵; 孙良
2003-01-01
Let P be a property referring to a real matrix. For a sign pattern A, if there exists a real matrix B in the qualitative class of A such that B has property P, then we say that A allows P. Three cases are considered: A allows an M-matrix, an inverse M-matrix, or a P0-matrix. Complete characterizations are obtained in each case.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
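The Toeplitz/Levinson machinery this abstract relies on is standard. A compact Levinson-Durbin recursion (an illustrative sketch, not the authors' code) that solves the prediction-error filter equations from an autocorrelation sequence:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r: autocorrelation sequence r[0..order].
    Returns (a, err): filter coefficients with a[0] = 1, and the final
    prediction-error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient; |k| < 1 keeps the recursion stable,
        # mirroring the stability property noted in the abstract
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a, err

# Autocorrelation of a unit-variance AR(1) process with coefficient 0.9:
r = 0.9 ** np.arange(4)
a, err = levinson_durbin(r, order=3)
# a is approximately [1, -0.9, 0, 0] and err approximately 0.19
```

For this input the recursion recovers the generating AR(1) filter exactly, with the higher-order coefficients collapsing to zero; in the deconvolution setting, `a` plays the role of the error-predicting filter applied to the seismogram.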
Regulatory treatment of allowances and compliance costs
Rose, K. [National Regulatory Research Institute, Columbus, OH (United States)
1993-07-01
The Clean Air Act Amendments of 1990 (CAAA) established a national emission allowance trading system, a market-based form of environmental regulation designed to reduce and limit sulfur dioxide emissions. However, the allowance trading system is being applied primarily to an economically regulated electric utility industry. The combining of the new form of environmental regulation and economic regulation of electric utilities has raised a number of questions including what the role should be of the federal and state utility regulating commissions and how those actions will affect the decision making process of the utilities and the allowance market. There are several dimensions to the regulatory problems that commissions face. Allowances and utility compliance expenditures have implications for least-cost/IPR (integrated resource planning), prudence review procedures, holding company and multistate utility regulation and ratemaking treatment. The focus of this paper is on the ratemaking treatment. The following topics are covered: ratemaking treatment of allowances and compliance costs; Traditional cost-recovery mechanisms; limitations to the traditional approach; traditional approach and the allowance trading market; market-based cost recovery mechanisms; methods of determining the benchmark; determining the split between ratepayers and the utility; other regulatory approaches; limitations of incentive mechanisms.
Ziheng YANG
2004-01-01
Estimation of species divergence times is well known to be sensitive to violation of the molecular clock assumption (rate constancy over time). However, the molecular clock is almost always violated in comparisons of distantly related species, such as different orders of mammals. Thus it is important to take into account different rates among lineages when divergence times are estimated. The maximum likelihood method provides a framework for accommodating rate variation and can naturally accommodate heterogeneous datasets from multiple loci and fossil calibrations at multiple nodes. Previous implementations of the likelihood method require the researcher to assign branches to different rate classes. In this paper, I implement a heuristic rate-smoothing algorithm (the AHRS algorithm) to automate the assignment of branches to rate groups. The method combines features of previous likelihood, Bayesian and rate-smoothing methods. The likelihood algorithm is also improved to accommodate missing sequences at some loci in the combined analysis. The new method is applied to estimate the divergence times of Madagascar's mouse lemurs, and the results are compared with those of previous likelihood and Bayesian analyses [Acta Zoologica Sinica 50(4): 645-656, 2004].
Generalized Relativistic Wave Equations with Intrinsic Maximum Momentum
Ching, Chee Leong
2013-01-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wavefunctions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential are stronger than vector potential. The energy spectrum of the systems studied are bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Generalized relativistic wave equations with intrinsic maximum momentum
Ching, Chee Leong; Ng, Wei Khim
2014-05-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wave functions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential is stronger than vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
A maximum in the strength of nanocrystalline copper
Schiøtz, Jakob; Jacobsen, Karsten Wedel
2003-01-01
We used molecular dynamics simulations with system sizes up to 100 million atoms to simulate plastic deformation of nanocrystalline copper. By varying the grain size between 5 and 50 nanometers, we show that the flow stress, and thus the strength, exhibits a maximum at a grain size of 10 to 15 nanometers. This maximum is due to a shift in the microscopic deformation mechanism from dislocation-mediated plasticity in the coarse-grained material to grain boundary sliding in the nanocrystalline region. The simulations allow us to observe the mechanisms behind the grain-size dependence...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
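The maximize-the-power calculation described in this abstract can be reproduced numerically. A small sketch using an illustrative single-diode-style I-V model (the parameters `i_sc`, `i_0` and `v_t` are invented for the example, and the maximum-power point is located on a dense grid rather than by symbolic differentiation):

```python
import numpy as np

def panel_current(v, i_sc=5.0, i_0=1e-9, v_t=0.7):
    # Illustrative I-V curve: short-circuit current minus an
    # exponential diode term (made-up parameters for the sketch)
    return i_sc - i_0 * (np.exp(v / v_t) - 1.0)

# P(V) = V * I(V); the maximum-power point is where dP/dV = 0,
# located here by scanning a fine voltage grid
v = np.linspace(0.0, 16.0, 32001)
p = v * panel_current(v)
k = int(np.argmax(p))
v_mp, p_max = v[k], p[k]          # voltage and power at maximum power
i_mp = panel_current(v_mp)        # current at maximum power
```

The grid maximum approximates the stationary point of P(V); in the abstract's approach the same point is found analytically by differentiating the power expression, and the procedure is repeated for each time of day as the irradiance parameters change.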
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated, assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by the column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure its static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
Prediction of Maximum Oxygen Consumption from Walking, Jogging, or Running.
Larsen, Gary E.; George, James D.; Alexander, Jeffrey L.; Fellingham, Gilbert W.; Aldana, Steve G.; Parcell, Allen C.
2002-01-01
Developed a cardiorespiratory endurance test that retained the inherent advantages of submaximal testing while eliminating reliance on heart rate measurement in predicting maximum oxygen uptake (VO2max). College students completed three exercise tests. The 1.5-mile endurance test predicted VO2max from submaximal exercise without requiring heart…
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs, called Empirical Training, which is motivated by the concept of co-occurrence rate. We show that the standard training (unregularized) can have many maximum likeliho
A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods
Bijmolt, T.H.A.; Wedel, M.
1996-01-01
We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the
45 CFR 34.4 - Allowable claims.
2010-10-01
... government-owned or operated parking lot or garage incident to employment. This subsection does not include... amount allowed is the value of the vehicle at the time of loss as determined by the National Automobile.... Damage or loss of personal property, including baggage and household items, while being transported by...
45 CFR 2543.27 - Allowable costs.
2010-10-01
... Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT... Organizations.” The allowability of costs incurred by institutions of higher education is determined in...
34 CFR 80.22 - Allowable costs.
2010-07-01
... CFR part 31. Contract Cost Principles and Procedures, or uniform cost accounting standards that comply... COOPERATIVE AGREEMENTS TO STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 80.22... kind of organization, there is a set of Federal principles for determining allowable costs. For...
13 CFR 143.22 - Allowable costs.
2010-01-01
... to that circular 48 CFR part 31. Contract Cost Principles and Procedures, or uniform cost accounting... Financial Administration § 143.22 Allowable costs. (a) Limitation on use of funds. Grant funds may be used... grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set...
38 CFR 43.22 - Allowable costs.
2010-07-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... Requirements Financial Administration § 43.22 Allowable costs. (a) Limitation on use of funds. Grant funds may... the grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is...
22 CFR 135.22 - Allowable costs.
2010-04-01
... Procedures, or uniform cost accounting standards that comply with cost principles acceptable to the Federal... AGREEMENTS TO STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 135.22 Allowable... principles. For each kind of organization, there is a set of Federal principles for determining...
40 CFR 31.22 - Allowable costs.
2010-07-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... Requirements Financial Administration § 31.22 Allowable costs. (a) Limitation on use of funds. Grant funds may... the grantee or sub-grantee. (b) Applicable cost principles. For each kind of organization, there is...
45 CFR 92.22 - Allowable costs.
2010-10-01
... to that circular 48 CFR Part 31. Contract Cost Principles and Procedures, or uniform cost accounting... Financial Administration § 92.22 Allowable costs. (a) Limitation on use of funds. Grant funds may be used... grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set...
7 CFR 550.25 - Allowable costs.
2010-01-01
... Regulations of the Department of Agriculture (Continued) AGRICULTURAL RESEARCH SERVICE, DEPARTMENT OF... Financial Management § 550.25 Allowable costs. For each kind of Cooperator, there is a set of Federal... Acquisition Regulation (FAR) at 48 CFR part 31. Program Management ...
22 CFR 145.27 - Allowable costs.
2010-04-01
... Relations DEPARTMENT OF STATE CIVIL RIGHTS GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 145...-Profit Organizations.” The allowability of costs incurred by institutions of higher education...
22 CFR 518.27 - Allowable costs.
2010-04-01
... INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 518.27 Allowable costs. For each kind of recipient, there is a set of... by institutions of higher education is determined in accordance with the provisions of OMB Circular...
36 CFR 1210.27 - Allowable costs.
2010-07-01
... RULES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 1210.27 Allowable costs. For each kind of recipient, there is a set of Federal principles...
Judicial Deference Allows European Consensus to Emerge
Dothan, Shai
2017-01-01
conceived as competing doctrines: the more there is of one, the less there is of another. This paper suggests a novel rationale for the emerging consensus doctrine: the doctrine can allow the ECHR to make good policies by drawing on the independent decision-making of many similar countries. In light of that...
77 FR 34218 - Clothing Allowance; Correction
2012-06-11
... construed to impose a restriction that VA did not intend. This document corrects that error. DATES: This... Service, Veterans Benefits Administration, Department of Veterans Affairs, 810 Vermont Avenue NW... medication would be eligible for a clothing allowance for each such appliance or medication if each...
49 CFR 266.11 - Allowable costs.
2010-10-01
... Management Circular 74-4; and costs of projects eligible under § 266.7 of this part. All allowable costs shall be authorized by a fully executed grant agreement. A State may incur costs prior to the execution... need to incur costs prior to the execution of a grant agreement, has authorized the costs in writing...
33 CFR 136.211 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.211 Section 136.211 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.211...
33 CFR 136.205 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.205 Section 136.205 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.205...
33 CFR 136.241 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.241 Section 136.241 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.241...
33 CFR 136.223 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.223 Section 136.223 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.223...
33 CFR 136.217 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.217 Section 136.217 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.217...
33 CFR 136.235 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.235 Section 136.235 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.235...
33 CFR 136.229 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.229 Section 136.229 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.229...
50 CFR 80.15 - Allowable costs.
2010-10-01
...) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM ADMINISTRATIVE REQUIREMENTS, PITTMAN-ROBERTSON WILDLIFE RESTORATION AND DINGELL-JOHNSON SPORT FISH RESTORATION ACTS § 80.15 Allowable costs. (a) What are... designed to include purposes other than those eligible under either the Dingell-Johnson Sport Fish...
43 CFR 12.62 - Allowable costs.
2010-10-01
... Public Lands: Interior Office of the Secretary of the Interior ADMINISTRATIVE AND AUDIT REQUIREMENTS AND COST PRINCIPLES FOR ASSISTANCE PROGRAMS Uniform Administrative Requirements for Grants and Cooperative... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each...
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on the potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant hydraulic head landward boundary. An empirical correction factor, which was introduced by Pool and Carrera (2011) to account for mixing in the case with a constant recharge rate boundary condition, is found also applicable for the case with a constant hydraulic head boundary condition, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Comparing with the solution for a constant recharge rate boundary, we find that a constant hydraulic head boundary often yields larger estimations of the maximum pumping rate and when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant with a relative difference between two solutions less than 2.5%. These findings can serve as a preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
ON A GENERALIZATION OF THE MAXIMUM ENTROPY THEOREM OF BURG
JOSÉ MARCANO
2017-01-01
Full Text Available In this article we introduce some matrix manipulations that allow us to obtain a version of the original Christoffel-Darboux formula, which is of interest in many applications of linear algebra. Using these matrix developments and Jensen's inequality, we obtain the main result of this proposal, which is the generalization of Burg's maximum entropy theorem to multivariate processes.
Alexandre Lima dos Santos
2005-06-01
BACKGROUND AND OBJECTIVE: The peak heart rate (HRpeak) assessed in maximum laboratory tests has been used to determine aerobic exercise intensity in field situations. However, HRpeak values may differ between field and laboratory situations, which can influence the relative intensity of the prescribed workloads. The objective of this study was to measure HRpeak responses in laboratory and field maximum tests, analyzing their influence on exercise prescription. METHODS: Twenty-five physically active men aged 21-51 yrs (28.9 ± 8 yrs) executed a 2,400 m field test on a running track and an individualized maximum treadmill ramp protocol. All tests were performed within two weeks, in a counterbalanced order. Before each test, the temperature and air humidity were checked, and the subjects were instructed not to engage in any physical activity in the preceding 48 hours. Differences between HRpeak values and environmental conditions (temperature and humidity) in field and laboratory situations were respectively tested by paired and simple Student's t tests (p < 0.05). RESULTS: HRpeak values were significantly higher in the field test than in the laboratory protocol, reaching 10 beats per minute in some cases. These differences may be partially accounted for by the significantly higher temperature and air humidity in the field conditions. CONCLUSION: Maximum field tests seem to elicit higher HRpeak values than laboratory protocols, suggesting that the former
Realization of allowable generalized quantum gates
[No author listed]
2010-01-01
The most general duality gates were introduced by Long, Liu and Wang and named allowable generalized quantum gates (AGQGs, for short). By definition, an allowable generalized quantum gate has the form U = Σ_{k=0}^{d-1} c_k U_k, where the U_k are unitary operators on a Hilbert space H and the coefficients c_k are complex numbers with |Σ_{k=0}^{d-1} c_k| ≤ 1 and |c_k| ≤ 1 for all k = 0, 1, ..., d-1. In this paper, we prove that an AGQG U = Σ_{k=0}^{d-1} c_k U_k is realizable, i.e. there are two d by d unitary matrices W and V such that c_k = W_{0k} V_{k0} (0 ≤ k ≤ d-1).
Making It Personal: Per Capita Carbon Allowances
Fawcett, Tina; Hvelplund, Frede; Meyer, Niels I
2009-01-01
This chapter highlights the importance of introducing new, efficient schemes for the mitigation of global warming. One such scheme is Personal Carbon Allowances (PCA), whereby individuals are allotted a tradable ration of CO2 emissions per year. The chapter reviews the fundamentals of PCA and analyzes its merits and problems. The United Kingdom and Denmark have been chosen as case studies because the energy situation and the institutional setup are quite different between the two countries.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
Irani Lauer Lellis
2012-06-01
The practice of giving an allowance is used by parents in different parts of the world and can contribute to the economic education of children. This study investigated the purposes of the allowance with 32 parents of varying incomes. We used the focus group technique and the Alceste software to analyze the data. The results involved two classes related to the process of using the allowance. These classes covered aspects of the socialization and educational role of the allowance, which serves as an instrument of reward but sometimes encourages bad habits in children. The parents' justifications concerning the amount of money to be given to the children and when to stop giving an allowance were also highlighted. Keywords: allowance; economic socialization; parenting practices.
41 CFR 301-11.27 - Are taxes included in the lodging portion of the Government per diem rate?
2010-07-01
... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false Are taxes included in... Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES ALLOWABLE... you a maximum lodging rate of $50 per night, and you elect to stay at a hotel that costs $100 per...
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
R Wave Extraction Based on the Maximum First Derivative plus the Maximum Value of the Double Search
Wen-po Yao; Wen-li Yao; Min Wu; Tie-bing Liu
2016-01-01
R-wave detection is the main approach for heart rate variability analysis and clinical applications based on the R-R interval. The maximum first derivative plus the maximum value of the double search algorithm is applied to electrocardiograms (ECG) from the MIT-BIH Arrhythmia Database to extract the R wave. Through the study of the algorithm's characteristics and the R-wave detection method, the data segmentation method is modified to improve detection accuracy. After the segmentation modification, the average accuracy rate on 6 sets of short ECG data increases from 82.51% to 93.70%, and the average accuracy rate on 11 groups of long-range data is 96.61%. Test results prove that the algorithm and segmentation method can accurately locate the R wave and have good effectiveness and versatility, but some beats may remain undetected due to the algorithm implementation.
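The abstract does not give the algorithm's code; as a hedged illustration of the double-search idea (find a strong upslope via the first derivative, then the maximum sample value just after it), here is a minimal sketch on a synthetic signal. The threshold, window, and refractory values are hypothetical simplifications, not the paper's parameters:

```python
import numpy as np

def detect_r_peaks(ecg, fs, refractory=0.3):
    """Crude R-peak search: locate points of large first derivative, then take
    the maximum sample value in a short window after each (a simplified
    version of the derivative-plus-maximum double search)."""
    d = np.diff(ecg)
    thresh = 0.5 * d.max()              # hypothetical derivative threshold
    peaks, last = [], -np.inf
    for i in np.where(d > thresh)[0]:
        if i - last < refractory * fs:  # skip samples inside the refractory period
            continue
        win = ecg[i:i + int(0.05 * fs) + 1]   # search the maximum just after the upslope
        peaks.append(i + int(np.argmax(win)))
        last = i
    return peaks

# Synthetic "ECG": narrow Gaussian R waves once per second, sampled at 250 Hz
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = sum(np.exp(-((t - c) ** 2) / (2 * 0.01 ** 2)) for c in [0.5, 1.5, 2.5, 3.5, 4.5])
peaks = detect_r_peaks(ecg, fs)
print(peaks)  # indices near 125, 375, 625, 875, 1125
```

On real ECG, baseline wander and noise require adaptive thresholds and the segmentation refinements the paper discusses; this toy version only shows the search structure.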
Ignacio Jesús Chirosa Ríos
2011-05-01
Abstract
This study proposes a logarithmic equation for calculating maximum heart rate (HRmax) indirectly for team-sport players in integrated game situations. The experimental sample consisted of thirteen players (24 ± 3 years) belonging to a División de Honor B handball team. HRmax was initially measured by means of the Course Navette test. Subsequently, twenty-one training sessions were carried out in which HR was recorded continuously, along with the rating of perceived exertion (RPE), for each task. A linear regression analysis yielded an equation predicting HRmax from the maximum heart rates of the three highest-intensity sessions. The values predicted by this equation correlate significantly with those obtained in the Course Navette test and have a smaller typical error of measurement than other calculation methods. The main conclusion is that this equation offers a useful and convenient way of calculating HRmax in real game situations, avoiding non-specific analytical tests and thereby reducing the lack of ecological validity in functional assessment.
Keywords: training control, functional assessment, predictive formula
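The abstract does not reproduce the article's actual equation. As a hedged illustration of the general approach, the sketch below fits a logarithmic model HRmax ≈ a + b·ln(x) by least squares; all data points are made up for the example and do not come from the study:

```python
import numpy as np

# Hypothetical data: highest session HR (x) vs. Course Navette HRmax (y), in bpm
x = np.array([182, 185, 188, 190, 193, 195], dtype=float)
y = np.array([186, 188, 192, 193, 197, 198], dtype=float)

# Least-squares fit of the logarithmic form y = a + b*ln(x):
# regress y on ln(x) with a degree-1 polynomial.
b, a = np.polyfit(np.log(x), y, 1)
predicted = a + b * np.log(x)

print(round(a, 1), round(b, 1))
print(round(float(np.sqrt(np.mean((y - predicted) ** 2))), 2))  # RMS error of the fit
```

The article instead regresses test HRmax on session-derived peaks across 21 sessions; the point here is only the mechanics of fitting a logarithmic predictor.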
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to a long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory, TOM can be observed also at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of tertiary ozone maximum with that obtained using WACCM (Whole Atmosphere Community Climate Model and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
2011-03-24
... http://www.gsa.gov/relocationpolicy . Dated: March 21, 2011. Janet Dobbs, Director, Office of Travel... ADMINISTRATION Federal Travel Regulation (FTR); Relocation Allowances-- Relocation Income Tax Allowance (RITA... effective March 24, 2011. FOR FURTHER INFORMATION CONTACT: Mr. Ed Davis, Office of Governmentwide Policy...
2011-06-06
... Duty (TDY) Travel Allowances (Taxes); Relocation Allowances (Taxes) AGENCY: Office of Governmentwide... extended temporary duty (TDY) benefits to correct errors and to align that process with the proposed... incurred by employees as a result of relocation and to reimburse ``all'' of the taxes imposed on any...
Kleidon, Axel
2009-06-01
The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. The maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. The mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
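The demarking-point rule in the abstract translates directly into a three-way classifier (values outside the demarking points are sexed definitively, values between them stay indeterminate). The sketch below encodes the thresholds reported above; the function name and the "side" parameter are illustrative, not from the paper:

```python
def classify_femur(max_length_mm, side="right"):
    """Sex a femur from its maximum length using the demarking points
    reported in the study: right, >476.70 mm male / <379.99 mm female;
    left, >484.49 mm male / <385.73 mm female. Lengths between the two
    demarking points cannot be sexed definitively."""
    male_dp, female_dp = (476.70, 379.99) if side == "right" else (484.49, 385.73)
    if max_length_mm > male_dp:
        return "male"
    if max_length_mm < female_dp:
        return "female"
    return "indeterminate"

print(classify_femur(480.0))          # male
print(classify_femur(370.0))          # female
print(classify_femur(450.0, "left"))  # indeterminate
```

The low identification percentages in the abstract (4-13%) reflect how wide the indeterminate band is: most femora fall between the two demarking points.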
Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?
von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim
2003-04-01
New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that there was a maximum number of habitable planets around the time of Earth's origin. If at all, interstellar panspermia was most probable at that time and may have kick-started life on our planet.
文建国; 崔林刚; 孟庆军; 任川川; 李金升; 吕宇涛; 张艳
2012-01-01
Objective: To assess the value of urine flow acceleration (UFA) versus maximum urinary flow rate (Qmax) for the diagnosis of bladder outlet obstruction (BOO) in benign prostatic hyperplasia (BPH). Methods: A total of 50 men with BPH and 50 normal men were included in this study. Urodynamic examinations were performed in all patients according to the recommendations of the International Continence Society. The prostate volume, UFA and Qmax of each patient were analyzed and the results were compared between the two groups. Results: The UFA and Qmax of the BPH group were much lower than those of the control group [(2.05 ± 0.85) ml/s² vs. (4.60 ± 1.25) ml/s²; (8.50 ± 1.05) ml/s vs. (13.00 ± 3.35) ml/s, P < 0.05]. The prostate volume in the BPH group was increased compared with the control group [(28.6 ± 9.8) ml vs. (24.2 ± 7.6) ml, P < 0.05]. Using UFA < 2.05 ml/s² and Qmax < 10 ml/s as diagnostic standards, the sensitivity and specificity of UFA and Qmax in diagnosing BOO were (88%, 75%) vs. (81%, 63%). Compared with the result of the P-Q chart, the Kappa values in the correspondence analysis were 0.55 vs. 0.35. The sensitivity, specificity and Kappa value of UFA in diagnosing BOO in BPH patients were slightly higher than those of Qmax in comparison with the gold standard (BOO diagnosed by the P-Q figure). Conclusions: UFA is a useful urodynamic parameter in diagnosing BOO in BPH.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
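The Poisson maximum-likelihood alternative to chi^2 that the abstract advocates can be sketched in a few lines: instead of minimizing a weighted sum of squared residuals, minimize the Poisson negative log-likelihood (equivalently the Cash statistic, up to a constant) of the binned counts. This is a generic SciPy illustration with a simulated Gaussian peak, not the authors' Levenberg-Marquardt modification:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
centers = np.linspace(-3, 3, 61)          # bin centers of the histogram

def model(p, x):
    amp, mu, sigma = p
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Simulated histogram: Poisson counts drawn around a Gaussian of amplitude 100
counts = rng.poisson(model([100.0, 0.0, 1.0], centers))

def poisson_nll(p):
    mu = np.clip(model(p, centers), 1e-12, None)
    # Poisson negative log-likelihood up to a p-independent constant
    return np.sum(mu - counts * np.log(mu))

fit = minimize(poisson_nll, x0=[80.0, 0.2, 1.2], method="Nelder-Mead")
print(np.round(fit.x, 2))  # amplitude, mean, width close to 100, 0, 1
```

Unlike a chi^2 fit with weights 1/counts, this estimator remains unbiased in bins with few (or zero) events, which is exactly the regime the abstract discusses.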
Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons
Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J.; Mitter, Sanjoy K.
2014-10-01
We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state and operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamics arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.
$\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
Maximum disturbance review criteria : operational code and guideline
NONE
2003-07-01
This maximum disturbance review criteria (MDRC) is designed to encourage oil and gas construction contractors to reduce environmental impacts and consider land use and water management strategies in their development plans. The MDRC describes the preferred maximum disturbance allowances for the development of wellsites, access routes, right of way for pipelines and other associated facilities such as remote sumps, decking sites, camp sites and borrow pits. The guidelines specify acceptable parameters for typical oil and gas development activities. This report includes operating code tables which describe clearings and setbacks, access roads, watercourses, and photography and assessment reports for oil and gas activity. Additional care is required if special wildlife habitat features are encountered such as nesting sites, mineral licks, bear dens or beaver ponds. 4 tabs.
MaxOcc: a web portal for maximum occurrence analysis.
Bertini, Ivano; Ferella, Lucio; Luchinat, Claudio; Parigi, Giacomo; Petoukhov, Maxim V; Ravera, Enrico; Rosato, Antonio; Svergun, Dmitri I
2012-08-01
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc . It can be used freely upon registration to the grid with a digital certificate.
Structure of Turbulence in Katabatic Flows Below and Above the Wind-Speed Maximum
Grachev, Andrey A.; Leo, Laura S.; Sabatino, Silvana Di; Fernando, Harindra J. S.; Pardyjak, Eric R.; Fairall, Christopher W.
2016-06-01
Measurements of small-scale turbulence made in the atmospheric boundary layer over complex terrain during the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program are used to describe the structure of turbulence in katabatic flows. Turbulent and mean meteorological data were continuously measured on four towers deployed along the east lower slope (2-4°) of Granite Mountain near Salt Lake City in Utah, USA. The multi-level (up to seven) observations made during a 30-day long MATERHORN field campaign in September-October 2012 allowed the study of temporal and spatial structure of katabatic flows in detail, and herein we report turbulence statistics (e.g., fluxes, variances, spectra, and cospectra) and their variations in katabatic flow. Observed vertical profiles show steep gradients near the surface, but in the layer above the slope jet the vertical variability is smaller. It is found that the vertical (normal to the slope) momentum flux and horizontal (along-slope) heat flux in a slope-following coordinate system change their sign below and above the wind maximum of a katabatic flow. The momentum flux is directed downward (upward) whereas the along-slope heat flux is downslope (upslope) below (above) the wind maximum. This suggests that the position of the jet-speed maximum can be obtained by linear interpolation between positive and negative values of the momentum flux (or the along-slope heat flux) to derive the height where the flux becomes zero. It is shown that the standard deviations of all wind-speed components (and therefore of the turbulent kinetic energy) and the dissipation rate of turbulent kinetic energy have a local minimum, whereas the standard deviation of air temperature has an absolute maximum at the height of wind-speed maximum. We report several cases when the destructive effect of vertical heat flux is completely cancelled by the generation of turbulence due to the along-slope heat flux. Turbulence above the wind
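The interpolation idea above (the jet-speed maximum sits where the momentum flux profile crosses zero) is a one-liner numerically. The sketch below, with made-up tower levels and flux values rather than MATERHORN data, finds the first sign change and linearly interpolates to the zero-flux height:

```python
import numpy as np

def jet_height(z, flux):
    """Estimate the wind-speed-maximum height as the zero crossing of the
    momentum flux profile: linear interpolation between the last level with
    negative flux and the first with positive flux."""
    i = np.where(np.diff(np.sign(flux)) != 0)[0][0]   # index just below the crossing
    # Linear interpolation between (z[i], flux[i]) and (z[i+1], flux[i+1])
    return z[i] - flux[i] * (z[i + 1] - z[i]) / (flux[i + 1] - flux[i])

# Hypothetical tower levels (m) and normal-to-slope momentum flux (m^2 s^-2):
# downward (negative) below the jet, upward (positive) above it
z = np.array([0.5, 2.0, 5.0, 10.0, 20.0])
flux = np.array([-0.04, -0.02, -0.005, 0.01, 0.02])
print(jet_height(z, flux))  # ~6.67 m, between the 5 m and 10 m levels
```

The same routine applies to the along-slope heat flux, which changes sign at the same height according to the observations described above.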
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiter (SFCL) could reduce short circuit currents in electrical power system. One of the most important thing in developing SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the time of quenching process is changed and voltage is raised until the Ic degradation or burnout happens. YBCO coated conductors test in the experiment are from American superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). Along with the quenching duration increasing, the maximum permissible voltage of CC decreases. When quenching duration is 100 ms, the maximum permissible of SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of samples, the whole length of CCs used in the design of a SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L that all trees in T, when restricted to X, are consistent with.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
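The bound described above (maximum seismic moment limited to injected volume times the modulus of rigidity) converts directly into a maximum moment magnitude. The sketch below uses the standard moment-magnitude relation Mw = (2/3)(log10 M0[N·m] − 9.1) and a typical crustal rigidity of 3 × 10^10 Pa; both the rigidity value and the example volume are illustrative assumptions, not figures from the paper:

```python
import math

def max_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper-bound moment magnitude for injection-induced seismicity:
    seismic moment M0 <= G * dV (the volume bound described above),
    converted with Mw = (2/3) * (log10(M0 in N*m) - 9.1)."""
    m0 = rigidity_pa * injected_volume_m3   # maximum seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Hypothetical example: 1e5 m^3 of injected wastewater -> M0 <= 3e15 N*m
print(round(max_magnitude(1.0e5), 2))  # 4.25
```

Because the bound is logarithmic in volume, each tenfold increase in injected volume raises the maximum magnitude by only 2/3 of a unit, consistent with wastewater disposal (the largest volumes) producing the largest induced events.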
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines in the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent at the condition of fixed heat input.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
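The paper's LP formulation accounts for interference and coding constraints; stripped of those, the throughput between a single source-sink pair reduces to the classical maximum-flow problem. As a baseline sketch only (toy capacities, not the paper's model), Edmonds-Karp computes it:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow: repeatedly push flow along the
    shortest augmenting path found by breadth-first search."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:  # no augmenting path left: flow is maximum
            break
        # bottleneck residual capacity along the path found
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

# Toy network (assumed capacities): node 0 = source, node 3 = sink.
C = [[0, 3, 2, 0],
     [0, 0, 1, 2],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]
```

On this toy instance the maximum throughput is 5, matching the minimum cut.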
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
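The classic graph-theoretic result referred to is the Motzkin-Straus theorem: the maximum of x^T A x over the standard simplex equals 1 − 1/ω(G), where ω(G) is the clique number. A minimal sketch of discrete replicator dynamics on a toy graph follows; the graph and iteration count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def replicator_clique(adj, iters=1000):
    """Discrete replicator dynamics on the Motzkin-Straus program
    max x^T A x over the simplex; the optimum equals 1 - 1/omega(G)."""
    n = adj.shape[0]
    x = np.full(n, 1.0 / n)      # start at the barycenter of the simplex
    for _ in range(iters):
        ax = adj @ x
        x = x * ax / (x @ ax)    # payoff-proportional (replicator) update
    return x, x @ adj @ x

# Toy graph: triangle {0,1,2} plus a pendant vertex 3 attached to vertex 0.
A = np.zeros((4, 4))
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3)]:
    A[i, j] = A[j, i] = 1.0
x, obj = replicator_clique(A)
# obj approaches 1 - 1/3 = 2/3; the support of x identifies the clique {0,1,2}
```

The pendant vertex's weight decays to zero, so the converged support reads off the maximum clique; the annealed imitation heuristics in the paper address the cases where such dynamics stall on a smaller maximal clique.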
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced …
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find …
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions …
Maximum-entropy for the laser fusion problem
Madkour, M.A. [Mansoura Univ. (Egypt), Dept. of Phys.]
1996-09-01
The problem of heat flux at the critical surfaces and the surfaces of a pellet of deuterium and tritium (conduction zone) heated by laser has been considered. Only ion-electron collisions are allowed for; i.e., the linear transport equation is used to describe the problem with boundary conditions. The maximum-entropy approach is used to calculate the electron density and temperature across the conduction zone as well as the heat flux. Numerical results are given and compared with those of Rouse and Williams and of El-Wakil et al.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then the MPPT effectiveness of various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
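One common MPPT strategy compatible with the converters analysed here is perturb-and-observe hill climbing. The sketch below is a generic illustration, not the paper's method; the quadratic power curve with its peak at 5 V is an assumed toy model, not a real PV characteristic:

```python
def perturb_and_observe(power, v0=1.0, step=0.1, iters=200):
    """Hill-climbing MPPT: perturb the operating voltage and keep the
    perturbation direction whenever the measured power increases."""
    v, direction = v0, 1.0
    p_prev = power(v)
    for _ in range(iters):
        v += direction * step
        p = power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Assumed toy power curve with a single maximum at v = 5.
pv_power = lambda v: v * (10.0 - v)
v_mpp = perturb_and_observe(pv_power)
```

The tracker climbs to the peak and then oscillates within one step of it, which is the characteristic steady-state ripple of perturb-and-observe.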
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment …
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. … Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Mitigation of maximum world oil production: Shortage scenarios
Hirsch, Robert L. [Management Information Services, Inc., 723 Fords Landing Way, Alexandria, VA 22314 (United States)
2008-02-15
A framework is developed for planning the mitigation of the oil shortages that will be caused by world oil production reaching a maximum and going into decline. To estimate potential economic impacts, a reasonable relationship between percent decline in world oil supply and percent decline in world GDP was determined to be roughly 1:1. As a limiting case for decline rates, giant fields were examined. Actual oil production from Europe and North America indicated significant periods of relatively flat oil production (plateaus). However, before entering its plateau period, North American oil production went through a sharp peak and steep decline. Examination of a number of future world oil production forecasts showed multi-year rollover/roll-down periods, which represent pseudoplateaus. Consideration of resource nationalism posits an Oil Exporter Withholding Scenario, which could potentially overwhelm all other considerations. Three scenarios for mitigation planning resulted from this analysis: (1) A Best Case, where maximum world oil production is followed by a multi-year plateau before the onset of a monotonic decline rate of 2-5% per year; (2) A Middling Case, where world oil production reaches a maximum, after which it drops into a long-term, 2-5% monotonic annual decline; and finally (3) A Worst Case, where the sharp peak of the Middling Case is degraded by oil exporter withholding, leading to world oil shortages growing potentially more rapidly than 2-5% per year, creating the most dire world economic impacts.
Mroczka Janusz
2014-12-01
Photovoltaic panels have a non-linear current-voltage characteristic and produce the maximum power at only one point, called the maximum power point. In the case of uniform illumination a single solar panel shows only one maximum power, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel many local maxima on the power-voltage curve can be observed, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions. Then an appropriate strategy of tracking the maximum power point is taken using a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated in real conditions of lighting, temperature and shading.
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles.
Maximum entropy method for solving operator equations of the first kind
金其年; 侯宗义
1997-01-01
The maximum entropy method for linear ill-posed problems with modeling error and noisy data is considered and the stability and convergence results are obtained. When the maximum entropy solution satisfies the "source condition", suitable rates of convergence can be derived. Considering the practical applications, an a posteriori choice for the regularization parameter is presented. As a byproduct, a characterization of the maximum entropy regularized solution is given.
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
[Anonymous]
2001-01-01
The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granular size) were studied. The general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm
Megchelenbrink, W.; Rossell, S.; Huynen, M.A.; Notebaart, R.A.; Marchiori, E.
2015-01-01
MOTIVATION: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determine the space of allowable reaction rates. This space is often large and a central challenge in metabolic modeling is finding the biologicall
Maximum likelihood for genome phylogeny on gene content.
Zhang, Hongmei; Gu, Xun
2004-01-01
With the rapid growth of entire-genome data, reconstructing the phylogenetic relationship among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of various approaches, and has been very successful. However, there are no reported applications to genome tree-making, mainly due to the lack of an analytical form of a probability model and/or the complicated calculation burden. In this paper we studied the mathematical structure of the stochastic model of genome evolution, and then developed a simplified likelihood function for observing a specific phylogenetic pattern in the four-genome case using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high correction rate. Real data application provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.
Incorporating Linguistic Structure into Maximum Entropy Language Models
FANG GaoLin(方高林); GAO Wen(高文); WANG ZhaoQi(王兆其)
2003-01-01
In statistical language models, how to integrate diverse linguistic knowledge in a general framework for long-distance dependencies is a challenging issue. In this paper, an improved language model incorporating linguistic structure into a maximum entropy framework is presented. The proposed model combines a trigram with the structure knowledge of base phrases, in which the trigram is used to capture the local relations between words, while the structure knowledge of base phrases represents the long-distance relations between syntactic structures. The knowledge of syntax, semantics and vocabulary is integrated into the maximum entropy framework. Experimental results show that the proposed model reduces language model perplexity by 24% and increases the sign language recognition rate by about 3% compared with the trigram model.
Maximum mass, moment of inertia and compactness of relativistic stars
Breu, Cosima
2016-01-01
A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as "universal relations" and have been found to apply, within limits, both to static or stationary isolated stars, as well as to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalised Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_{max}, simply in terms of the maximum mass of the nonrotating configuration, M_{TOV}, findi...
40 CFR 73.21 - Phase II repowering allowances.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Phase II repowering allowances. 73.21... (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Allocations § 73.21 Phase II repowering allowances. (a) Repowering allowances. In addition to allowances allocated under § 73.10(b), the Administrator will...
Exchange rate and monetary fundamentals: Long run relationship revisited
Bhanja Niyati
2015-01-01
This study re-examines the long run validity of the monetary approach to exchange rate determination for India. In particular, the long run association of the bilateral nominal exchange rate of the Indian rupee vis-à-vis the USD, pound sterling, yen and euro against the corresponding monetary fundamentals that the model underlines has been tested using the Johansen-Juselius maximum likelihood framework and the Gregory-Hansen co-integration approach. Irrespective of the exchange rates, the study finds a co-integrating relationship among the variables using the Johansen-Juselius maximum likelihood approach. The Gregory-Hansen co-integration method, which allows for one break determined endogenously, also confirms the long run relationship in three specifications. Our results, hence, suggest that the monetary model is a valid theory of the long run equilibrium condition for the rupee-dollar, rupee-pound, rupee-yen and rupee-euro exchange rates.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
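A minimal numpy-only sketch of the l1-penalized Gaussian maximum-likelihood problem the abstract describes, solved here with plain proximal-gradient (ISTA) steps rather than the authors' block coordinate descent or Nesterov-type methods; the graph size, penalty, and step size are illustrative assumptions.

```python
import numpy as np

# minimize_Theta  tr(S @ Theta) - logdet(Theta) + rho * ||Theta||_1 (off-diagonal)
rng = np.random.default_rng(0)

# Ground truth: a sparse 5-node "chain" precision matrix
true_prec = np.eye(5)
for i in range(4):
    true_prec[i, i + 1] = true_prec[i + 1, i] = 0.4
cov = np.linalg.inv(true_prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=4000)
S = np.cov(X, rowvar=False)

rho, step = 0.05, 0.1
theta = np.eye(5)
for _ in range(500):
    grad = S - np.linalg.inv(theta)            # gradient of the smooth part
    theta = theta - step * grad
    # soft-threshold off-diagonal entries (the l1 proximal operator)
    off = np.sign(theta) * np.maximum(np.abs(theta) - step * rho, 0.0)
    np.fill_diagonal(off, np.diag(theta))
    theta = (off + off.T) / 2
    # keep the iterate positive definite
    w, V = np.linalg.eigh(theta)
    theta = (V * np.maximum(w, 1e-3)) @ V.T

print(np.round(theta, 2))
```

With the penalty active, entries far from the chain (e.g. the (0, 4) position) are shrunk toward zero while the true edges survive.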
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
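One member of the family named above, the Ornstein-Uhlenbeck position process, can be simulated with a basic Euler-Maruyama scheme; this is a generic sketch with assumed parameters, not the authors' maximum-entropy estimator.

```python
import numpy as np

# dx = -(x / tau) dt + sigma dW: mean-reverting "home range" movement
rng = np.random.default_rng(3)
tau, sigma, dt, n = 5.0, 1.0, 0.01, 200_000
noise = rng.normal(size=n - 1)
sd = sigma * np.sqrt(dt)

x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - (x[k] / tau) * dt + sd * noise[k]

# The stationary variance of this OU process is sigma^2 * tau / 2 = 2.5
print(round(x[50_000:].var(), 2))
```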
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we analyse two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
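The single-constraint construction argued for above can be checked numerically: on a finite support, the distribution maximizing Shannon entropy subject only to a fixed mean of log(x) is a pure power law p(x) ∝ x^(-λ). The support size and constraint value below are arbitrary choices of my own, not from the paper.

```python
import numpy as np

N = 50
x = np.arange(1, N + 1, dtype=float)
target = 1.5                      # prescribed value of E[log x] (arbitrary)

def mean_log(lam):
    p = x ** (-lam)
    p /= p.sum()
    return (p * np.log(x)).sum()

# Bisect for the Lagrange multiplier: E[log x] decreases as lam grows
lo, hi = -5.0, 5.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_log(mid) > target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
p = x ** (-lam)
p /= p.sum()

def H(r):
    return -(r * np.log(r)).sum()

# Any other distribution meeting the same constraint has lower entropy:
# perturb p along a null-space direction of both constraints
# (normalization and mean-log), then compare entropies.
A = np.vstack([np.ones(N), np.log(x)])
d = np.linalg.svd(A)[2][-1]       # orthogonal to both constraint rows
q = p + 0.001 * d
assert (q > 0).all()
print(round(lam, 3), H(p) > H(q))
```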
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean that would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
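The definition is easy to evaluate directly. The sketch below computes EE(G) for one small bicyclic graph (two triangles sharing a vertex); it illustrates the index itself, not the paper's extremal result.

```python
import numpy as np

# Adjacency matrix of two triangles sharing vertex 2 (n = 5, m = 6: bicyclic)
n = 5
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]:
    A[i, j] = A[j, i] = 1.0

# EE(G) = sum_i exp(lambda_i), i.e. the trace of the matrix exponential of A
eigs = np.linalg.eigvalsh(A)
EE = np.exp(eigs).sum()
print(round(EE, 4))
```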
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus differed from that of the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02) · Y_X/P · C.
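The reported equation is a one-line calculation once a strain's yield and MIC are known. The numbers below are illustrative assumptions of mine, not values from the paper, and must be in mutually consistent units.

```python
# X_max - X_0 = (0.59 ± 0.02) * Y_X/P * C  (central coefficient used here)
X0 = 0.1      # inoculum biomass, g/L (assumed)
Y_XP = 0.12   # biomass yield per unit lactate produced, g/g (assumed)
C = 200.0     # MIC of lactate for the strain, in units matching Y_XP (assumed)

X_max = X0 + 0.59 * Y_XP * C
print(round(X_max, 2))
```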
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
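The distance side of this reasoning can be sketched with the free-field spherical-spreading law, under which level falls by 20·log10(r) dB relative to the 1 m value. The background level, intelligibility margin, and crowd geometry below are illustrative assumptions, not the dissertation's modeled values.

```python
import numpy as np

def spl_at_distance(spl_1m, r):
    # Free-field spherical spreading: -20 dB per decade of distance
    return spl_1m - 20 * np.log10(r)

spl_1m = 90.0          # dBA at 1 m, near the estimate for Whitefield's voice
background = 42.0      # dBA, a quiet outdoor crowd (assumed)
margin = 0.0           # dB margin for minimal intelligibility (assumed)

# Largest radius at which the voice still clears the background
r_max = 10 ** ((spl_1m - background - margin) / 20)
area = np.pi * r_max ** 2 / 2     # half-disc in front of the speaker
print(round(r_max, 1), int(area))
```

With these assumed numbers the intelligible half-disc covers roughly 10^5 m^2, which is the order of area needed for crowds in the tens of thousands.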
Guillemot, Sylvain
2008-01-01
Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant called "Maximum Compatible Tree" (MCT) is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where trees encountered often have small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to parameter D. Moreover, relying on recent advances in parameterized complexity we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable, unless SNP is contained in SE. We also show that MCT is W[1...
Enzyme kinetics and the maximum entropy production principle.
Dobovišek, Andrej; Zupanović, Paško; Brumen, Milan; Bonačić-Lošić, Zeljana; Kuić, Domagoj; Juretić, Davor
2011-03-01
A general proof is derived that entropy production can be maximized with respect to rate constants in any enzymatic transition. This result is used to test the assumption that biological evolution of enzyme is accompanied with an increase of entropy production in its internal transitions and that such increase can serve to quantify the progress of enzyme evolution. The state of maximum entropy production would correspond to fully evolved enzyme. As an example the internal transition ES↔EP in a generalized reversible Michaelis-Menten three state scheme is analyzed. A good agreement is found among experimentally determined values of the forward rate constant in internal transitions ES→EP for three types of β-Lactamase enzymes and their optimal values predicted by the maximum entropy production principle, which agrees with earlier observations that β-Lactamase enzymes are nearly fully evolved. The optimization of rate constants as the consequence of basic physical principle, which is the subject of this paper, is a completely different concept from a) net metabolic flux maximization or b) entropy production minimization (in the static head state), both also proposed to be tightly connected to biological evolution.
Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm
S. Radhika
2016-04-01
Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC based adaptive filter with variable step size in order to obtain improved performance in terms of both convergence rate and steady state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and a lower steady state error than conventional MCC based adaptive filters.
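The robustness mechanism this builds on can be sketched with a fixed-kernel, fixed-base-step MCC-LMS update, in which a Gaussian kernel weight suppresses updates driven by impulsive errors. This is my own minimal sketch of the baseline, not the paper's variable-step-size rule; all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_samples = 4, 5000
w_true = np.array([1.0, -0.5, 0.25, 0.1])     # unknown system (assumed)

x = rng.normal(size=n_samples)
noise = rng.normal(scale=0.01, size=n_samples)
# Sparse large outliers model the impulsive interference
impulses = (rng.random(n_samples) < 0.01) * rng.normal(scale=50.0, size=n_samples)

w = np.zeros(n_taps)
mu, kernel_sigma = 0.05, 1.0
for k in range(n_taps, n_samples):
    u = x[k - n_taps:k][::-1]                  # regressor (most recent first)
    d = w_true @ u + noise[k] + impulses[k]    # desired signal
    e = d - w @ u
    # Gaussian kernel weight: huge (impulsive) errors barely move the weights
    w += mu * np.exp(-e**2 / (2 * kernel_sigma**2)) * e * u

print(np.round(w, 2))
```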
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
32 CFR Appendix A to Part 110 - Climatic Zones Used To Determine Rates of Commutation Allowance
2010-07-01
... COMMUTATION INSTEAD OF UNIFORMS FOR MEMBERS OF THE SENIOR RESERVE OFFICERS' TRAINING CORPS Pt. 110, App. A.... Louisiana 11. Mississippi 12. New Mexico, only 100 mile-wide belt along south border 13. North Carolina 14.... Montana 22. Nebraska 23. Nevada 24. New Hampshire 25. New Jersey 26. New Mexico, except a 100 mile-wide...
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
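The hypothesis-selection core of such a decoder can be illustrated with a toy MAP rule: under additive Gaussian noise, pick the candidate signal maximizing log-prior minus scaled squared distance to the received samples. This is a generic sketch of MAP detection, not the patented estimator-correlator; the signals, prior, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(32)
# Three candidate transmitted signals (assumed for illustration)
hypotheses = [np.cos(2 * np.pi * f * t / 32) for f in (3, 5, 7)]
priors = np.array([0.2, 0.5, 0.3])
sigma = 0.5                                   # noise standard deviation

true_idx = 1
received = hypotheses[true_idx] + rng.normal(scale=sigma, size=32)

# log-posterior up to a constant: log prior - ||r - s||^2 / (2 sigma^2)
log_post = [np.log(pr) - np.sum((received - s) ** 2) / (2 * sigma**2)
            for pr, s in zip(priors, hypotheses)]
decoded = int(np.argmax(log_post))
print(decoded)
```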
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space, where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Full Text Available Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
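As a flavour of the formulas being compared, here is one widely quoted simplified form attributed to Barrass; treating it as exactly this form, with these divisors and knots as the speed unit, is an assumption on my part, and the paper's comparison uses more detailed variants.

```python
# Simplified maximum-squat estimate (assumed form):
#   squat [m] ≈ Cb * Vk^2 / 100   in open water
#   squat [m] ≈ Cb * Vk^2 / 50    in a confined channel
# Cb: block coefficient (dimensionless), Vk: speed in knots
def barrass_squat(cb, speed_knots, confined=False):
    divisor = 50.0 if confined else 100.0
    return cb * speed_knots**2 / divisor

# Illustrative sweep for a full-form cargo ship (Cb = 0.8 assumed)
for v in (6, 8, 10, 12):
    print(v, round(barrass_squat(0.8, v, confined=True), 2))
```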
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
45 CFR 2522.245 - How are living allowances disbursed?
2010-10-01
..., Requirements, and Benefits § 2522.245 How are living allowances disbursed? A living allowance is not a wage and programs may not pay living allowances on an hourly basis. Programs must distribute the living allowance at... 45 Public Welfare 4 2010-10-01 2010-10-01 false How are living allowances disbursed? 2522.245...
Erich Regener and the maximum in ionisation of the atmosphere
Carlson, P
2014-01-01
In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under-water and in the atmosphere. He discovered, along with one of his students, Georg Pfotzer, the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students and through...
A method to predict amplitude and date of maximum sunspot number
Anonymous
2000-01-01
A method to predict the amplitude and date of the maximum sunspot number is introduced. The regression analysis of the relationship between the variation rate of monthly sunspot numbers in the initial stage of solar cycles and both the maximum and the time-length of the ascending period of the cycle showed that they are closely correlated. In general, the maximum will be larger and the ascending period shorter when the rate is larger. The rate of sunspot numbers in the initial 2 years of the 23rd cycle is analyzed on these grounds and the maximum of the cycle is predicted. For the smoothed monthly sunspot numbers, the maximum will be about 139.2±18.8 and the time-length of the ascending period about 3.31±0.42 years; that is to say, the maximum will appear around the spring of the year 2000. For the mean monthly ones, the maximum will be near 170.1±22.9 and the time-length of the ascending period about 3.42±0.46 years; that is to say, the appearing date of the maximum will be later.
42 CFR 495.308 - Net average allowable costs as the basis for determining the incentive payment.
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Net average allowable costs as the basis for... Net average allowable costs as the basis for determining the incentive payment. (a) The first year of..., implementation or upgrade of certified electronic health records technology. (2) The maximum net...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10^-3 to 5×10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
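The event-probability idea described here, where each event gets per-process probabilities from PDFs of the observables and a physics parameter is fit by maximizing the total likelihood, can be illustrated with a deliberately simplified two-process toy. The Gaussian "signal" and "background" energy PDFs and all numbers below are invented, not PEN values.

```python
import math
import random

random.seed(1)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical observable: total calorimeter energy, with made-up PDFs for a
# signal-like process and a background-like process.
MU_SIG, SIG_SIG = 70.0, 3.0
MU_BKG, SIG_BKG = 50.0, 10.0
TRUE_FRACTION = 0.3

events = [random.gauss(MU_SIG, SIG_SIG) if random.random() < TRUE_FRACTION
          else random.gauss(MU_BKG, SIG_BKG) for _ in range(5000)]

def neg_log_like(f):
    # Each event's likelihood mixes the per-process PDFs weighted by the fraction f.
    return -sum(math.log(f * gauss_pdf(e, MU_SIG, SIG_SIG)
                         + (1 - f) * gauss_pdf(e, MU_BKG, SIG_BKG))
                for e in events)

# Golden-section search for the maximum likelihood estimate of f.
lo, hi = 1e-6, 1.0 - 1e-6
phi = (math.sqrt(5) - 1) / 2
a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
fa, fb = neg_log_like(a), neg_log_like(b)
for _ in range(60):
    if fa < fb:
        hi, b, fb = b, a, fa
        a = hi - phi * (hi - lo)
        fa = neg_log_like(a)
    else:
        lo, a, fa = a, b, fb
        b = lo + phi * (hi - lo)
        fb = neg_log_like(b)
f_hat = (lo + hi) / 2
print(round(f_hat, 3))
```

The recovered fraction lands close to the generating value of 0.3; the real analysis does the same thing with five processes and multidimensional Monte Carlo PDFs.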
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
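For comparison with the Hopfield approach, a much simpler baseline is a multi-start greedy search, where each random vertex order plays a role loosely analogous to a fresh initial network state. This is not the authors' gradient-ascent learning scheme, just a minimal reference heuristic on the same problem.

```python
import random

random.seed(2)

def random_graph(n, p):
    """Erdos-Renyi-style random graph as adjacency sets."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_clique(adj, order):
    """Scan vertices in the given order, keeping each one compatible with the clique so far."""
    clique = []
    for v in order:
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def is_clique(adj, nodes):
    return all(u in adj[v] for i, v in enumerate(nodes) for u in nodes[i + 1:])

adj = random_graph(60, 0.5)
best = []
for _ in range(200):  # restarts stand in for escaping poor local optima
    order = list(range(len(adj)))
    random.shuffle(order)
    c = greedy_clique(adj, order)
    if len(c) > len(best):
        best = c

print(len(best))
```

A learning scheme like the paper's reweights the search between restarts instead of restarting blindly, which is why it escapes near-maximum cliques more reliably than this baseline.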
Neuromuscular determinants of maximum walking speed in well-functioning older adults.
Clark, David J; Manini, Todd M; Fielding, Roger A; Patten, Carolynn
2013-03-01
Maximum walking speed may offer an advantage over usual walking speed for clinical assessment of age-related declines in mobility function that are due to neuromuscular impairment. The objective of this study was to determine the extent to which maximum walking speed is affected by neuromuscular function of the lower extremities in older adults. We recruited two groups of healthy, well functioning older adults who differed primarily on maximum walking speed. We hypothesized that individuals with slower maximum walking speed would exhibit reduced lower extremity muscle size and impaired plantarflexion force production and neuromuscular activation during a rapid contraction of the triceps surae muscle group (soleus (SO) and medial gastrocnemius (MG)). All participants were required to have a usual 10-meter walking speed of >1.0 m/s. If the difference between usual and maximum 10 m walking speed was less than 0.6 m/s, the individual was assigned to the "Slower" group; otherwise, to the "Faster" group (n=12). Peak rate of force development (RFD) and rate of neuromuscular activation (rate of EMG rise) of the triceps surae muscle group were assessed during a rapid plantarflexion movement. Muscle cross sectional area of the right triceps surae, quadriceps and hamstrings muscle groups was determined by magnetic resonance imaging. Across participants, the difference between usual and maximal walking speed was predominantly dictated by maximum walking speed (r=.85). We therefore report maximum walking speed (1.76 and 2.17 m/s in Slower and Faster, p<.001). Muscle cross sectional area did not differ between groups for the triceps surae (p=.44), quadriceps (p=.76) and hamstrings (p=.98). MG rate of EMG rise was positively associated with RFD and maximum 10 m walking speed, but not the usual 10 m walking speed. These findings support the conclusion that maximum walking speed is limited by impaired neuromuscular force and activation of the triceps surae muscle group. Future research should further evaluate the utility of maximum walking speed for use in clinical assessment to detect and monitor age-related declines in mobility function.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and maximum width are calculated from the lake polygon...
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
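The tapered Gutenberg-Richter survival function used here is easy to evaluate directly. The sketch below converts magnitude to seismic moment with the standard Hanks-Kanamori relation and uses illustrative parameters (threshold magnitude 5.0, β = 0.65, corner magnitude 8.5), not the paper's fitted Cascadia values.

```python
import math

def moment_from_magnitude(m):
    # Seismic moment in N*m, Hanks-Kanamori convention: m = (2/3) log10(M0) - 6.03.
    return 10 ** (1.5 * m + 9.05)

def tgr_survival(m, beta, m_corner, m_threshold=5.0):
    """P(moment > M(m)) under a tapered Gutenberg-Richter law:
    S(M) = (Mt/M)^beta * exp((Mt - M)/Mc), relative to the threshold moment Mt."""
    M = moment_from_magnitude(m)
    Mt = moment_from_magnitude(m_threshold)
    Mc = moment_from_magnitude(m_corner)
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

beta, mc = 0.65, 8.5  # illustrative values only
for m in (6.0, 7.0, 8.0, 9.0):
    print(m, tgr_survival(m, beta, mc))
```

Below the corner magnitude the curve follows a pure power law; the exponential taper takes over near mc, which is what makes the "probable maximum magnitude in time T" finite even though an absolute maximum is undefined.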
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
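The behavior discussed above is easy to see concretely: the classic minmod limiter returns the smaller-magnitude one-sided slope when the two agree in sign and zero otherwise, so the reconstruction degenerates to first order (zero slope) exactly at local extrema, which is the restriction this paper relaxes.

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree, else 0."""
    if a > 0 and b > 0:
        return min(a, b)
    if a < 0 and b < 0:
        return max(a, b)
    return 0.0

def limited_slopes(u):
    """Cell slopes for a piecewise-linear reconstruction u_i(x) = u_i + s_i*(x - x_i),
    using forward and backward differences (boundary cells left at zero slope)."""
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s

# At the local maximum (index 2) the limiter clips the slope to zero:
u = [0.0, 1.0, 2.0, 1.5, 0.5]
print(limited_slopes(u))  # -> [0.0, 1.0, 0.0, -0.5, 0.0]
```

This clipping at extrema is what enforces the maximum principle for the standard scheme; the paper's contribution is proving the same bound for reconstructions that keep a nonzero slope there.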
Effective soil hydraulic conductivity predicted with the maximum power principle
Westhoff, Martijn; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Zehe, Erwin; Dewals, Benjamin
2016-04-01
Drainage of water in soils happens to a large extent through preferential flowpaths, but these subsurface flowpaths are extremely difficult to observe or parameterize in hydrological models. To potentially overcome this problem, thermodynamic optimality principles have been suggested to predict effective parametrizations of these (sub-grid) structures, such as the maximum entropy production principle or the equivalent maximum power principle. These principles have been successfully applied to predict heat transfer from the Equator to the Poles, or turbulent heat fluxes between the surface and the atmosphere. In these examples, the effective flux adapts itself to its boundary condition by adapting its effective conductance through the creation of e.g. convection cells. However, flow through porous media, such as soils, can only quickly adapt its effective flow conductance through the creation of preferential flowpaths, and it is unknown whether this is guided by the aim to create maximum power. Here we show experimentally that this is indeed the case: in the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between the Equator and the Poles. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. From the steady state potential difference and the observed flow through the aquifer, an effective hydraulic conductance can be determined. This observed conductance corresponds to the one maximizing the power of the flux through the confined aquifer. Although this experiment was done in an idealized setting, it opens doors for better parameterizing hydrological models. Furthermore, it shows that the hydraulic properties of soils are not static, but change with changing boundary conditions. A potential limitation of the principle is that it only applies to steady state conditions.
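A minimal numerical analogue of the two-reservoir argument: with recharge R into one reservoir, linear drainage (coefficient a) from both, and aquifer flux Q = k*(h1 - h2), the steady-state power Q*dh has an interior maximum in the conductance k. This toy balance is an assumption for illustration, not the authors' laboratory configuration.

```python
# Steady-state balances for the toy system:
#   reservoir 1:  R = a*h1 + k*(h1 - h2)
#   reservoir 2:  k*(h1 - h2) = a*h2
# Solving gives dh = R/(2k + a) and Q = k*R/(2k + a),
# so the power through the aquifer is P = Q*dh = k*R**2/(2k + a)**2.
R, a = 1.0, 1.0

def power(k):
    dh = R / (2 * k + a)
    return k * dh * dh

# Sweep the conductance and locate the power maximum numerically.
ks = [0.01 * i for i in range(1, 300)]
k_best = max(ks, key=power)
print(k_best)  # the analytic optimum is k = a/2
```

The tradeoff is the one described in the abstract: a large conductance collapses the potential difference, a small one throttles the flux, and power peaks in between (here at k = a/2 by differentiating P with respect to k).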
Promoter recognition based on the maximum entropy hidden Markov model.
Zhao, Xiao-yu; Zhang, Jin; Chen, Yuan-yuan; Li, Qiang; Yang, Tao; Pian, Cong; Zhang, Liang-yun
2014-08-01
The rapid development of genome sequencing has produced large-scale data, and the current work uses bioinformatics methods to recognize different gene regions, such as exons, introns and promoters, which play an important role in gene regulation. In this paper, we introduce a new method based on the maximum entropy Markov model (MEMM) to recognize the promoter, which uses the biological features of the promoter as conditioning features. However, it leads to a high false positive rate (FPR). In order to reduce the FPR, we provide another new method based on the maximum entropy hidden Markov model (ME-HMM) without the independence assumption, which can also accommodate the biological features effectively. To demonstrate the precision, the new methods are implemented in the R language and the hidden Markov model (HMM) is introduced for comparison. The experimental results show that the new methods not only overcome the shortcomings of HMM, but also have their own advantages. The results indicate that MEMM is excellent for identifying conserved signals, and ME-HMM demonstrably improves the true positive rate.
Rotating proto-neutron stars: spin evolution, maximum mass and I-Love-Q relations
Martinon, Grégoire; Gualtieri, Leonardo; Ferrari, Valeria
2014-01-01
Shortly after its birth in a gravitational collapse, a proto-neutron star enters a phase of quasi-stationary evolution characterized by large gradients of the thermodynamical variables and intense neutrino emission. In a few tens of seconds the gradients smooth out while the star contracts and cools down, until it becomes a neutron star. In this paper we study this phase of the proto-neutron star life including rotation, and employing finite temperature equations of state. We model the evolution of the rotation rate, and determine the relevant quantities characterizing the star. Our results show that an isolated neutron star cannot reach, at the end of the evolution, the maximum values of mass and rotation rate allowed by the zero-temperature equation of state. Moreover, a mature neutron star evolved in isolation cannot rotate too rapidly, even if it is born from a proto-neutron star rotating at the mass-shedding limit. We also show that the I-Love-Q relations are violated in the first second of life, but th...
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn; Schmidt, Burkhard; Yang, Yonggang
2014-01-01
We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to ...
Vector control structure of an asynchronous motor at maximum torque
Chioncel, C. P.; Tirian, G. O.; Gillich, N.; Raduca, E.
2016-02-01
Vector control methods offer the possibility of achieving high performance and are widely used. Certain applications require optimum control in limit operating conditions, such as operation at maximum torque, which is not always satisfied. The paper presents how the voltage and the frequency for an asynchronous machine (ASM) operating at variable speed are determined, with emphasis on the method that keeps the rotor flux constant. The simulation analyses consider three load types: variable torque and speed, variable torque and constant speed, and constant torque and variable speed. The final values of frequency and voltage are obtained through the proposed control schemes with one controller, using a simulation language based on the Maple module. The dynamic analysis of the system is done for the cases with P and PI controllers and allows conclusions on the proposed method, which can have different applications, such as the ASM in wind turbines.
Network Decomposition and Maximum Independent Set Part Ⅰ: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network, which can optimize the iteration processes, is discovered. Then, the sufficient and necessary conditions for obtaining the maximum independent set are deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be merged together. In particular, it is shown that the network can be divided into two parts in a certain manner, and then both of them can be transformed into a pair sets network, where the special sub-networks and their neighborhoods appear alternately distributed throughout the entire pair sets network. By use of this characteristic, a decomposition of the network that loses no solutions is obtained. All of the above prepares the ground for developing a much better algorithm with a polynomial time bound for an odd network in the application research part of this subject.
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
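The "very elegant linear-time solution" mentioned above is Kadane's scan, a fold that tracks the best sum of a segment ending at the current position (with the empty segment allowed, so the answer is never negative):

```python
def max_segment_sum(xs):
    """Kadane's linear-time scan over the list.
    ending_here = best sum of a segment ending at the current element;
    best        = best sum seen anywhere so far (empty segment allowed)."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best

print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # -> 187
```

The derivation the tutorial formalizes is exactly the calculational step from the cubic-time specification (maximum over all segments) to this single accumulating pass.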
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + √2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}, with μ_0 < 0 < μ_1 and ℓ_0 < 0 < ℓ_1. The optimal control takes the value μ_0 when X_t < g∗(S_t) and μ_1 when X_t > g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
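The spectral limit can be reproduced approximately with a few lines of numerics: weight a truncated blackbody spectrum by a luminosity curve and multiply by the 683 lm/W peak efficacy. The Gaussian stand-in for the CIE V(λ) curve below is an assumption for illustration; the tabulated CIE data is the real reference.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(lam_nm, T):
    """Spectral radiance shape of a blackbody (unnormalized; constants cancel in the ratio)."""
    lam = lam_nm * 1e-9
    return lam ** -5 / (math.exp(H * C / (lam * KB * T)) - 1.0)

def photopic(lam_nm):
    # Gaussian approximation to the photopic sensitivity curve, peaked at 555 nm.
    return math.exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)

def efficacy(T, lo_nm, hi_nm, step=1.0):
    """Spectral luminous efficacy (lm/W) of blackbody light truncated to [lo, hi] nm."""
    num = den = 0.0
    lam = lo_nm
    while lam <= hi_nm:
        p = planck(lam, T)
        num += photopic(lam) * p * step
        den += p * step
        lam += step
    return 683.0 * num / den

print(round(efficacy(5800.0, 400.0, 700.0)))
```

Narrowing the bandpass raises the efficacy toward 683 lm/W at the cost of color rendering, which is the tradeoff the paper quantifies via the color rendering index.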
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, the researches on the synchronization phenomenon are key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
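Fitting a pairwise maximum entropy model of this kind reduces, for small systems, to matching moments by gradient ascent on the log-likelihood: the gradient is simply (data moment minus model moment), with model moments computed by exact enumeration of states. The three-unit example and its target moments below are invented, not G7 data.

```python
import itertools
import math

N = 3  # three "economies", each in state +1 (expansion) or -1 (recession)
states = list(itertools.product([-1, 1], repeat=N))

# Hypothetical target moments, standing in for measured business-cycle data.
target_mean = [0.2, 0.1, -0.1]
target_corr = {(0, 1): 0.3, (0, 2): 0.15, (1, 2): 0.1}

h = [0.0] * N                       # local fields
J = {k: 0.0 for k in target_corr}   # pairwise couplings

def model_moments(h, J):
    """First and second moments of the Boltzmann distribution p(s) ~ exp(h.s + s.J.s)."""
    weights = []
    for s in states:
        e = sum(h[i] * s[i] for i in range(N)) + \
            sum(J[i, j] * s[i] * s[j] for i, j in J)
        weights.append(math.exp(e))
    Z = sum(weights)
    p = [w / Z for w in weights]
    mean = [sum(p[k] * s[i] for k, s in enumerate(states)) for i in range(N)]
    corr = {(i, j): sum(p[k] * s[i] * s[j] for k, s in enumerate(states)) for i, j in J}
    return mean, corr

# Gradient ascent on the log-likelihood: gradient = data moment - model moment.
for _ in range(2000):
    mean, corr = model_moments(h, J)
    for i in range(N):
        h[i] += 0.1 * (target_mean[i] - mean[i])
    for k in J:
        J[k] += 0.1 * (target_corr[k] - corr[k])

print([round(x, 3) for x in h], {k: round(v, 3) for k, v in J.items()})
```

For seven economies the 2^7 states can still be enumerated exactly, which is what makes a pairwise model of the G7 tractable; larger systems need sampling, and, as the abstract notes, pairwise terms alone become insufficient.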
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
DRIŞCU Mariana
2014-05-01
Full Text Available With the classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic constructions executed by manual means, which consume a great deal of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies, with the most unpleasant consequences for the footwear producer. Thus, a customer who buys a footwear product based on the characteristics written on it (size, width) may notice after a period that the product has flaws because of inadequate design. To avoid such situations, the strictest scientific criteria must be followed when designing a footwear product. The decisive step in this direction was made some time ago, as a result of powerful technical development and the massive adoption of electronic computing systems and informatics. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the most economical arrangement of the reference points. For this purpose, the user must test a few arrangement variants in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After testing several arrangement variants in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
MaxOcc: a web portal for maximum occurrence analysis
Bertini, Ivano, E-mail: ivanobertini@cerm.unifi.it; Ferella, Lucio; Luchinat, Claudio, E-mail: luchinat@cerm.unifi.it; Parigi, Giacomo [Magnetic Resonance Center (CERM), University of Florence (Italy); Petoukhov, Maxim V. [EMBL, Hamburg Outstation (Germany); Ravera, Enrico; Rosato, Antonio [Magnetic Resonance Center (CERM), University of Florence (Italy); Svergun, Dmitri I. [EMBL, Hamburg Outstation (Germany)
2012-08-15
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc. It can be used freely upon registration to the grid with a digital certificate.
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
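The projected-gradient idea can be shown in a stripped-down classical setting: maximize a multinomial log-likelihood over the probability simplex, with the sort-based Euclidean projection playing the role that projection onto density matrices plays in the quantum problem. The counts below are made up; real tomography replaces the simplex projection with an eigendecomposition-based projection onto positive semidefinite, trace-one matrices.

```python
def project_to_simplex(v):
    """Euclidean projection onto {p : p_i >= 0, sum(p) = 1} (sort-based algorithm)."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui > t:       # ui is still above the running threshold
            theta = t
    return [max(x - theta, 0.0) for x in v]

counts = [412, 318, 170, 100]  # made-up measurement outcome tallies
n = sum(counts)
p = [1.0 / len(counts)] * len(counts)  # start from the maximally mixed state

for _ in range(500):
    # Gradient of (1/n) * sum_i counts_i * log(p_i), then project back to the simplex.
    grad = [c / n / (pi + 1e-12) for c, pi in zip(counts, p)]
    p = project_to_simplex([pi + 0.05 * g for pi, g in zip(p, grad)])

print([round(x, 3) for x in p])  # the ML estimate here is the empirical frequencies
```

The fixed point is exactly the multinomial ML solution (counts/n); accelerated variants of this same project-then-step loop are what make the quantum reconstruction fast.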
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Full Text Available Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments: 22 genotypes of Panicum maximum and the cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani, in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables, accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in a study of the effects of plier grip spans on total grip force, individual finger forces, and muscle activities in a maximum gripping task and a wire-cutting task. In the maximum gripping task, the 50-mm grip span yielded significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: 30.3%, 31.3%, and 41.3% for the 50-mm, 65-mm, and 80-mm grip spans, respectively. Thus, the 50-mm plier grip span might be recommended, as it provides maximum exertion in gripping tasks as well as the lowest cutting-force-to-maximum-grip ratio in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
7 CFR 3560.202 - Establishing rents and utility allowances.
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Establishing rents and utility allowances. 3560.202... Establishing rents and utility allowances. (a) General. Rents and utility allowances for rental units in Agency... Agency. (b) Agency approval. All rents and utility allowances set by borrowers are subject to...
The Change in the Maximum Wind Speed and the Impact of it on Agricultural Production
WU Jian-mei; SUN Jin-sen; SUI Gui-ling; XIE Su-he; WANG Meng
2012-01-01
Using data on the maximum ten-minute wind speed for every month in the period 1971-2009 in Zhucheng City, Shandong Province, we conduct a statistical analysis of the maximum wind speed in Zhucheng City. The results show that over these thirty-nine years, the annual maximum wind speed in all four seasons in Zhucheng City tends to decline. The annual maximum wind speed declines at a rate of 1.45 m/s every 10 years. It falls fastest in winter, with a decline rate of 1.73 m/s every 10 years; the rates in spring and autumn are close to the annual average, at 1.44 m/s and 1.48 m/s every 10 years, respectively; it falls slowest in summer, and the extreme values of the maximum wind speed occur mainly in spring. The curve of monthly maximum wind speed in Zhucheng City assumes a diminishing "two peaks and one trough" shape. We conduct a preliminary analysis of windy weather situations and put forth specific defensive measures against the hazards of strong winds in the different periods.
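The decline rates quoted above are linear trends. A least-squares sketch of how such a per-decade rate can be computed (synthetic data, illustrative only):

```python
import numpy as np

def decline_rate_per_decade(years, speeds):
    """Least-squares linear trend of annual maximum wind speed,
    returned as a positive decline in m/s per 10 years."""
    slope = np.polyfit(years, speeds, 1)[0]   # m/s per year
    return -10.0 * slope

# Synthetic series falling 1.45 m/s per decade (matching the reported annual rate)
years = np.arange(1971, 2010)
speeds = 20.0 - 0.145 * (years - 1971)
rate = decline_rate_per_decade(years, speeds)
```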
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China); Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schild, Axel [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schmidt, Burkhard, E-mail: burkhard.schmidt@fu-berlin.de [Institut für Mathematik, Freie Universität Berlin, Arnimallee 6, 14195 Berlin (Germany); Yang, Yonggang, E-mail: ygyang@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China)
2014-10-17
Highlights: • Coherent tunneling in one-dimensional symmetric double well potentials. • Potentials for analytical estimates in the deep tunneling regime. • Maximum velocities scale as the square root of the ratio of barrier height and mass. • In chemical physics maximum tunneling velocities are in the order of a few km/s. - Abstract: We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to several prototypical molecular models of non-cyclic and cyclic tunneling, including ammonia inversion, Cope rearrangement of semibullvalene, torsions of molecular fragments, and rotational tunneling in strong laser fields. Typical maximum velocities and angular velocities are in the order of a few km/s and from 10 to 100 THz for our non-cyclic and cyclic systems, respectively, much faster than time-averaged velocities. Even for the more extreme case of an electron tunneling through a barrier of height of one Hartree, the velocity is only about one percent of the speed of light. Estimates of the corresponding time scales for
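The sqrt(barrier height / mass) scaling quoted in the highlights can be checked numerically. The factor of 2 inside the square root below is my assumption for an order-of-magnitude estimate, not taken from the paper:

```python
import math

HARTREE_J = 4.3597447e-18        # one Hartree in joules
ELECTRON_MASS_KG = 9.1093837e-31
C_M_S = 2.99792458e8             # speed of light

def tunneling_velocity(barrier_height_j, mass_kg):
    """Order-of-magnitude maximum tunneling velocity, sqrt(2*V/m).
    Only the sqrt(V/m) scaling is from the abstract."""
    return math.sqrt(2.0 * barrier_height_j / mass_kg)

v = tunneling_velocity(HARTREE_J, ELECTRON_MASS_KG)
# v / C_M_S comes out near 0.01, consistent with the abstract's remark that
# an electron tunneling through a one-Hartree barrier moves at about 1% of c
```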
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power by up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet which works against an external load, and we maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. We then compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments in the thermodynamics of feedback-controlled systems, which give an equation for the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds on the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes departures from Markovianity in the sequence of control actions, allowing also to
Speech processing using maximum likelihood continuity mapping
Hogden, John E. (Santa Fe, NM)
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Speech processing using maximum likelihood continuity mapping
Hogden, J.E.
2000-04-18
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Maximum-power-point tracking control of solar heating system
Huang, Bin-Juine
2012-11-01
The present study developed a maximum-power-point tracking (MPPT) control technology for a solar heating system to minimize pumping power consumption at optimal heat collection. The net solar energy gain Q_net (= Q_s − W_p/η_e) was experimentally found to be the cost function for MPPT, with a maximum point. A feedback tracking control system was developed to track the optimal Q_net (denoted Q_max). A tracking filter, derived from the thermal analytical model of the solar heating system, was used to determine the instantaneous tracking target Q_max(t). The system transfer-function model of the solar heating system was also derived experimentally using a step-response test and used in the design of the tracking feedback control system. The PI controller was designed for a tracking target Q_max(t) with a quadratic time function. The MPPT control system was implemented using a microprocessor-based controller, and the test results show good tracking performance with small tracking errors. The average mass flow rate for the specific test periods in five different days is between 18.1 and 22.9 kg/min, with average pumping power between 77 and 140 W, greatly reduced compared to the standard flow rate of 31 kg/min and pumping power of 450 W, which is based on the 0.02 kg/s·m² flow rate defined in the ANSI/ASHRAE 93-1986 Standard and the total collector area of 25.9 m². The average net solar heat collected Q_net is between 8.62 and 14.1 kW, depending on weather conditions. The MPPT control of the solar heating system has been verified to minimize pumping energy consumption with optimal solar heat collection. © 2012 Elsevier Ltd.
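The idea of tracking the maximum of Q_net can be illustrated with a generic perturb-and-observe loop. The paper's actual controller is a PI design with a tracking filter, so the following is only a schematic stand-in with made-up numbers:

```python
def mppt_step(q_net, q_net_prev, flow, step, flow_min, flow_max):
    """One perturb-and-observe step: keep moving pump flow in the same
    direction while the net gain improves, reverse when it worsens."""
    if q_net < q_net_prev:
        step = -step
    return min(max(flow + step, flow_min), flow_max), step

# Toy cost function with a maximum near 20 kg/min (illustrative numbers only)
def q_net_model(flow):
    return 14.0 - 0.02 * (flow - 20.0) ** 2

flow, step = 31.0, 1.0          # start at the standard flow rate
prev_q = q_net_model(flow)
flow += step
for _ in range(60):
    q = q_net_model(flow)
    flow, step = mppt_step(q, prev_q, flow, step, 5.0, 35.0)
    prev_q = q
```

The loop settles into a small oscillation around the flow that maximizes the toy Q_net, which is the qualitative behavior an MPPT controller aims for.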
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
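MALCOM itself builds a continuity map, but the anomaly-scoring idea it enables can be illustrated with a much simpler sequence model. The following Laplace-smoothed bigram scorer is a hypothetical stand-in, with invented event names:

```python
import math
from collections import Counter

def train_bigram(sequences, alpha=1.0):
    """Fit a Laplace-smoothed bigram model over event sequences and return
    a log-likelihood scorer (a simple stand-in for MALCOM's continuity map)."""
    pairs, firsts, vocab = Counter(), Counter(), set()
    for seq in sequences:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            pairs[(a, b)] += 1
            firsts[a] += 1
    v = len(vocab)

    def logprob(seq):
        # Sum of smoothed transition log-probabilities; lower = more anomalous
        return sum(
            math.log((pairs[(a, b)] + alpha) / (firsts[a] + alpha * v))
            for a, b in zip(seq, seq[1:])
        )

    return logprob

# Hypothetical procedure codes; typical orderings score higher than unseen ones
score = train_bigram([["exam", "xray", "cast"], ["exam", "xray", "cast"], ["exam", "rx"]])
```

Sequences whose transitions never appear in training accumulate low log-likelihood, flagging them for review, which mirrors how anomalous medical histories were surfaced.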
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a pair of run-length/AC coefficient level, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper-bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type constraints are based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, space of minimum of 346 bits and maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that, because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect of this work tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation is made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Maximum SINR Synchronization Strategies in Multiuser Filter Bank Schemes
Pecile Francesco
2010-01-01
Full Text Available We consider synchronization in a multiuser filter bank uplink system with single-user detection. Perfect user synchronization is not the optimal choice, contrary to what intuition would suggest. To maximize performance, the synchronization parameters have to be chosen to maximize the signal-to-interference-plus-noise ratio (SINR) at each equalizer subchannel output. However, the resulting filter bank receiver structure becomes complex. Therefore, we consider two simplified synchronization metrics that are based on the maximization of the average SINR of a given user or the aggregate SINR of all users. Furthermore, a relaxation of the aggregate SINR metric allows implementing an efficient multiuser analysis filter bank. This receiver deploys two fractionally spaced analysis stages. Each analysis stage is efficiently implemented via a polyphase filter bank, followed by an extended discrete Fourier transform that allows the user frequency offsets to be partly compensated. Then, subchannel maximum-SINR equalization is used. We discuss the application of the proposed solution to Orthogonal Frequency Division Multiple Access (OFDMA) and multiuser Filtered Multitone (FMT) systems.
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient, double-robust, semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
77 FR 76169 - Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill
2012-12-26
... AFFAIRS Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill AGENCY: Department... of the increase in the Post-9/11 GI Bill maximum tuition and fee amounts payable and the increase in.... SUPPLEMENTARY INFORMATION: For the 2011-2012 academic year, the Post-9/ 11 GI Bill allowed VA to pay the actual...
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two-state characters, under a molecular clock. Four-taxon rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxon trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that, in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxon trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Walter Grote H
2007-12-01
Full Text Available This paper presents a simple but effective throughput analysis of wireless local area networks (WLANs) operating under the IEEE 802.11 a/b/g family of standards. The analysis considers the influence of packet size, device configuration parameters (RTS/CTS versus basic access, initial contention window size, transmission rates), and the number of devices in the network. The purpose of this study is to give network administrators insight into how device configuration affects network performance. The results constitute an upper bound on network throughput, since, to avoid a stochastic analysis, collisions and electromagnetic interference in the channel are assumed absent.
27 CFR 20.24 - Allowance of claims.
2010-04-01
... OF THE TREASURY LIQUORS DISTRIBUTION AND USE OF DENATURED ALCOHOL AND RUM Administrative Provisions Authorities § 20.24 Allowance of claims. The appropriate TTB officer is authorized to allow claims for...
27 CFR 22.23 - Allowance of claims.
2010-04-01
... OF THE TREASURY LIQUORS DISTRIBUTION AND USE OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.23 Allowance of claims. The appropriate TTB officer is authorized to allow claims for...
Y. Haseli
2016-05-01
Full Text Available The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output, and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
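For the Curzon-Ahlborn engine mentioned above, the efficiency at maximum power has the well-known closed form 1 − sqrt(T_c/T_h). A minimal comparison against the Carnot limit:

```python
import math

def carnot_efficiency(t_cold, t_hot):
    """Reversible upper bound on thermal efficiency."""
    return 1.0 - t_cold / t_hot

def efficiency_at_maximum_power(t_cold, t_hot):
    """Curzon-Ahlborn efficiency of an endoreversible engine at maximum power."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# For T_c = 300 K and T_h = 600 K the Carnot limit is 50%, while the
# efficiency at maximum power is about 29.3% (temperatures are illustrative)
```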
2010-10-01
... low-income children in families with income from 101 to 150 percent of the FPL. 457.555 Section 457... low-income children in families with income from 101 to 150 percent of the FPL. (a) Non-institutional services. For targeted low-income children whose family income is from 101 to 150 percent of the FPL,...
胡吉磊; 徐建国; 姚耀明; 宋刚; 吴欣
2011-01-01
Aluminum-clad steel strand offers high tension and small sag, making it well suited to long-span transmission line design, but its relatively small transmission capacity has limited its range of application. Experimental verification showed that the maximum allowable operating temperature of aluminum-clad steel strand can be raised to 130 °C, greatly increasing its transmission capacity and offering a new approach for future long-span line design.
Perry, J. L.
2016-01-01
As the Space Station Freedom program transitioned to become the International Space Station (ISS), uncertainty existed concerning the performance capabilities of U.S.- and Russian-provided trace contaminant control (TCC) equipment. In preparation for the first dialogue between NASA and Russian Space Agency personnel in Moscow, Russia, in late April 1994, an engineering analysis was conducted to serve as a basis for discussing TCC equipment engineering assumptions as well as relevant assumptions on equipment offgassing and cabin air quality standards. The analysis was conducted as part of the effort to integrate Russia into the ISS program via the early ISS Multilateral Medical Operations Panel's Air Quality Subgroup deliberations. It served as a basis for technical deliberations that established a framework for TCC system design and operations among the ISS program's international partners, a framework that has been instrumental in successfully managing the ISS common cabin environment.
The effect of natural selection on the performance of maximum parsimony
Ofria Charles
2007-06-01
Full Text Available Abstract Background Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on Among Site Rate Variation (ASRV) and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome, and some substitutions are more likely to occur than others. However, little is known about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small 4-taxon trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural selection.
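The parsimony criterion this abstract evaluates can be made concrete with the classic Fitch small-parsimony algorithm, which counts the minimum number of character changes needed on a fixed tree. A minimal sketch on a hypothetical 4-taxon tree with one aligned site per taxon (the tree shape and states below are assumptions for illustration, not data from the study):

```python
def fitch_score(tree, states):
    """Minimum number of character changes on a rooted binary tree
    (Fitch's small-parsimony algorithm).
    `tree` is a nested tuple of leaf names; `states` maps leaf name -> state."""
    changes = 0

    def postorder(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: singleton state set
            return {states[node]}
        left, right = (postorder(child) for child in node)
        common = left & right
        if common:                           # children agree: keep intersection
            return common
        changes += 1                         # disagreement forces a substitution
        return left | right

    postorder(tree)
    return changes

# Hypothetical tree ((A,B),(C,D)) and one character per taxon
tree = (("A", "B"), ("C", "D"))
print(fitch_score(tree, {"A": "G", "B": "G", "C": "T", "D": "T"}))  # 1
```

For a 4-taxon dataset, the maximum parsimony tree is simply the topology among the three possible unrooted shapes that minimizes this score summed over all sites.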
50 CFR 665.127 - Allowable gear and gear restrictions.
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allowable gear and gear restrictions. 665... Fisheries § 665.127 Allowable gear and gear restrictions. (a) American Samoa coral reef ecosystem MUS may be taken only with the following allowable gear and methods: (1) Hand harvest; (2) Spear; (3) Slurp gun;...
50 CFR 665.427 - Allowable gear and gear restrictions.
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allowable gear and gear restrictions. 665... Archipelago Fisheries § 665.427 Allowable gear and gear restrictions. (a) Mariana coral reef ecosystem MUS may be taken only with the following allowable gear and methods: (1) Hand harvest; (2) Spear; (3)...
50 CFR 665.227 - Allowable gear and gear restrictions.
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allowable gear and gear restrictions. 665... Fisheries § 665.227 Allowable gear and gear restrictions. (a) Hawaii coral reef ecosystem MUS may be taken only with the following allowable gear and methods: (1) Hand harvest; (2) Spear; (3) Slurp gun;...
14 CFR 151.125 - Allowable advance planning costs.
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Allowable advance planning costs. 151.125... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Rules and Procedures for Advance Planning and Engineering Proposals § 151.125 Allowable advance planning costs. (a) The United States' share of the allowable costs of...
24 CFR 891.785 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... Handicapped Families and Individuals-Section 162 Assistance § 891.785 Adjustment of utility allowances. In... adjustment of utility allowances provided in § 891.440 apply....