On the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum
王宁练; 姚檀栋; 施雅风; L.G. Thompson; J. Cole-Dai; P.-N. Lin; M.E. Davis
1999-01-01
Based on the data of temperature changes revealed by means of various palaeothermometric proxy indices, it is found that the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum increased with altitude. The direct cause of this phenomenon was a change in the temperature lapse rate, which was about (0.1±0.05)℃/100 m steeper at the equator during the Last Glacial Maximum than at present. Moreover, the analyses show that CLIMAP possibly underestimated the sea surface temperature decrease in the equatorial regions during the Last Glacial Maximum.
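The altitude dependence described above is simple arithmetic: total cooling at a site is the sea-surface cooling plus the extra cooling from a steeper lapse rate. A minimal sketch (the 0.1 ℃/100 m steepening is the paper's figure; the sample altitude and surface cooling are hypothetical):

```python
def lgm_cooling_at_altitude(surface_cooling_c, lapse_steepening_c_per_100m, altitude_m):
    """Total LGM cooling = sea-surface cooling plus the extra cooling
    contributed by a steeper temperature lapse rate at altitude."""
    return surface_cooling_c + lapse_steepening_c_per_100m * altitude_m / 100.0

# With the reported ~0.1 C/100 m steepening, a (hypothetical) site at
# 5000 m cools 5 C more than the sea surface below it.
extra_cooling = lgm_cooling_at_altitude(0.0, 0.1, 5000.0)
```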
Ghiyasvand Mehdi
2016-01-01
In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U - t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n)-time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
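The search structure of this problem can be sketched as a binary search over t with one max-flow feasibility probe per step. This is only an illustration of the problem statement, not McCormick and Ervolina's method; the node-balance encoding of feasibility via a super source and sink is an assumption:

```python
from collections import deque

def max_flow(n, arcs, s, t):
    """Edmonds-Karp maximum flow. arcs: list of (u, v, capacity) tuples."""
    head, cap, adj = [], [], [[] for _ in range(n)]
    def add(u, v, c):
        adj[u].append(len(head)); head.append(v); cap.append(c)   # forward arc
        adj[v].append(len(head)); head.append(u); cap.append(0)   # residual arc
    for u, v, c in arcs:
        add(u, v, c)
    flow = 0
    while True:
        prev = [-1] * n; prev[s] = -2           # arc index used to reach node
        q = deque([s])
        while q and prev[t] == -1:
            u = q.popleft()
            for e in adj[u]:
                if cap[e] > 0 and prev[head[e]] == -1:
                    prev[head[e]] = e
                    q.append(head[e])
        if prev[t] == -1:
            return flow
        bottleneck = float('inf'); v = t
        while v != s:                            # paired arcs: e ^ 1 is reverse
            e = prev[v]; bottleneck = min(bottleneck, cap[e]); v = head[e ^ 1]
        v = t
        while v != s:
            e = prev[v]; cap[e] -= bottleneck; cap[e ^ 1] += bottleneck
            v = head[e ^ 1]
        flow += bottleneck

def largest_t(n, arc_list, U, balance):
    """Binary search for the largest integer t such that the t-network
    (all arc capacities U - t) still meets the node balances; each probe
    is one max-flow computation, mirroring the O(log(nU)) bound quoted
    in the abstract. Assumes the original network (t = 0) is feasible."""
    total_supply = sum(b for b in balance if b > 0)
    def feasible(t):
        S, T = n, n + 1                          # hypothetical super source/sink
        arcs = [(u, v, U - t) for (u, v) in arc_list]
        for v, b in enumerate(balance):
            if b > 0:   arcs.append((S, v, b))
            elif b < 0: arcs.append((v, T, -b))
        return max_flow(n + 2, arcs, S, T) == total_supply
    lo, hi = 0, U                                # invariant: feasible(lo)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid - 1)
    return lo
```

On the path network 0→1→2 with supply 3 at node 0, demand 3 at node 2 and U = 5, the network stays feasible down to arc capacity 3, so the largest t is 2.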
U.S. Environmental Protection Agency — Wetlands act as filters, removing or diminishing the amount of pollutants that enter surface water. Higher values for percent of wetland cover (WETLNDSPCT) may be...
Percent Wetland Cover (Future)
U.S. Environmental Protection Agency — Wetlands act as filters, removing or diminishing the amount of pollutants that enter surface water. Higher values for percent of wetland cover (WETLNDSPCT) may be...
Karlsson, J S; Ostlund, N; Larsson, B; Gerdle, B
2003-10-01
Frequency analysis of myoelectric (ME) signals, using the mean power spectral frequency (MNF), has been widely used to characterize peripheral muscle fatigue during isometric contractions assuming constant force. However, during repetitive isokinetic contractions performed with maximum effort, output (force or torque) will decrease markedly during the initial 40-60 contractions, followed by a phase with little or no change. MNF shows a similar pattern. In situations where there exists a significant relationship between MNF and output, part of the decrease in MNF may per se be related to the decrease in force during dynamic contractions. This study estimated force effects on the MNF shifts during repetitive dynamic knee extensions. Twenty healthy volunteers participated in the study and both surface ME signals (from the right vastus lateralis, vastus medialis, and rectus femoris muscles) and the biomechanical signals (force, position, and velocity) of an isokinetic dynamometer were measured. Two tests were performed: (i) 100 repetitive maximum isokinetic contractions of the right knee extensors, and (ii) five gradually increasing static knee extensions before and after (i). The corresponding ME signal time-frequency representations were calculated using the continuous wavelet transform. Compensation of the MNF variables of the repetitive contractions was performed with respect to the individual MNF-force relation based on an average of five gradually increasing contractions. Whether or not compensation was necessary was based on the shape of the MNF-force relationship. A significant compensation of the MNF was found for the repetitive isokinetic contractions. In conclusion, when investigating maximum dynamic contractions, decreases in MNF can be due to mechanisms similar to those found during sustained static contractions (force-independent component of fatigue) and in some subjects due to a direct effect of the change in force (force-dependent component of fatigue).
Percents Are Not Natural Numbers
Jacobs, Jennifer A.
2013-01-01
Adults are prone to treating percents, one representational format of rational numbers, as novel cases of natural number. This suggests that percent values are not differentiated from natural numbers; a conceptual shift from the natural numbers to the rational numbers has not yet occurred. This is most surprising, considering people are inundated…
Inspiration: One Percent and Rising
Walling, Donovan R.
2009-01-01
Inventor Thomas Edison once famously declared, "Genius is one percent inspiration and ninety-nine percent perspiration." If that's the case, then the students the author witnessed at the International Student Media Festival (ISMF) last November in Orlando, Florida, are geniuses and more. The students in the ISMF pre-conference workshop had much to…
U.S. Environmental Protection Agency — Percent reduction is based on the number of native species determined to be present as of 2015, compared with historical numbers documented prior to 1970. Data are...
Estimating a percent reduction in load
Millard, Steven P.
This article extends the work of Cohn et al. [1989] on estimating constituent loads to the problem of estimating a percent reduction in load. Three estimators are considered: the maximum likelihood (MLE), a "bias-corrected" maximum likelihood (BCMLE), and the minimum variance unbiased (MVUE). In terms of root-mean-square error, both the MVUE and BCMLE are superior to the MLE, and for the cases considered here there is no appreciable difference between the MVUE and the BCMLE. The BCMLE is constructed from quantities computed by most regression packages and is therefore simpler to compute than the MVUE (which involves approximating an infinite series). All three estimators are applied to a case study in which an agricultural tax in the Everglades agricultural area is tied to an observed percent reduction in phosphorus load. For typical hydrological data, very large sample sizes (of the order of 100 observations each in the baseline period and after) are required to estimate a percent reduction in load with reasonable precision.
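For intuition, the plug-in MLE the article starts from can be sketched for lognormal loads; the bias-corrected and minimum-variance-unbiased refinements discussed above are omitted:

```python
import math

def lognormal_mean_mle(xs):
    """Plug-in MLE of a lognormal mean: exp(mu_hat + sigma_hat^2 / 2),
    with mu_hat and sigma_hat^2 the MLEs of the log-scale parameters."""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n   # MLE variance (divide by n)
    return math.exp(mu + var / 2.0)

def percent_reduction_mle(before, after):
    """MLE of the percent reduction 100 * (1 - mean_after / mean_before)."""
    return 100.0 * (1.0 - lognormal_mean_mle(after) / lognormal_mean_mle(before))
```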
Lionello, Piero; Conte, Dario; Marzo, Luigi; Scarascia, Luca
2017-04-01
The maximum level that water reaches during a storm along the coast has important consequences on coastal defences and coastal erosion. It depends on future sea level, storm surges, ocean wind generated waves, vertical land motion. The future sea level in turn depends on water mass addition and steric contributions (with a thermosteric and halosteric component). This study proposes a practical methodology for assessing the effects of these different factors (which need to be estimated at sub-regional scale) and applies it to a 7-member model ensemble of regional climate model simulations (developed and carried out in the CIRCE fp6 project) covering the period 1951-2050 under the A1B emission scenario. Sea level pressure and wind fields are used for forcing a hydro-dynamical shallow water model (HYPSE), wind fields are used for forcing a wave model (WAM), obtaining estimates of storm surges and ocean waves, respectively. Thermosteric and halosteric effects are diagnosed from the projections of sea temperature and salinity. Steric expansion and storminess are shown to be contrasting factors: in the next decades wave and storm surge maxima will decrease while thermosteric expansion will increase mean sea level. These two effects will to a large extent compensate each other, so that their superposition will increase/decrease the maximum water level along two comparable fractions of the coastline (about 15-20%) by the mid 21st century. However, mass addition across the Gibraltar Strait to the Mediterranean Sea will likely become the dominant factor and determine an increase of the maximum water level along most of the coastline.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
The Algebra of the Cumulative Percent Operation.
Berry, Andrew J.
2002-01-01
Discusses how to help students avoid some pervasive reasoning errors in solving cumulative percent problems. Discusses the meaning of "a% + b%", the additive inverse of "a%", and other useful applications. Emphasizes the operational aspect of the cumulative percent concept. (KHR)
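The operational rule at issue can be stated in a few lines: cumulative percent changes compose multiplicatively, so a% followed by b% yields a + b + ab/100 percent, and the change that undoes a% is -100a/(100 + a). A small sketch:

```python
def cumulative_percent(*changes):
    """Compose successive percent changes: a% then b% gives
    a + b + a*b/100, i.e. the factors (1 + a/100) multiply."""
    factor = 1.0
    for c in changes:
        factor *= 1.0 + c / 100.0
    return 100.0 * (factor - 1.0)

def percent_inverse(a):
    """The percent change that exactly undoes a%: -100*a / (100 + a)."""
    return -100.0 * a / (100.0 + a)

# +10% followed by -10% is a net -1%, not 0%; the inverse of +25% is -20%.
net = cumulative_percent(10, -10)
undo = percent_inverse(25)
```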
Alzheimer's Deaths Jump 55 Percent: CDC
As more baby boomers age, deaths from Alzheimer's disease have jumped 55 percent, and in a ...
9 CFR 381.168 - Maximum percent of skin in certain poultry products.
2010-01-01
... poultry products. 381.168 Section 381.168 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION POULTRY PRODUCTS INSPECTION REGULATIONS Definitions...
7 CFR 762.129 - Percent of guarantee and maximum loss.
2010-01-01
... portion including: (1) The pro rata share of principal and interest indebtedness as evidenced by the note or by assumption agreement; (2) Any loan subsidy due and owing; (3) The pro rata share of principal... agreement. Provided that the lender has paid the Agency its pro rata share of the recapture amount due....
Maturo, Anthony J.
2002-01-01
Don't ever take your support staff for granted. By support staff, I mean the people in personnel, logistics, and finance; the ones who can make things happen with a phone call or a signature, or by the same token frustrate you to no end by their inaction; these are people you must depend on. I've spent a lot of time thinking about how to cultivate relationships with my support staff that work to the advantage of both of us. The most important thing that I have learned working with people, any people--and I will tell you how I learned this in a minute--is that there are some folks you just can't motivate, so forget it, don't try; others you certainly can with a little psychology and some effort; and the best of the bunch, what I call the 80 percenters, you don't need to motivate because they're already on the team and performing beautifully. The ones you can't change are rocks. Face up to it, and just kick them out of your way. I have a reputation with the people who don't want to perform or be part of the team. They don't come near me. If someone's a rock, I pick up on it right away, and I will walk around him or her to find someone better. The ones who can be motivated I take time to nurture. I consider them my projects. A lot of times these wannabes are people who want to help but don't know how. Listen, you can work with them. Lots of people in organizations have the mindset that all that matters are the regulations. God forbid if you ever work outside those regulations. They've got one foot on that regulation and they're holding it tight like a baby holds a blanket. What you're looking for is that first sign that their minds are opening. Usually you hear it in their vocabulary. What used to sound like "We can't do that ... the regulations won't allow it ... we have never done this before," well, suddenly that changes to "We have options ... let's take a look at the options ... let me research this and get back to you." The 80 percenters you want to nurture too, but
Percent area coverage through image analysis
Wong, Chung M.; Hong, Sung M.; Liu, De-Ling
2016-09-01
The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has conventionally been calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and may therefore introduce error into the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and the cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results show that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
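The bin-coefficient calculation described above can be sketched as follows. The bin densities and the circular-particle area model here are illustrative placeholders, not the MIL-STD-1246C coefficients:

```python
import math

def percent_area_coverage(bin_densities, bin_coefficients):
    """PAC (%) = sum over size bins of particle surface density
    (particles per unit area) times an area coefficient (projected
    particle area expressed as percent of the reference area)."""
    return sum(d * c for d, c in zip(bin_densities, bin_coefficients))

def circular_coefficient(diameter_um):
    """Area coefficient for an idealized circular particle: its projected
    area as a percent of a 1 mm^2 reference area (illustrative model only;
    MIL-STD-1246C uses composite geometric shapes)."""
    area_mm2 = math.pi * (diameter_um / 1000.0) ** 2 / 4.0
    return 100.0 * area_mm2
```

A 1000 µm (1 mm) circular particle covers π/4 of a square millimetre, i.e. a coefficient of about 78.5.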
China's Imports Up 15 Percent in 2002
Anonymous
2003-01-01
Based on the latest statistical figures released by China Customs, China's total imports rose 15 percent in 2002 as compared to the previous year. Spending on China's crude oil imports rose 9.4 percent to 12.757 billion yuan in 2002, while the import volume of oil products dropped 4.9 percent to 20.34
EPA guidance on complying with the federal compatibility requirement for underground storage tank (UST) systems storing gasoline containing greater than 10 percent ethanol or diesel containing greater than 20 percent biodiesel.
Barkalow, R.H.; Jackson, J.J.; Gell, M.; Leverant, G.R.
1975-01-01
The eutectic alloy Ni-20.0 percent Nb-2.5 percent Al-6.0 percent Cr was tested in short-term creep and long-term exposure to service conditions to assess its suitability for high temperature turbine blade applications. Long-time exposure showed the lamellar microstructure of the alloy to be exceptionally stable. Other properties tested were notch sensitivity, isothermal and thermomechanical fatigue strength, shear strength, and transverse ductility. It was shown that this alloy is superior to the best currently available directionally solidified superalloys over the temperature/stress conditions encountered in turbine airfoils.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
CRED Cumulative Map of Percent Scleractinian Coral Cover at Zealandia
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Maug
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Tutuila
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Guguan
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Arakane
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Saipan
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Sarigan
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Agrihan
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Anatahan
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
王子明; 张鹏; 种铁; 赵丽华
2004-01-01
Objective To evaluate the use of prostate specific antigen (PSA) and percent free PSA (fPSA) for the diagnosis of prostate cancer (Pca) and benign prostate hyperplasia (BPH). Methods 315 men with BPH and 55 men with Pca were randomly chosen; serum fPSA and total PSA were determined by ELISA, and the sensitivity and specificity of PSA and percent fPSA for the diagnosis of Pca were compared. Results When using PSA and percent fPSA for the diagnosis of prostate cancer, the sensitivity was similar (89.8% vs. 94.5%, P>0.05), but the specificity was significantly different (52.7% vs. 89.8%, P<0.005). Conclusions Using percent fPSA might decrease false positives and avoid 37.1% of negative biopsies compared with PSA; it is very valuable for the diagnosis of Pca.
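The sensitivity/specificity comparison reported above reduces to simple counting at a decision cutoff. A sketch with hypothetical percent-fPSA values (lower values suggest cancer, hence the positive_below flag):

```python
def sensitivity_specificity(scores_cancer, scores_benign, cutoff, positive_below=True):
    """Sensitivity = fraction of cancer cases flagged positive;
    specificity = fraction of benign cases flagged negative.
    positive_below=True flags a score BELOW the cutoff as positive,
    as for percent free PSA."""
    if positive_below:
        tp = sum(s < cutoff for s in scores_cancer)
        tn = sum(s >= cutoff for s in scores_benign)
    else:
        tp = sum(s >= cutoff for s in scores_cancer)
        tn = sum(s < cutoff for s in scores_benign)
    return tp / len(scores_cancer), tn / len(scores_benign)

# Hypothetical fPSA ratios at a hypothetical cutoff of 0.15:
sens, spec = sensitivity_specificity([0.10, 0.12, 0.30], [0.20, 0.25, 0.08], 0.15)
```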
Hemperly, V.C.
1976-05-19
The uranium-2 wt percent molybdenum alloy was prepared, processed, and age hardened to meet a minimum 930-MPa yield strength (0.2 percent) with a minimum of 10 percent elongation. These mechanical properties were obtained with a carbon level up to 300 ppm in the alloy. The tensile-test ductility is lowered by the humidity of the laboratory atmosphere. (auth)
The Texas Ten Percent Plan's Impact on College Enrollment
Daugherty, Lindsay; Martorell, Paco; McFarlin, Isaac, Jr.
2014-01-01
The Texas Ten Percent Plan (TTP) provides students in the top 10 percent of their high-school class with automatic admission to any public university in the state, including the two flagship schools, the University of Texas at Austin and Texas A&M. Texas created the policy in 1997 after a federal appellate court ruled that the state's previous…
Yeomans, C M; Hoffman, C A
1953-01-01
Thermal-shock resistance of a ceramic comprising 60 percent boron carbide and 40 percent titanium diboride was investigated. The material has thermal shock resistance comparable to that of NBS body 4811C and that of zirconia, but is inferior to beryllia, alumina, and titanium-carbide ceramals. It is not considered suitable for turbine blades.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease their influence on the estimated hazard.
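The extreme value construction the authors describe can be sketched under textbook assumptions (Poisson occurrence, untruncated Gutenberg-Richter rates; all parameter values hypothetical): the CDF of the maximum magnitude in a future interval is the probability that no event exceeds m in that interval.

```python
import math

def cdf_interval_maximum(m, t_years, rate_above_m0, m0, b_value):
    """P(max magnitude <= m within t_years) under Poisson occurrence with
    an untruncated Gutenberg-Richter rate of events exceeding m:
    lambda(m) = rate_above_m0 * 10**(-b * (m - m0))."""
    rate_exceeding_m = rate_above_m0 * 10.0 ** (-b_value * (m - m0))
    return math.exp(-rate_exceeding_m * t_years)

# One M>=5 event per year and b = 1: the probability that no M>7 event
# occurs in 50 years is exp(-0.01 * 50) ~ 0.61. For any finite m this
# CDF tends to 0 as t_years grows, which is one way to see why an
# "infinite time" maximum magnitude estimate resists testing.
p = cdf_interval_maximum(7.0, 50.0, 1.0, 5.0, 1.0)
```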
Analysis of Percent Elongation for Ductile Metal in Uniaxial Tension
WANG Xue-bin; YANG Mei; JIANG Jian
2005-01-01
Percent elongation of ductile metal in uniaxial tension due to non-homogeneity was analyzed based on gradient-dependent plasticity. Three assumptions are used to get the analytical solution of percent elongation: one is the static equilibrium condition in the axial direction; another is that plastic volumetric strain is zero in the necking zone; the other is that the diameter in the unloading zone remains constant after strain localization is initiated. The strain gradient term was introduced into the yield function of classical plastic mechanics to obtain the analytical solution of distributed plastic strain. Integrating the plastic strain and considering the influence of necking on plastic elongation, a one-dimensional analytical solution of percent elongation was proposed. The analytical solution shows that the percent elongation is inversely proportional to the gauge length, and the solution is formally similar to the earlier empirical formula proposed by Barba. Comparisons of existing experimental results and the present analytical solutions for the relation between load and total elongation and for the relation between percent elongation and gauge length were carried out, and the new mechanical model for percent elongation was verified. Moreover, higher ductility, toughness and heterogeneity can cause much larger percent elongation, which coincides with usual viewpoints.
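Barba's empirical relation, which the analytical solution above resembles, splits elongation into a gauge-length-independent uniform term and a necking term scaling as sqrt(A)/L. A sketch with hypothetical coefficients (fitted per material in practice):

```python
import math

def percent_elongation_barba(gauge_length_mm, area_mm2, uniform_pct, necking_coeff):
    """Barba-style relation: percent elongation = uniform term
    + necking_coeff * sqrt(cross-section area) / gauge length,
    so the measured elongation falls as the gauge length grows."""
    return uniform_pct + necking_coeff * math.sqrt(area_mm2) / gauge_length_mm

# Doubling the gauge length halves only the necking contribution:
e_short = percent_elongation_barba(50.0, 100.0, 20.0, 100.0)    # 40%
e_long = percent_elongation_barba(100.0, 100.0, 20.0, 100.0)    # 30%
```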
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Percent body fat, fractures and risk of osteoporosis in women.
Wyshak, G
2010-06-01
Globally, in an aging population, osteoporosis and fractures are emerging as major public health problems; accessible and affordable recognition, prevention and treatment strategies are needed. Percent body fat is known to be associated with bone mineral density and fractures. This paper uses an innovative, virtually cost-free method to estimate percent body fat from age, height and weight, and assesses its validity by examining the association between percent body fat and fractures among women 39 and older. An epidemiologic study. 3940 college alumnae, median age 53.6, participated by responding to a mailed questionnaire covering medical history, behavioral factors, birth date, weight and height. T-tests, chi-square and multivariable logistic regression. Percent body fat estimated from age, weight, height and gender. Associations of fractures with percent body fat are expressed as odds ratios: for osteoporotic fractures (wrist, hip and/or x-ray confirmed vertebral), the adjusted OR = 2.41, 95% CI (1.65, 3.54). Percent body fat estimated from age, height and weight may be a valid, cost-saving, and cost-effective alternative tool for screening and assessing risk of osteoporosis in settings where dual x-ray absorptiometry (DXA) or other radiological techniques are too costly or unavailable.
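The abstract does not reproduce the paper's regression, so as a stand-in the well-known Deurenberg adult equation illustrates how percent body fat can be estimated from age, height and weight alone:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) over height (m) squared."""
    return weight_kg / height_m ** 2

def percent_body_fat(bmi_value, age_years, is_male):
    """Deurenberg et al. adult equation, used here only as an
    illustrative stand-in for the paper's own age/height/weight
    regression, which the abstract does not give."""
    return 1.20 * bmi_value + 0.23 * age_years - 10.8 * (1 if is_male else 0) - 5.4

# Hypothetical subject: 70 kg, 1.65 m, 54-year-old woman.
pbf = percent_body_fat(bmi(70.0, 1.65), 54.0, is_male=False)
```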
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
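The Mean Energy Model mentioned above has a concrete solution: the entropy-maximizing distribution under a mean-energy constraint is the Gibbs form p_i proportional to exp(-beta * E_i), with beta chosen to meet the constraint. A sketch that finds beta by bisection:

```python
import math

def maxent_distribution(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum entropy distribution subject to a mean-energy constraint:
    p_i proportional to exp(-beta * E_i), with beta found by bisection so
    that sum(p_i * E_i) matches target_mean (the Mean Energy Model)."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(wi * e for wi, e in zip(w, energies)) / z
    # mean_at is decreasing in beta, so bisect on the bracket [lo, hi].
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_at(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2.0
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]
```

For energies {0, 1, 2} and target mean 1, the constraint is met at beta = 0, i.e. the uniform distribution; lowering the target mean tilts the weight toward the low-energy states.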
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CC used in the design of an SFCL can be determined.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
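Correntropy objectives of this kind are commonly optimized by half-quadratic reweighting: alternate between Gaussian-kernel weights on the residuals and a weighted refit. The sketch below is our own minimal illustration for a 1-D linear predictor (not the paper's classifier), showing how the weights suppress a grossly corrupted label.

```python
import math

def weighted_linfit(x, y, w):
    """Closed-form weighted least squares for y ≈ a*x + b."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    a = num / den
    return a, my - a * mx

def correntropy_fit(x, y, sigma=1.0, iters=30):
    """Half-quadratic optimization of a correntropy objective: alternate
    between Gaussian-kernel weights on the residuals and a weighted
    least-squares refit, which down-weights outlying labels."""
    w = [1.0] * len(x)
    a = b = 0.0
    for _ in range(iters):
        a, b = weighted_linfit(x, y, w)
        w = [math.exp(-((yi - (a * xi + b)) ** 2) / (2 * sigma ** 2))
             for xi, yi in zip(x, y)]
    return a, b

# y = 2x with one grossly corrupted label; the outlier is suppressed
x = [0, 1, 2, 3, 4, 5]
y = [0, 2, 4, 6, 8, 50]
a, b = correntropy_fit(x, y, sigma=2.0)
```

An ordinary least-squares fit on the same data would be dragged far off the true line by the corrupted point; the reweighting drives its weight toward zero.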
Comparing proton conductivity of polymer electrolytes by percent conducting volume
Kim, Yu Seung [Los Alamos National Laboratory; Pivovar, Bryan [NREL
2009-01-01
Proton conductivity of sulfonated polymers plays a key role in polymer electrolyte membrane fuel cells. Mass-based water uptake and ion exchange capacity of sulfonated polymers have failed to correlate with their proton conductivity. In this paper, we report a length-scale parameter, percent conducting volume, which is simply obtained from the chemical structure of a polymer, to compare the proton conductivity of wholly aromatic sulfonated polymers with perfluorosulfonic acid. The effect of morphology on proton conductivity at lower RH conditions is discussed using the percent conducting volume parameter.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
2012-01-01
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
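Within kernel equating, the PRE statistic compares the p-th moments of the equated-score distribution with those of the reference-score distribution. The sketch below is our own minimal implementation over discrete score distributions (function names are ours; it is not the operational kernel-equating software).

```python
def raw_moment(scores, weights, p):
    """p-th raw moment of a discrete score distribution."""
    return sum(w * s ** p for s, w in zip(scores, weights))

def pre(eq_scores, eq_weights, ref_scores, ref_weights, p):
    """Percent Relative Error for moment order p:
    PRE(p) = 100 * (mu_p(equated) - mu_p(reference)) / mu_p(reference)."""
    mu_eq = raw_moment(eq_scores, eq_weights, p)
    mu_ref = raw_moment(ref_scores, ref_weights, p)
    return 100.0 * (mu_eq - mu_ref) / mu_ref

# Identical distributions give PRE(p) = 0 for every moment order p
scores = [0, 1, 2, 3]
w = [0.1, 0.4, 0.3, 0.2]
zero = [pre(scores, w, scores, w, p) for p in (1, 2, 3)]
```

A positive PRE at low orders indicates that the equating function shifts the score distribution upward relative to the reference.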
35 GHz integrated circuit rectifying antenna with 33 percent efficiency
Yoo, T.-W.; Chang, K.
1991-01-01
A 35 GHz integrated circuit rectifying antenna (rectenna) has been developed using a microstrip dipole antenna and beam-lead mixer diode. Greater than 33 percent conversion efficiency has been achieved. The circuit should have applications in microwave/millimeter-wave power transmission and detection.
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Serum Predictors of Percent Lean Mass in Young Adults.
Lustgarten, Michael S; Price, Lori L; Phillips, Edward M; Kirn, Dylan R; Mills, John; Fielding, Roger A
2016-08-01
Lustgarten, MS, Price, LL, Phillips, EM, Kirn, DR, Mills, J, and Fielding, RA. Serum predictors of percent lean mass in young adults. J Strength Cond Res 30(8): 2194-2201, 2016-Elevated lean (skeletal muscle) mass is associated with increased muscle strength and anaerobic exercise performance, whereas low levels of lean mass are associated with insulin resistance and sarcopenia. Therefore, studies aimed at obtaining an improved understanding of mechanisms related to the quantity of lean mass are of interest. Percent lean mass (total lean mass/body weight × 100) in 77 young subjects (18-35 years) was measured with dual-energy x-ray absorptiometry. Twenty analytes and 296 metabolites were evaluated with the use of the standard chemistry screen and mass spectrometry-based metabolomic profiling, respectively. Sex-adjusted multivariable linear regression was used to determine serum analytes and metabolites significantly (p ≤ 0.05 and q ≤ 0.30) associated with the percent lean mass. Two enzymes (alkaline phosphatase and serum glutamate oxaloacetate aminotransferase) and 29 metabolites were found to be significantly associated with the percent lean mass, including metabolites related to microbial metabolism, uremia, inflammation, oxidative stress, branched-chain amino acid metabolism, insulin sensitivity, glycerolipid metabolism, and xenobiotics. Use of sex-adjusted stepwise regression to obtain a final covariate predictor model identified the combination of 5 analytes and metabolites as overall predictors of the percent lean mass (model R = 82.5%). Collectively, these data suggest that a complex interplay of various metabolic processes underlies the maintenance of lean mass in young healthy adults.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced strain hardening and tensile ductility.
Percent Errors in the Estimation of Demand for Secondary Items.
1985-11-01
percent errors, and the program change factor (PCF) to predict item demand during the procurement leadtime (PROLT) for the item. The PCF accounts for...type of demand it was. It may have been demanded over two years ago or it may have been a non-recurring demand. Since CCB only retains two years of...observed distributions could be compared with negative binomial distributions. For each item the computed ratio of actual demand to expected demand was
The 50 percent solution to reducing energy costs.
Whitson, B Alan
2012-11-01
Hospitals can use a five-step process to achieve energy savings: Define a minimum acceptable ROI or hurdle rate. Seek incentives, rebates, and tax benefits. Set a 10-year investment horizon for all project portfolios. Create a system for tracking and reporting the operational and financial performance of the project portfolios. At the end of the year, return 50 percent of the savings to the facilities department and use the rest to fund additional projects.
Intertemporal discoordination in the 100 percent reserve banking system
Baeriswyl, Romain
2014-01-01
The 100%-Money Plan advocated by Fisher (1936) has a Misesian flavor as it aims at mitigating intertemporal discoordination by reducing (i) the discrepancy between investment and voluntary savings, and (ii) the manipulation of interest rates by monetary injections. Recent proposals to adopt the 100 percent reserve banking system, such as the Chicago Plan Revisited by Benes and Kumhof (2013) or the Limited Purpose Banking by Kotlikoff (2010), take, however, a fundamentally different attitude t...
Diminishing returns from increased percent Bt cotton: the case of pink bollworm.
Huang, Yunxin; Wan, Peng; Zhang, Huannan; Huang, Minsong; Li, Zhaohua; Gould, Fred
2013-01-01
Regional suppression of pests by transgenic crops producing insecticidal proteins from Bacillus thuringiensis (Bt) has been reported in several cropping systems, but little is known about the functional relationship between the ultimate pest population density and the pervasiveness of Bt crops. Here we address this issue by analyzing 16 years of field data on pink bollworm (Pectinophora gossypiella) population density and percentage of Bt cotton in the Yangtze River Valley of China. In this region, the percentage of cotton hectares planted with Bt cotton increased from 9% in 2000 to 94% in 2009 and 2010. We find that as the percent Bt cotton increased over the years, the cross-year growth rate of pink bollworm from the last generation of one year to the first generation of the next year decreased. However, as the percent Bt cotton increased, the within-year growth rate of pink bollworm from the first to last generation of the same year increased, with a slope approximately opposite to that of the cross-year rates. As a result, we did not find a statistically significant decline in the annual growth rate of pink bollworm as the percent Bt cotton increased over time. Consistent with the data, our modeling analyses predict that the regional average density of pink bollworm declines as the percent Bt cotton increases, but the higher the percent Bt cotton, the slower the decline in pest density. Specifically, we find that 95% Bt cotton is predicted to cause only 3% more reduction in larval density than 80% Bt cotton. The results here suggest that density dependence can act against the decline in pest density and diminish the net effects of Bt cotton on suppression of pink bollworm in the study region. The findings call for more studies of the interactions between pest density-dependence and Bt crops.
Red blood cell decreases of microgravity
Johnson, P. C.
1985-01-01
Postflight decreases in red blood cell mass (RBCM) have regularly been recorded after exposure to microgravity. These 5-25 percent decreases do not relate to the mission duration, workload, caloric intake or the type of spacecraft used. The decrease is accompanied by normal red cell survival, increased ferritin levels, normal radioactive iron studies, and increases in mean red blood cell volume. Comparable decreases in red blood cell mass are not found after bed rest, a commonly used simulation of the microgravity state. Inhibited bone marrow erythropoiesis has not been proven to date, although reticulocyte numbers in the peripheral circulation are decreased by about 50 percent. To date, the cause of the microgravity-induced decreases in RBCM is unknown. Increased splenic trapping of circulating red blood cells seems the most logical way to explain the results obtained.
A 99 percent purity molecular sieve oxygen generator
Miller, G. W.
1991-01-01
Molecular sieve oxygen generating systems (MSOGS) have become the accepted method for the production of breathable oxygen on military aircraft. These systems separate oxygen from aircraft engine bleed air by application of pressure swing adsorption (PSA) technology. Oxygen is concentrated by preferential adsorption of nitrogen in a zeolite molecular sieve. However, the inability of current zeolite molecular sieves to discriminate between oxygen and argon results in an oxygen purity limitation of 93-95 percent (both oxygen and argon concentrate). The goal was to develop a new PSA process capable of exceeding the present oxygen purity limitations. A novel molecular sieve oxygen concentrator was developed which is capable of generating oxygen concentrations of up to 99.7 percent directly from air. The process comprises four adsorbent beds, two containing a zeolite molecular sieve and two containing a carbon molecular sieve. This new process may find use in aircraft and medical breathing systems, and industrial air separation systems. The commercial potential of the process is currently being evaluated.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
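The two algorithms named in the abstract can be sketched directly. This is our own minimal Python (assuming unit-capacity bins and fractional item sizes); under the maximum resource objective, using more bins is better, so on the instance below First-Fit-Increasing beats First-Fit-Decreasing.

```python
def first_fit(items, capacity=1.0):
    """First-Fit: place each item in the first open bin it fits in,
    opening a new bin when none fits. Returns the bins' contents."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def first_fit_increasing(items, capacity=1.0):
    """First-Fit applied to items sorted in increasing order."""
    return first_fit(sorted(items), capacity)

def first_fit_decreasing(items, capacity=1.0):
    """First-Fit applied to items sorted in decreasing order."""
    return first_fit(sorted(items, reverse=True), capacity)

# Small items packed first leave no room for the large ones, forcing
# extra bins: good for the maximum resource objective.
items = [0.6, 0.6, 0.4, 0.4]
ffi_bins = first_fit_increasing(items)
ffd_bins = first_fit_decreasing(items)
```

Here First-Fit-Increasing opens three bins while First-Fit-Decreasing packs everything into two, illustrating why the increasing order is the natural heuristic for the maximum resource variant.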
Systolic Pressure in Different Percents of Stenosis at Major Arteries
Mirzaee, Mohammad Reza; Firoozabadi, Bahar; Dandaneband, Meitham
2016-01-01
Modeling the human cardiovascular system is always an important issue. One of the most effective methods is using a lumped model to reach a complete model of the human cardiovascular system. Such modeling, with advanced considerations, is used in this paper. Some of these considerations are as follows: exact simulation of the ventricles as pressure suppliers, peristaltic motion of the descending arteries as additional suppliers, and division of each vessel into more than one compartment to reach more accurate answers. Finally, a circuit with more than 150 RLC segments and different elements is built. The complex circuit is then verified and, at the end, obstruction, an important abnormality, is investigated. For this aim, different percents of obstruction in vital arteries are considered and the results are presented as graphs. The simulation and its results are consistent with physiological texts. To obtain useful information about artery characteristics, a 36-vessel mod...
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
One Percent Strömvil Photometry in M 67
Philip, A. G. D.; Boyle, R. P.; Janusz, R.
2005-05-01
The Vatican Advanced Technology Telescope on Mt. Graham is being used in a program of CCD photometry of open and globular clusters. We are using the Strömvil System (Straižys et al. 1996), a combination of the Strömgren and Vilnius Systems. This system allows stars to be classified as to temperature, surface gravity, metallicity and reddening from the photometric measures alone. However, to make accurate estimates of the stellar parameters the photometry should be accurate to 1 or 1.5 percent. In our initial runs on the VATT we did not achieve this accuracy. The problem turned out to be scattered light in the telescope, and this has now been reduced so we can do accurate photometry. Boyle has written a routine in IRAF which allows us to correct the flats for any differences. We take rotated frames and also frames which are offset in position by one third of a frame, east-west and north-south. Measures of the offset stars give us the corrections that need to be made to the flat. Robert Janusz has written a program, the CommandLog, which allows us to paste IRAF commands in the correct order to reduce measures made on a given observing run. There is an automatic version where one can test various parameters and get a set of solutions. Now we have a set of Strömvil frames in the open cluster M 67, and we compare our color-magnitude diagram with those of BATC (Fan et al. 1996) and Vilnius (Boyle et al. 1998). A preliminary report of the M 67 photometry will be found in Laugalys et al. (2004). Here we report on a selected set of stars in the M 67 frames, those with errors of 1 percent or less.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
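The paper's LP couples multiple sessions with coding-aware conflict constraints; as a much simpler baseline (our own sketch, not the authors' formulation), the single-pair throughput of a wired network without interference or coding is just a max-flow, computable with Edmonds-Karp.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow: repeatedly push flow along a shortest
    augmenting path. `capacity` is a dict-of-dicts adjacency map."""
    total = 0
    # residual capacities, including zero-capacity reverse edges
    res = {u: dict(v) for u, v in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # recover the augmenting path, find its bottleneck, update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        total += aug

# two unit-capacity disjoint paths from s to t carry 2 units of flow
cap = {'s': {'a': 1, 'b': 1}, 'a': {'t': 1}, 'b': {'t': 1}, 't': {}}
```

In the coded setting the feasible region is larger: network coding relaxes some conflict constraints, which is why the paper's LP can exceed this no-coding baseline.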
Effects of covert subject actions on percent body fat by air-displacement plethysmography.
Tegenkamp, Michelle H; Clark, R Randall; Schoeller, Dale A; Landry, Greg L
2011-07-01
Air-displacement plethysmography (ADP) is used for estimation of body composition; however, some individuals, such as athletes in weight-classification sports, may use covert methods during ADP testing to alter their apparent percent body fat. The purpose of this study was to examine the effect of covert subject actions on percent body fat measured by ADP. Subjects underwent body composition analysis in the Bod Pod following the standard procedure using the manufacturer's guidelines. The subjects then underwent 8 more measurements while performing the following intentional manipulations: 4 breathing patterns altering lung volume, foot movement to disrupt air, hand cupping to trap air, and heat and cold exposure before entering the chamber. Increasing and decreasing lung volume during thoracic volume measurement and during body density measurement altered the percent body fat assessment (p < 0.001). High lung volume during thoracic gas measures overestimated fat by 3.7 ± 2.1 percentage points. Lowered lung volume during body volume measures overestimated body fat by an additional 2.2 ± 2.1 percentage points. The heat and cold exposure, tapping, and cupping treatments provided similar estimates of percent body fat when compared with the standard condition. These results demonstrate that the subjects were able to covertly change their estimated ADP body composition value by altering breathing when compared with the standard condition. We recommend that sports conditioning coaches, athletic trainers, and technicians administering ADP be aware of the potential effects of these covert actions. The individual responsible for administering ADP should remain vigilant during testing to detect deliberate altered breathing patterns by athletes in an effort to gain a competitive advantage by manipulating their body composition assessment.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Peng Zhang; Ziming Wang; Tie Chong; Lihua Zhao
2007-01-01
Objective: To measure the percent of free prostate specific antigen (fPSA) among men without prostate diseases in the Xi'an area, to study the relationship of percent fPSA with age and of pathological grade and clinical stage of prostate cancer (PCa) with percent fPSA, and to analyze the difference between the data in China and the overseas data to determine an appropriate reference range for Chinese men. Methods: A total of 713 participants were enrolled in the study, with PSA and fPSA in serum measured and percent fPSA calculated. Of the 713 cases, 679 without prostate diseases were divided into 5 groups by age, and the relationships of PSA, fPSA and percent fPSA with age were studied, respectively. The relationship of pathological grade and clinical stage with percent fPSA of the 34 participants with PCa was also studied. With the help of the related data of men without prostate disease, an appropriate reference range for Chinese men was established. Results: Increases in PSA and fPSA were correlated with age, while there was no significant correlation between age and percent fPSA. Percent fPSA was also correlated with pathological grade and clinical stage of PCa. The percent fPSA of men without prostate disease in the Xi'an area was significantly lower than that in the related overseas data. The reference range of percent fPSA for Chinese men was ≥15%. Conclusion: Percent fPSA may be more useful than PSA in the detection of prostate cancer. As percent fPSA decreases, the pathological grade and clinical stage increase and the degree of malignancy increases. A reference range of ≥15% is more appropriate for Chinese men.
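The statistic itself is a one-line ratio. A hedged sketch (function names are ours; the 15% cutoff is the reference range reported in this abstract, not a universal clinical threshold):

```python
def percent_fpsa(fpsa, total_psa):
    """Percent free PSA = free PSA / total PSA * 100."""
    return 100.0 * fpsa / total_psa

def below_reference(fpsa, total_psa, cutoff=15.0):
    """Values under the cutoff (>=15% was the reference range reported
    for men without prostate disease in this study) flag a sample as
    falling below the normal range."""
    return percent_fpsa(fpsa, total_psa) < cutoff

# e.g. 1.0 ng/mL free PSA against 10.0 ng/mL total PSA is 10 percent
ratio = percent_fpsa(1.0, 10.0)
```

Both PSA values must be in the same units (typically ng/mL) for the ratio to be meaningful.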
25 CFR 141.36 - Maximum finance charges on pawn transactions.
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Maximum finance charges on pawn transactions. 141.36... PRACTICES ON THE NAVAJO, HOPI AND ZUNI RESERVATIONS Pawnbroker Practices § 141.36 Maximum finance charges on pawn transactions. No pawnbroker may impose an annual finance charge greater than twenty-four percent...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the components of the biodiesel blend at any two different temperatures. We observe that the density of a blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
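The model as described can be sketched in a few lines: each component density is taken linear in temperature, fitted from the two measured points the abstract says are required, and Kay's mixing rule volume-averages the components. The numbers below are hypothetical, for illustration only, not data from the paper.

```python
def linear_density(rho1, t1, rho2, t2):
    """Fit rho(T) = a + b*T from densities at two temperatures (the only
    inputs the proposed model needs) and return rho as a function of T."""
    b = (rho2 - rho1) / (t2 - t1)
    return lambda t: rho1 + b * (t - t1)

def blend_density(vol_frac_bio, rho_bio, rho_diesel, t):
    """Kay's mixing rule: blend density is the volume-fraction weighted
    average of the component densities at temperature t."""
    x = vol_frac_bio
    return x * rho_bio(t) + (1 - x) * rho_diesel(t)

# hypothetical component data, kg/m^3, at 15 C and 40 C
rho_bio = linear_density(880.0, 15.0, 862.0, 40.0)
rho_diesel = linear_density(835.0, 15.0, 818.0, 40.0)

# a B20 blend (20 volume percent biodiesel) at 25 C
rho_b20 = blend_density(0.20, rho_bio, rho_diesel, 25.0)
```

Consistent with the abstract, the sketch gives a density that falls linearly with temperature and rises with the biodiesel volume fraction.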
Near Zero Emissions at 50 Percent Thermal Efficiency
None, None
2012-12-31
Detroit Diesel Corporation (DDC) has successfully completed a 10 year DOE sponsored heavy-duty truck engine program, hereafter referred to as the NZ-50 program. This program was split into two major phases. The first phase was called Near-Zero Emission at 50 Percent Thermal Efficiency, and was completed in 2007. The second phase was initiated in 2006, and this phase was named Advancements in Engine Combustion Systems to Enable High-Efficiency Clean Combustion for Heavy-Duty Engines. This phase was completed in September, 2010. The key objectives of the NZ-50 program for this first phase were to: Quantify thermal efficiency degradation associated with reduction of engine-out NOx emissions to the 2007 regulated level of ~1.1 g/hp-hr. Implement an integrated analytical/experimental development plan for improving subsystem and component capabilities in support of emerging engine technologies for emissions and thermal efficiency goals of the program. Test prototype subsystem hardware featuring technology enhancements and demonstrate effective application on a multi-cylinder, production feasible heavy-duty engine test-bed. Optimize subsystem components and engine controls (calibration) to demonstrate thermal efficiency that is in compliance with the DOE 2005 Joule milestone, meaning greater than 45% thermal efficiency at 2007 emission levels. Develop technology roadmap for meeting emission regulations of 2010 and beyond while mitigating the associated degradation in engine fuel consumption. Ultimately, develop technical prime-path for meeting the overall goal of the NZ-50 program, i.e., 50% thermal efficiency at 2010 regulated emissions. These objectives were successfully met during the course of the NZ-50 program. The most noteworthy achievements in this program are summarized as follows: Demonstrated technologies through advanced integrated experiments and analysis to achieve the technical objectives of the NZ-50 program with 50.2% equivalent thermal efficiency under
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Buzzard, R. J.; Metroka, R. R.
1973-01-01
The effect of controlled nitrogen additions was evaluated on the mechanical properties of T-111 (Ta-8W-2Hf) fuel pin cladding material proposed for use in a lithium-cooled nuclear reactor concept. Additions of 80 to 1125 ppm nitrogen resulted in increased strengthening of T-111 tubular section test specimens at temperatures of 25 to 1200 C. Homogeneous distributions of up to 500 ppm nitrogen did not seriously decrease tensile ductility. Both single and two-phase microstructures, with hafnium nitride as the second phase, were evaluated in this study.
Amal M. Abo El-Maaty; Gamal A. El Sisy; Mona H. Shaker; Omima H. Ezzo
2014-01-01
Objectives: To study the effect of age and body fat on leptin levels and semen parameters of Arab horses. Methods: Fifteen fertile Arab stallions of different ages belonging to the Police Academy were divided into three equal groups according to their age: old horses were those over 18 years (18-27), mid-age horses 13 to 18 years, and young horses those under 12 years (7-11). Semen was evaluated three times for each stallion. Blood and seminal plasma were assayed for leptin, testosterone and estradiol. Subcutaneous rump fat thickness was measured using ultrasound to estimate body fat percent and fat mass percent. Results: All body fat parameters were significantly high in young stallions and decreased with increasing age. As age increased, testosterone levels increased but leptin levels decreased. Age was inversely correlated with fat percent, fat mass and leptin. All fat parameters correlated directly with leptin in semen and serum but inversely with serum testosterone. Serum leptin correlated directly with sperm cell concentration in mid-age stallions and inversely with percent of live sperm in old stallions. Semen leptin correlated directly with both percent of live sperm and percent of abnormal sperm in old stallions. Conclusion: This study showed that aging in stallions is related to a drop in fertility and a decrease in body fat and, in turn, leptin. Arab stallions of age 7 to 18 years could be used for breeding efficiently.
Near Zero Emissions at 50 Percent Thermal Efficiency
None, None
2012-12-31
Detroit Diesel Corporation (DDC) has successfully completed a 10 year DOE sponsored heavy-duty truck engine program, hereafter referred to as the NZ-50 program. This program was split into two major phases. The first phase was called Near-Zero Emission at 50 Percent Thermal Efficiency, and was completed in 2007. The second phase was initiated in 2006, and this phase was named Advancements in Engine Combustion Systems to Enable High-Efficiency Clean Combustion for Heavy-Duty Engines. This phase was completed in September, 2010. The key objectives of the NZ-50 program for this first phase were to: Quantify thermal efficiency degradation associated with reduction of engine-out NOx emissions to the 2007 regulated level of ~1.1 g/hp-hr. Implement an integrated analytical/experimental development plan for improving subsystem and component capabilities in support of emerging engine technologies for emissions and thermal efficiency goals of the program. Test prototype subsystem hardware featuring technology enhancements and demonstrate effective application on a multi-cylinder, production feasible heavy-duty engine test-bed. Optimize subsystem components and engine controls (calibration) to demonstrate thermal efficiency that is in compliance with the DOE 2005 Joule milestone, meaning greater than 45% thermal efficiency at 2007 emission levels. Develop technology roadmap for meeting emission regulations of 2010 and beyond while mitigating the associated degradation in engine fuel consumption. Ultimately, develop technical prime-path for meeting the overall goal of the NZ-50 program, i.e., 50% thermal efficiency at 2010 regulated emissions. These objectives were successfully met during the course of the NZ-50 program. The most noteworthy achievements in this program are summarized as follows: Demonstrated technologies through advanced integrated experiments and analysis to achieve the technical objectives of the NZ-50 program with 50.2% equivalent thermal efficiency under
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for a 3-regular graph the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
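The connection between face count and genus that the abstract relies on is Euler's formula; for an embedding of a connected graph with $n$ vertices, $m$ edges, and $f$ faces on an orientable surface of genus $\gamma$:

```latex
n - m + f = 2 - 2\gamma
\quad\Longrightarrow\quad
\gamma = \frac{m - n + 2 - f}{2}.
```

Fewer faces thus means higher genus: a strong embedding with exactly $f = 3$ faces has genus $(m - n - 1)/2$, and since any embedding has $f \geq 1$, the maximum genus is bounded above by $\lfloor (m - n + 1)/2 \rfloor$.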
Luo, Haibiao; Goldstein, Irwin; Udelson, Daniel
2007-05-01
Percent corporal smooth muscle content, a traditional predictor of corporal veno-occlusive function, is measured invasively in the clinic by histomorphometric analyses of erectile tissue biopsies. Cavernosal "expandability", which may be a more physiologically relevant parameter, is a measure of the work performed to achieve penile erection and, as a consequence, an indicator of the ability to approach maximum penile volume at low intracavernosal pressure. To demonstrate that cavernosal "expandability" determined by noninvasive methodology can replace the determination of percent smooth muscle. To predict Young's modulus for the corpora cavernosa in rabbits and thus, by inference, in humans; the latter facilitates the comparison of resistance to penile expansion presented by the tunica vs. cavernosal tissue. A refined three-dimensional formula for cavernosal expandability, defined as the negative reciprocal of the cavernosal bulk modulus in the semierect state, was derived as a function of percent corporal smooth muscle content, using principles of engineering mechanics of materials. The model included Young's modulus, E, for the corpora cavernosa as an unknown parameter. Volume-pressure data obtained from three groups of New Zealand white rabbits: (i) control group (N = 7); (ii) hypercholesterolemic group (N = 5) on 0.5%; (iii) atherosclerotic group (N = 8), were plotted and compared with the model. Data points of mean cavernosal expandability (0.012-0.017 (mm Hg)(-1)) vs. percent trabecular smooth muscle content (33.9-45.4%) for the three groups of rabbits were analyzed. The revised model formula was fitted to the existing rabbit experimental data points, producing a value of Young's modulus equal to 0.01 MPa. Rabbit cavernosal expandability can predict percent smooth muscle content. Cavernosal Young's modulus can be predicted. Further clinical research efforts to provide human data are needed.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
1990-01-01
It is NASA's intent to provide small disadvantaged businesses, including women-owned businesses, historically black colleges and universities, and minority education institutions, the maximum practicable opportunity to receive a fair proportion of NASA prime and subcontracted awards. Annually, NASA will establish socioeconomic procurement goals, including small disadvantaged business goals, with a target of reaching the eight percent level by the end of FY 1994. The NASA Associate Administrators, who are responsible for the programs at the various NASA Centers, will be held accountable for full implementation of the socioeconomic procurement plans. Various aspects of this plan, including its history, are discussed.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views; then, in the second step, the equal posteriors are enforced. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
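For intuition, the Kirchhoff index of a connected graph can also be computed from the nonzero eigenvalues $\mu_i$ of its Laplacian as $Kf(G) = n \sum_i 1/\mu_i$; a minimal numpy sketch (the function name and the example graphs are ours, not from the paper):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph: n times the sum of the
    reciprocals of the nonzero Laplacian eigenvalues."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian L = D - A
    eig = np.linalg.eigvalsh(L)      # eigenvalues in ascending order
    nonzero = eig[1:]                # drop the single (near-)zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# 4-cycle C4: four adjacent pairs at resistance 3/4 plus two opposite
# pairs at resistance 1, so Kf(C4) = 4*(3/4) + 2*1 = 5
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
```

The eigenvalue identity avoids computing all pairwise resistance distances explicitly.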
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Decreasing relative risk premium
Hansen, Frank
2007-01-01
such that the corresponding relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. We find a new characterization of risk vulnerability and determine a large set of utility functions, closed under summation and composition, which are both risk vulnerable...... and have decreasing relative risk premium. We finally introduce the notion of partial risk neutral preferences on binary lotteries and show that partial risk neutrality is equivalent to preferences with decreasing relative risk premium...
Mitigation of maximum world oil production: Shortage scenarios
Hirsch, Robert L. [Management Information Services, Inc., 723 Fords Landing Way, Alexandria, VA 22314 (United States)
2008-02-15
A framework is developed for planning the mitigation of the oil shortages that will be caused by world oil production reaching a maximum and going into decline. To estimate potential economic impacts, a reasonable relationship between percent decline in world oil supply and percent decline in world GDP was determined to be roughly 1:1. As a limiting case for decline rates, giant fields were examined. Actual oil production from Europe and North America indicated significant periods of relatively flat oil production (plateaus). However, before entering its plateau period, North American oil production went through a sharp peak and steep decline. Examination of a number of future world oil production forecasts showed multi-year rollover/roll-down periods, which represent pseudoplateaus. Consideration of resource nationalism posits an Oil Exporter Withholding Scenario, which could potentially overwhelm all other considerations. Three scenarios for mitigation planning resulted from this analysis: (1) A Best Case, where maximum world oil production is followed by a multi-year plateau before the onset of a monotonic decline rate of 2-5% per year; (2) A Middling Case, where world oil production reaches a maximum, after which it drops into a long-term, 2-5% monotonic annual decline; and finally (3) A Worst Case, where the sharp peak of the Middling Case is degraded by oil exporter withholding, leading to world oil shortages growing potentially more rapidly than 2-5% per year, creating the most dire world economic impacts. (author)
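The 2-5% monotonic decline of the Middling Case compounds quickly; a minimal sketch of how it plays out over a decade (the function and variable names are ours, not from the paper):

```python
# Sketch of the Middling Case: a constant monotonic annual decline in world
# oil supply, with the paper's rough 1:1 supply-to-GDP impact mapping.
def supply_after(years, annual_decline):
    """Fraction of the initial oil supply remaining after compounding
    a fixed annual percentage decline."""
    return (1.0 - annual_decline) ** years

low = supply_after(10, 0.02)   # 2%/yr: about 82% of supply remains after a decade
high = supply_after(10, 0.05)  # 5%/yr: about 60% remains
```

Under the 1:1 relationship, the same percentages would apply, roughly, to world GDP.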
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
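The mutual information term in such a regularizer can be estimated from the entropy identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$; a small plug-in estimator for discrete responses (our illustration, not the authors' implementation):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) = H(X) + H(Y) - H(X,Y), in bits,
    from paired samples of two discrete variables."""
    def entropy(items):
        n = len(items)
        return -sum((c / n) * math.log2(c / n) for c in Counter(items).values())
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# A response that perfectly predicts a balanced binary label carries
# 1 bit of information about it.
labels    = [0, 0, 1, 1]
responses = [0, 0, 1, 1]
```

Maximizing this quantity over classifier parameters is what pushes responses to be informative about labels.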
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Decreasing Relative Risk Premium
Hansen, Frank
We consider the risk premium demanded by a decision maker with wealth x in order to be indifferent between obtaining a new level of wealth y1 with certainty, or to participate in a lottery which either results in unchanged present wealth or a level of wealth y2 > y1. We define the relative risk...... premium as the quotient between the risk premium and the increase in wealth y1–x which the decision maker puts on the line by choosing the lottery in place of receiving y1 with certainty. We study preferences such that the relative risk premium is a decreasing function of present wealth, and we determine...... relative risk premium in the small implies decreasing relative risk premium in the large, and decreasing relative risk premium everywhere implies risk aversion. We finally show that preferences with decreasing relative risk premium may be equivalently expressed in terms of certain preferences on risky...
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
None
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least square methods.
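A minimal illustration of the link between maximum likelihood and least squares that the abstract builds on: for a single-parameter linear model with Gaussian white noise, the likelihood is maximized exactly by the least-squares estimate (the model and numbers below are our own toy example, not the paper's system):

```python
import numpy as np

# Toy identification problem: y[k] = a * u[k] + e[k],
# with e[k] zero-mean Gaussian white noise.
rng = np.random.default_rng(0)
a_true = 2.5
u = rng.normal(size=500)                      # input sequence
y = a_true * u + 0.1 * rng.normal(size=500)   # noisy output

# With Gaussian white noise, maximizing the likelihood of the observations
# is equivalent to minimizing the squared residuals, which has the closed
# form a_hat = (u'y) / (u'u).
a_hat = (u @ y) / (u @ u)
```

With correlated or non-Gaussian noise the two estimators diverge, which is where corrections such as the paper's CML become relevant.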
Decreasing Serial Cost Sharing
Hougaard, Jens Leth; Østerdal, Lars Peter
The increasing serial cost sharing rule of Moulin and Shenker [Econometrica 60 (1992) 1009] and the decreasing serial rule of de Frutos [Journal of Economic Theory 79 (1998) 245] have attracted attention due to their intuitive appeal and striking incentive properties. An axiomatic characterization...... of the increasing serial rule was provided by Moulin and Shenker [Journal of Economic Theory 64 (1994) 178]. This paper gives an axiomatic characterization of the decreasing serial rule...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
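For context, the "more typical Chandrasekhar mass limit approach" the author departs from gives, in terms of the mean molecular weight per electron $\mu_e$ (standard textbook scaling):

```latex
M_{\mathrm{Ch}} \approx \frac{5.83}{\mu_e^{2}}\, M_\odot,
```

which is about $1.46\,M_\odot$ for $\mu_e = 2$ and about $1.26\,M_\odot$ for iron ($\mu_e = 56/26 \approx 2.15$), so the paper's value of $1.35\,M_\odot$ sits between the two standard estimates.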
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Titran, Robert H.; Uz, Mehmet
1994-01-01
Effects of thermomechanical processing on the mechanical properties of Nb-1 wt. percent Zr-0.1 wt. percent C, a candidate alloy for use in advanced space power systems, were investigated. Sheet bars were cold rolled into 1-mm thick sheets following single, double, or triple extrusion operations at 1900 K. All the creep and tensile specimens were given a two-step heat treatment of 1 hr at 1755 K plus 2 hr at 1475 K prior to testing. Tensile properties were determined at 300 K as well as at 1350 K. Microhardness measurements were made on cold rolled, heat treated, and crept samples. Creep tests were carried out at 1350 K and 34.5 MPa for times of about 10,000 to 19,000 hr. The results show that the number of extrusions had some effect on both the microhardness and tensile properties. However, the long-time creep behavior of the samples was comparable, and all were found to have adequate properties to meet the design requirements of advanced power systems regardless of thermomechanical history. The results are discussed in relation to processing and microstructure, and further compared with results obtained from testing of Nb-1 wt. percent Zr and Nb-1 wt. percent Zr-0.06 wt. percent C alloys.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Lisianski Island, 2001-2004
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
Evaluation of three percent Aqueous Film Forming Foam (AFFF) concentrates as fire fighting agents
Jablonski, E. J.
1981-04-01
A large-scale fire test program involving 20,000-square foot JP-4 fuel fires was conducted to evaluate the fire suppression effectiveness and compatibility of 3 percent Aqueous Film Forming Foam (AFFF) agents in Air Force fire fighting vehicles. Three commercially available 3 percent AFFF concentrates were tested in accordance with military specification MIL-F-24385B. Test results are summarized in Appendix A. As a result of these tests, an updated Revision C to this MIL SPEC has been accomplished with new requirements for both 3 percent and 6 percent AFFF extinguishing agents.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Midway Atoll, 2002-04
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at St. Rogatien West, 2001
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Niihau, 2005
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Guam, 2003
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at French Frigate Shoals
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Supply Reef
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Stingray Shoals
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Esmerelda Bank
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Santa Rosa Reef
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Necker Island, 2002-2004
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Palmyra Atoll, 2002-2004
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Maro Reef, 2001-2004
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Ta'u
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Ofu & Olosega
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Molokai, 2005
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Pearl and Hermes Atoll, 2002-2004
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Raita Bank, 2001
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Johnston Atoll, 2004
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Farallon de Pajaros (Uracas)
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
CRED Cumulative Map of Percent Scleractinian Coral Cover at Kauai, 2005
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.
Arora, A; Williams, B; Arora, A K; McNamara, R; Yates, J; Fielder, A
2005-01-01
Aim: To determine whether there has been a consistent change across countries and healthcare systems in the frequency of strabismus surgery in children over the past decade. Methods: Retrospective analysis of data on all strabismus surgery performed in NHS hospitals in England and Wales, on children aged 0–16 years between 1989 and 2000, and between 1994 and 2000 in Ontario (Canada) hospitals. These were compared with published data for Scotland, 1989–2000. Results: Between 1989 and 1999–2000 the number of strabismus procedures performed on children, 0–16 years, in England decreased by 41.2% from 15 083 to 8869. Combined medial rectus recession with lateral rectus resection decreased from 5538 to 3013 (45.6%) in the same period. Bimedial recessions increased from 489 to 762, oblique tenotomies from 43 to 121, and the use of adjustable sutures from 29 to 44, in 2000. In Ontario, operations for squint decreased from 2280 to 1685 (26.1%) among 0–16 year olds between 1994 and 2000. Conclusion: The clinical impression of a decrease in the frequency of paediatric strabismus surgery is confirmed. In the authors' opinion this cannot be fully explained by a decrease in births or by the method of healthcare funding. Two factors that might have contributed are better conservative strabismus management and increased subspecialisation, which has improved the quality of surgery and reduced the need for re-operation. This finding has a significant impact upon surgical services and also on the training of ophthalmologists. PMID:15774914
Decreasing relative risk premium
Hansen, Frank
2007-01-01
We consider the risk premium demanded by a decision maker in order to be indifferent between obtaining a new level of wealth with certainty, or participating in a lottery which results either in unchanged wealth or in an even higher level than what can be obtained with certainty. We study preferences such that the corresponding relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. We find a new characterization of risk vulnerability and determine a large set of utility functions, closed under summation and composition, which are both risk vulnerable and have decreasing relative risk premium. We finally introduce the notion of partial risk neutral preferences on binary lotteries and show that partial risk neutrality is equivalent to preferences with decreasing relative risk premium.
Škarabot, Jakob; Vigotsky, Andrew D.; Brown, Amanda Fernandes; Gomes, Thiago Matassoli; Novaes, Jefferson da Silva
2017-01-01
Background: Foam rollers, or other similar devices, are a method for acutely increasing range of motion, but in contrast to static stretching, do not appear to have detrimental effects on neuromuscular performance. Purpose: The purpose of this study was to investigate the effects of different volumes (60 and 120 seconds) of foam rolling of the hamstrings during the inter-set rest period on repetition performance of the knee extension exercise. Methods: Twenty-five recreationally active females were recruited for the study (27.8 ± 3.6 years, 168.4 ± 7.2 cm, 69.1 ± 10.2 kg, 27.2 ± 2.1 kg/m²). Initially, subjects underwent ten-repetition maximum (10 RM) testing and retesting. Thereafter, the experiment involved three sets of knee extensions with a pre-determined 10 RM load to concentric failure with the goal of completing the maximum number of repetitions. During the inter-set rest period, either passive rest or foam rolling of different durations (60 and 120 seconds) in a randomized order was employed. Results: Ninety-five percent confidence intervals revealed dose-dependent, detrimental effects, with more time spent foam rolling resulting in fewer repetitions (Cohen's d of 2.0 and 1.2 for 120 and 60 seconds, respectively, in comparison with passive rest). Conclusion: The results of the present study suggest that more inter-set foam rolling applied to the antagonist muscle group is detrimental to the ability to continually produce force. The finding that inter-set foam rolling of the antagonist muscle group decreases maximum repetition performance has implications for foam rolling prescription and implementation, in both rehabilitation and athletic populations. Level of evidence: 2b. PMID:28217418
Monteiro, Estêvão Rios; Škarabot, Jakob; Vigotsky, Andrew D; Brown, Amanda Fernandes; Gomes, Thiago Matassoli; Novaes, Jefferson da Silva
2017-02-01
Foam rollers, or other similar devices, are a method for acutely increasing range of motion, but in contrast to static stretching, do not appear to have detrimental effects on neuromuscular performance. The purpose of this study was to investigate the effects of different volumes (60 and 120 seconds) of foam rolling of the hamstrings during the inter-set rest period on repetition performance of the knee extension exercise. Twenty-five recreationally active females were recruited for the study (27.8 ± 3.6 years, 168.4 ± 7.2 cm, 69.1 ± 10.2 kg, 27.2 ± 2.1 kg/m²). Initially, subjects underwent ten-repetition maximum (10 RM) testing and retesting. Thereafter, the experiment involved three sets of knee extensions with a pre-determined 10 RM load to concentric failure with the goal of completing the maximum number of repetitions. During the inter-set rest period, either passive rest or foam rolling of different durations (60 and 120 seconds) in a randomized order was employed. Ninety-five percent confidence intervals revealed dose-dependent, detrimental effects, with more time spent foam rolling resulting in fewer repetitions (Cohen's d of 2.0 and 1.2 for 120 and 60 seconds, respectively, in comparison with passive rest). The results of the present study suggest that more inter-set foam rolling applied to the antagonist muscle group is detrimental to the ability to continually produce force. The finding that inter-set foam rolling of the antagonist muscle group decreases maximum repetition performance has implications for foam rolling prescription and implementation, in both rehabilitation and athletic populations. Level of evidence: 2b.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
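The background-only versus background-plus-source comparison described above can be illustrated with a toy single-bin Poisson likelihood ratio using the Cash statistic. This is our schematic of the idea only, not the CSC/Sherpa implementation, which simultaneously fits full spatial models convolved with per-observation PSFs; the function names are ours.

```python
import math

def cash_stat(counts, model):
    # Cash (Poisson maximum-likelihood) fit statistic for a single bin:
    # 2 * (model - counts * ln(model)), dropping the counts-only constant.
    return 2.0 * (model - counts * math.log(model))

def delta_c(n_src, b_est):
    """Improvement in fit statistic when a source term is added.

    n_src: observed counts in the candidate source region
    b_est: background expectation from the background-only fit
    """
    c_bkg = cash_stat(n_src, b_est)
    s_hat = max(0.0, n_src - b_est)          # MLE of the source amplitude
    c_src = cash_stat(n_src, b_est + s_hat)
    return c_bkg - c_src                     # larger => more likely a real source
```

A large drop in the statistic when the source term is added (e.g. `delta_c(20, 5)` is about 25) signals a likely real source; when the observed counts match the background expectation, the drop is zero.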
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Decreasing serial cost sharing
Hougaard, Jens Leth; Østerdal, Lars Peter Raahave
2009-01-01
The increasing serial cost sharing rule of Moulin and Shenker (Econometrica 60:1009-1037, 1992) and the decreasing serial rule of de Frutos (J Econ Theory 79:245-275, 1998) are known by their intuitive appeal and striking incentive properties. An axiomatic characterization of the increasing serial...
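The increasing serial rule referenced here has a compact closed form: with demands ordered q1 ≤ … ≤ qn, agent i's share accumulates increments of the cost function C evaluated at s_k = q1 + … + qk + (n − k)·qk, each increment split among the n − k + 1 agents still "in" at that stage. A minimal sketch assuming that standard Moulin–Shenker formula (the function names are ours):

```python
def serial_cost_shares(demands, C):
    """Increasing serial cost shares (Moulin-Shenker).

    demands: list of individual demands
    C: cost function, C(0) assumed 0
    Returns shares aligned with the demands sorted in increasing order.
    """
    q = sorted(demands)
    n = len(q)
    shares = []
    cum = 0.0          # running sum q_1 + ... + q_k
    s_prev = 0.0       # previous breakpoint s_{k-1}
    share = 0.0        # running share common to all remaining agents
    for k in range(1, n + 1):
        cum += q[k - 1]
        s_k = cum + (n - k) * q[k - 1]
        # Split this cost increment among the n - k + 1 remaining agents.
        share += (C(s_k) - C(s_prev)) / (n - k + 1)
        shares.append(share)
        s_prev = s_k
    return shares
```

For example, with two agents demanding 1 and 3 and cost C(t) = t², the shares come out to 2 and 14, which sum to C(4) = 16, illustrating budget balance.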
Decreasing Serial Cost Sharing
Hougaard, Jens Leth; Østerdal, Lars Peter
The increasing serial cost sharing rule of Moulin and Shenker [Econometrica 60 (1992) 1009] and the decreasing serial rule of de Frutos [Journal of Economic Theory 79 (1998) 245] have attracted attention due to their intuitive appeal and striking incentive properties. An axiomatic characterization...
Dickson, Lisa M.
2006-01-01
The purpose of this study is to determine how ending affirmative action in public colleges in Texas affected the percent of minority high school graduates applying to college. I find the end of affirmative action significantly lowered the percent of Hispanic students applying to college by 1.6 percentage points and significantly lowered the…
Effect of Physical Activity on BMI and Percent Body Fat of Chinese Girls.
Fu, Frank H.; And Others
1995-01-01
This study investigated the effect of regular physical activity on body mass index (BMI) and percent body fat of Chinese girls grouped by age and physical activity patterns. Measurements of skinfold, height, and weight, and BMI calculations, found differences in BMI and percent body fat between active and inactive girls. (SM)
Protein phosphatases decrease their activity during capacitation: a new requirement for this event.
Janetti R Signorelli
There are few reports on the role of protein phosphatases during capacitation. Here, we report on the role of PP2B, PP1, and PP2A during human sperm capacitation. Motile sperm were resuspended in non-capacitating medium (NCM; Tyrode's medium, albumin- and bicarbonate-free) or in reconstituted medium (RCM; NCM plus 2.6% albumin/25 mM bicarbonate). The presence of the phosphatases was evaluated by western blotting and the subcellular localization by indirect immunofluorescence. The function of these phosphatases was analyzed by incubating the sperm with specific inhibitors: okadaic acid, I2, endothall, and deltamethrin. Different aliquots were incubated in the following media: (1) NCM; (2) NCM plus inhibitors; (3) RCM; and (4) RCM plus inhibitors. The percent of capacitated sperm and the phosphatase activities were evaluated using the chlortetracycline assay and a phosphatase assay kit, respectively. The results confirm the presence of PP2B and PP1 in human sperm. We also report the presence of PP2A, specifically, the catalytic subunit and the regulatory subunits PR65 and B. PP2B and PP2A were present in the tail, neck, and postacrosomal region, and PP1 was present in the postacrosomal region, neck, middle, and principal piece of human sperm. Treatment with phosphatase inhibitors rapidly (≤1 min) increased the percent of sperm depicting pattern B, reaching a maximum of ∼40% that was maintained throughout incubation; after 3 h, the percent of capacitated sperm was similar to that of the control. The enzymatic activity of the phosphatases decreased during capacitation without changes in their expression. The pattern of phosphorylation on threonine residues showed a sharp increase upon treatment with the inhibitors. In conclusion, human sperm express PP1, PP2B, and PP2A, and the activity of these phosphatases decreases during capacitation. This decline in phosphatase activities and the subsequent increase in threonine phosphorylation may be an important
Ferrante, J.
1972-01-01
Equilibrium surface segregation of aluminum in a copper-10-atomic-percent-aluminum single crystal alloy oriented in the ⟨111⟩ direction was demonstrated by using Auger electron spectroscopy. This crystal was in the solid solution range of composition. Equilibrium surface segregation was verified by observing that the aluminum surface concentration varied reversibly with temperature in the range 550 to 850 K. These results were curve fitted to an expression for equilibrium grain boundary segregation and gave a retrieval energy of 5780 J/mole (1380 cal/mole) and a maximum frozen-in surface coverage three times the bulk layer concentration. Analyses concerning the relative merits of sputtering calibration and the effects of evaporation are also included.
Hyperglycemia of Diabetic Rats Decreased by a Glucagon Receptor Antagonist
Johnson, David G.; Ulichny Goebel, Camy; Hruby, Victor J.; Bregman, Marvin D.; Trivedi, Dev
1982-02-01
The glucagon analog [1-Nα-trinitrophenylhistidine, 12-homoarginine]glucagon (THG) was examined for its ability to lower blood glucose concentrations in rats made diabetic with streptozotocin. In vitro, THG is a potent antagonist of glucagon activation of the hepatic adenylate cyclase assay system. Intravenous bolus injections of THG caused rapid decreases (20 to 35 percent) of short duration in blood glucose. Continuous infusion of low concentrations of the inhibitor led to larger sustained decreases in blood glucose (30 to 65 percent). These studies demonstrate that a glucagon receptor antagonist can substantially reduce blood glucose levels in diabetic animals without addition of exogenous insulin.
Single-cell concepts for obtaining photovoltaic conversion efficiency over 30 percent
Fan, John C. C.
1985-01-01
Although solar photovoltaic conversion efficiencies over 30 percent (one sun, AM1) can be expected for multiple-cell configurations using spectral splitting techniques, the highest practical single-cell conversion efficiency that can be attained using present concepts is estimated to be about 27-28 percent. To achieve conversion efficiencies above 30 percent using single-cell configurations it will be necessary to employ different concepts, such as spectral compression and broad-band detection. The implementation of these concepts would require major breakthroughs that are not anticipated in the near future.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-prediction filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
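The Levinson recursion mentioned above can be sketched generically: it solves the Toeplitz normal equations of a prediction-error filter order by order, and the reflection coefficient staying below 1 in magnitude is exactly what keeps the recursion stable. This is a standard Levinson–Durbin sketch of ours, not the authors' code.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a prediction-error filter.

    r: autocorrelation sequence, r[0..order]
    Returns the filter coefficients a (with a[0] = 1) and the prediction error.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] + sum(a[i] * r[k - i] for i in range(1, k))
        refl = -acc / err            # reflection coefficient, |refl| < 1
        prev = a.copy()
        for i in range(1, k + 1):    # order-update of the filter taps
            a[i] = prev[i] + refl * prev[k - i]
        err *= 1.0 - refl ** 2       # prediction error shrinks each order
    return a, err
```

For an AR(1) autocorrelation r = (1, 0.5, 0.25) the recursion recovers the single tap −0.5 and a prediction error of 0.75, with the order-2 tap vanishing.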
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
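The calculation described, maximizing P(V) = V·I(V), can be illustrated with a toy single-diode panel model. All parameter values here (photocurrent, saturation current, thermal voltage) are made up for illustration, and a fine numerical scan of P(V) stands in for the symbolic differentiation the project uses.

```python
import math

def panel_current(v, i_ph=5.0, i_0=1e-9, v_t=0.7):
    # Ideal single-diode model: I = I_ph - I_0 * (exp(V / V_t) - 1).
    # Parameter values are illustrative, not from any real panel.
    return i_ph - i_0 * (math.exp(v / v_t) - 1.0)

def max_power_point(v_hi=20.0, n=200000):
    """Locate the voltage where dP/dV = 0 by scanning P(V) = V * I(V)."""
    best_v = best_p = 0.0
    for i in range(n + 1):
        v = v_hi * i / n
        p = v * panel_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p
```

With these parameters the maximum power point sits just below the open-circuit voltage, where the product of voltage and current peaks before the diode term collapses the current.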
EnviroAtlas - Woodbine, IA - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - New Bedford, MA - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Durham, NC - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Austin, TX - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Fresno, CA - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Paterson, NJ - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Portland, OR - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Memphis, TN - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Tampa, FL - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Milwaukee, WI - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
20 percent lower lung cancer mortality with low-dose CT vs chest X-ray
Scientists have found a 20 percent reduction in deaths from lung cancer among current or former heavy smokers who were screened with low-dose helical computed tomography (CT) versus those screened by chest X-ray.
EnviroAtlas - Cleveland, OH - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Green Bay, WI - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas Estimated Percent Tree Cover Along Walkable Roads Web Service
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
Map of percent scleractinian coral cover and sand along camera tow tracks in west Hawaii
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral and sand overlaid on bathymetry and landsat imagery northwest...
Map of percent scleractinian coral cover along camera tow tracks in west Hawaii
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry and landsat imagery northwest of...
EnviroAtlas - New York, NY - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5 meter...
Stellwagen Bank bathymetry - Percent slope derived from 5-meter bathymetric contour lines
National Oceanic and Atmospheric Administration, Department of Commerce — Percent slope of Stellwagen Bank bathymetry. Raster derived from 5-meter bathymetric contour lines (Quads 1-18). Collected on surveys carried out in 4 cruises 1994 -...
EnviroAtlas - Pittsburgh, PA - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5-meter...
U.S. Geological Survey, Department of the Interior — This portion of the data release presents riparian plant species abundance (percent cover) data from plots sampled in the Elwha River estuary, Washington, in 2007...
EnviroAtlas - Phoenix, AZ - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5-meter...
EnviroAtlas - Des Moines, IA - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5-meter...
EnviroAtlas - Portland, ME - Estimated Percent Tree Cover Along Walkable Roads
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in an 8.5-meter...
EnviroAtlas - Percent Urban Land Cover by 12-Digit HUC for the Conterminous United States
U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the percent urban land for each 12-digit hydrologic unit code (HUC) in the conterminous United States. For the purposes of this...
Sinopec's Net Profit Slumps 35.04 Percent in Q1
2012-01-01
Sinopec Corp., Asia's largest oil refiner, announced that its net profit slumped 35.04 percent year on year to 13.41 billion yuan (US$2.13 billion) in the first quarter amid rising operation costs and diminishing profit margins. Business earnings during the period dropped 28.99 percent year on year to 21.81 billion yuan, the company said in its quarterly report filed with the Shanghai Stock Exchange.
WHK Student Internship Enrollment, Mentor Participation Up More than 50 Percent | Poster
By Nancy Parrish, Staff Writer The Werner H. Kirsten Student Internship Program (WHK SIP) has enrolled the largest class ever for the 2013–2014 academic year, with 66 students and 50 mentors. This enrollment reflects a 53 percent increase in students and a 56 percent increase in mentors, compared to 2012–2013 (43 students and 32 mentors), according to Julie Hartman, WHK SIP director.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm, which uses two maximum dynamic flow algorithms, is proposed to solve the problem.
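The approach above repeatedly solves maximum-flow subproblems. As a rough illustration of that static building block (not the authors' dynamic-flow algorithm; the network and its adjacency-matrix representation are hypothetical), here is a minimal Edmonds-Karp maximum-flow sketch:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximum
            return total
        # collect the path edges, find the bottleneck, then push flow
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += push
            flow[v][u] -= push       # residual capacity for undoing flow
        total += push

# Hypothetical 4-node network, source 0, sink 3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # prints 5
```

The min cut here is the set of arcs leaving the source (capacity 3 + 2 = 5), which the computed maximum flow matches.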
Microleakage of composite resin restorations with a 10 percent maleic acid etchant.
Gilpatrick, R O; Owens, B M; Kaplan, I; Cook, G
1996-04-01
Microleakage of Class V composite resin restorations with margins entirely in enamel was compared in this in-vitro study using Scotchbond MultiPurpose Adhesive (SMP) (3M Corp.) and Scotchbond II (SB II) (3M Corp.). Twenty extracted human molars were randomly separated into two groups: Group One, which used the SMP system, and Group Two, which used the SB II system. Circular Class V preparations were cut 1.8 mm deep and 3 mm in diameter using a #556 fissure bur. Cavosurface margins, all in enamel, were beveled. The enamel and dentin were treated following the manufacturer's directions for each group, and a microfilled composite resin, Silux Plus (3M Corp.), was applied in two hand-placed increments. All teeth were finished with Sof-Lex discs, stored in water for seven days, then thermocycled in a water bath for 100 cycles, alternating from 4 degrees C to 58 degrees C. The teeth were placed in a 5 percent solution of methylene blue, rinsed, and then invested in resin. All teeth were sectioned vertically and horizontally, and a ratio (percentage) of wall length to amount of leakage along each wall was established. The overall mean leakage of Group One was 15.27 percent and of Group Two was 13.84 percent. Looking at individual walls, the mean occlusal wall leakage of Group One was 28.41 percent and of Group Two was 12.45 percent. Mean gingival wall leakage of Group One was 15.96 percent and of Group Two was 21.80 percent. Comparing the two groups using a Student's t-test, there was no significant difference between the overall mean leakage or between the gingival wall leakage (p > 0.05); however, there was a significant difference between the occlusal wall leakage (p < 0.05), with SMP exhibiting more leakage.
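The group comparison above rests on a Student's t-test. As a minimal illustration of the statistic involved (not the authors' analysis; the sample values in the assertion-free usage line are made up), the pooled equal-variance two-sample t statistic can be computed as:

```python
import math

def pooled_t(a, b):
    """Two-sample Student's t statistic with pooled variance
    (equal-variance form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # pooled variance estimate from both groups' sums of squares
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical per-tooth leakage percentages for two adhesive groups
t_stat = pooled_t([28.4, 30.1, 26.8], [12.5, 13.9, 11.0])
```

The resulting t is then compared against the t distribution with na + nb - 2 degrees of freedom to obtain the p-value.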
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet in MATLAB, and the energy distribution curves at different frequency bands were obtained. Finally, the variation of the energy distribution of blasting vibration signals with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not depend on the maximum decking charge.
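The band-energy analysis above relies on a wavelet packet decomposition in MATLAB. As a toy stand-in (pure Python, illustrative only, not the paper's method), a single Haar analysis step already splits a signal into low- and high-frequency bands whose energies can be compared:

```python
import math

def haar_step(x):
    """One Haar analysis step (length of x must be even):
    returns (approximation, detail) coefficient lists at half length."""
    a = [(x[2*i] + x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def band_energy_ratio(signal):
    """Fraction of total signal energy in the high-frequency (detail) band."""
    a, d = haar_step(signal)
    e_low = sum(v * v for v in a)
    e_high = sum(v * v for v in d)
    return e_high / (e_low + e_high)
```

A constant signal puts all of its energy in the low band (ratio 0), while a maximally oscillating one puts it all in the high band (ratio 1); a full wavelet packet tree just iterates this split on both outputs.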
Body mass index and percent body fat: a meta analysis among different ethnic groups.
Deurenberg, P; Yap, M; van Staveren, W A
1998-12-01
To study the relationship between percent body fat and body mass index (BMI) in different ethnic groups and to evaluate the validity of the BMI cut-off points for obesity. Meta analysis of literature data. Populations of American Blacks, Caucasians, Chinese, Ethiopians, Indonesians, Polynesians and Thais. Mean values of BMI, percent body fat, gender and age were adapted from original papers. The relationship between percent body fat and BMI differs in the ethnic groups studied. For the same level of body fat, age and gender, American Blacks have a 1.3 kg/m2 and Polynesians a 4.5 kg/m2 lower BMI compared to Caucasians. By contrast, in Chinese, Ethiopians, Indonesians and Thais BMIs are 1.9, 4.6, 3.2 and 2.9 kg/m2 lower compared to Caucasians, respectively. Slight differences in the relationship between percent body fat and BMI of American Caucasians and European Caucasians were also found. The differences found in the body fat/BMI relationship in different ethnic groups could be due to differences in energy balance as well as to differences in body build. The results show that the relationship between percent body fat and BMI is different among different ethnic groups. This should have public health implications for the definitions of BMI cut-off points for obesity, which would need to be population-specific.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
WHY WE NEED 100 PERCENT RENEWABLE ENERGIES: A PLEA FOR THE ENERGIEWENDE
Christian Hinsch
2014-05-01
Those familiar with the fifth Intergovernmental Panel on Climate Change report presented in late 2013 can no longer seriously doubt that climate change has become a reality. Although the issue has been the subject of several high-profile international conferences, little has been achieved so far. Fossil power plants still continue to emit massive amounts of greenhouse gases, further accelerating climate change. There is, however, an alternative to our current climate-damaging way of energy production: the complete transition towards 100 percent renewable energies. This paper examines the way in which an industrialized country like Germany can become 100 percent renewable by 2020.
Temporal Decrease in Upper Atmospheric Chlorine
Froidevaux, L.; Livesey, N. J.; Read, W. G.; Salawitch, R. J.; Waters, J. W.; Drouin, B.; MacKenzie, I. A.; Pumphrey, H. C.; Bernath, P.; Boone, C.;
2006-01-01
We report a steady decrease in the upper stratospheric and lower mesospheric abundances of hydrogen chloride (HCl) from August 2004 through January 2006, as measured by the Microwave Limb Sounder (MLS) aboard the Aura satellite. For 60(deg)S to 60(deg)N zonal means, the average yearly change in the 0.7 to 0.1 hPa (approx. 50 to 65 km) region is -27 +/- 3 pptv/year, or -0.78 +/- 0.08 percent/year. This is consistent with surface abundance decrease rates (about 6 to 7 years earlier) in chlorine source gases. The MLS data confirm that international agreements to reduce global emissions of ozone-depleting industrial gases are leading to global decreases in the total gaseous chlorine burden. Tracking stratospheric HCl variations on a seasonal basis is now possible with MLS data. Inferred stratospheric total chlorine (Cl_TOT) has a value of 3.60 ppbv at the beginning of 2006, with a (2-sigma) accuracy estimate of 7%; the stratospheric chlorine loading has decreased by about 43 pptv in the 18-month period studied here. We discuss the MLS HCl measurements in the context of other satellite-based HCl data, as well as expectations from surface chlorine data. A mean age of air of approx. 5.5 years and an age spectrum width of 2 years or less provide a fairly good fit to the ensemble of measurements.
Achieving Maximum Power from Thermoelectric Generators with Maximum-Power-Point-Tracking Circuits Composed of a Boost-Cascaded-with-Buck Converter
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
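The maximum-power-point-tracking idea above can be caricatured with a perturb-and-observe loop. The sketch below abstracts away the boost-cascaded-with-buck converter and treats the controller as setting an effective load resistance on a Thevenin-model generator; the values of v_oc and r_int are hypothetical, not from the paper:

```python
def teg_power(r_load, v_oc=5.0, r_int=2.0):
    """Power a Thevenin-model TEG (open-circuit voltage v_oc, internal
    resistance r_int) delivers into an effective load resistance r_load."""
    i = v_oc / (r_int + r_load)
    return i * i * r_load

def perturb_and_observe(steps=500, r=0.5, dr=0.02):
    """Hill-climb the effective load resistance toward the maximum power point."""
    direction = 1
    p_prev = teg_power(r)
    for _ in range(steps):
        r += direction * dr
        p = teg_power(r)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return r, p_prev

r_mpp, p_mpp = perturb_and_observe()
# Theory: maximum power at r_load == r_int, P_max = v_oc**2 / (4 * r_int)
```

The loop settles into a small limit cycle around the matched-load point r_load = r_int = 2.0 ohms, delivering close to the theoretical 3.125 W; in a real converter the perturbed quantity is the duty ratio, and, as the paper argues, the monitored power should be the output power when switch resistances are non-ideal.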
Boyte, Stephen P.; Wylie, Bruce K.; Major, Donald J.
2016-01-01
Cheatgrass (Bromus tectorum L.) is a highly invasive species in the Northern Great Basin that helps decrease fire return intervals. Fire fragments the shrub steppe and reduces its capacity to provide forage for livestock and wildlife and habitat critical to sagebrush obligates. Of particular interest is the greater sage grouse (Centrocercus urophasianus), an obligate whose populations have declined so severely due, in part, to increases in cheatgrass and fires that it was considered for inclusion as an endangered species. Remote sensing technologies and satellite archives help scientists monitor terrestrial vegetation globally, including cheatgrass in the Northern Great Basin. Along with geospatial analysis and advanced spatial modeling, these data and technologies can identify areas susceptible to increased cheatgrass cover and compare these with greater sage grouse priority areas for conservation (PAC). Future climate models forecast a warmer and wetter climate for the Northern Great Basin, which likely will force changing cheatgrass dynamics. Therefore, we examine potential climate-caused changes to cheatgrass. Our results indicate that future cheatgrass percent cover will remain stable over more than 80% of the study area when compared with recent estimates, and higher overall cheatgrass cover will occur with slightly more spatial variability. The land area projected to increase or decrease in cheatgrass cover equals 18% and 1%, respectively, making an increase in fire disturbances in greater sage grouse habitat likely. Relative susceptibility measures, created by integrating cheatgrass percent cover and temporal standard deviation datasets, show that potential increases in future cheatgrass cover match future projections. This discovery indicates that some greater sage grouse PACs for conservation could be at heightened risk of fire disturbance. Multiple factors will affect future cheatgrass cover including changes in precipitation timing and totals and
European Community Can Reduce CO2 Emissions by Sixty Percent : A Feasibility Study
Mot, E.; Bartelds, H.; Esser, P.M.; Huurdeman, A.J.M.; Laak, P.J.A. van de; Michon, S.G.L.; Nielen, R.J.; Baar, H.J.W. de
1993-01-01
Carbon dioxide (CO2) emissions in the European Community (EC) can be reduced by roughly 60 percent. A great many measures need to be taken to reach this reduction, with a total annual cost of ECU 55 milliard. Fossil fuel use is the main cause of CO2 emissions into the atmosphere; CO2 emissions are t
Five Percent Post Survey Check Of National Family Health Survey (NFHS In ORISSA
Kumar Benera Sudhir
1999-01-01
Research question: How well does a post-survey sample check of NFHS correlate with the findings of NFHS? Objective: Post-survey check of the National Family Health Survey carried out in 1992-93. Study design: Multistage sampling method with a 5 percent sample of the original NFHS sample. Setting: The study covered a 5 percent sample of the original NFHS sample. Subjects: A five percent household sample (1093 members) of the original NFHS sample was studied and compared with NFHS data. Method: Information from five percent of the NFHS households on items in which either no change was likely or change was likely to be only in one direction, such as age group, sex ratio, literacy, and family planning knowledge and adoption, was collected in a predesigned questionnaire and compared with NFHS data. Results: The demographic characteristics were similar to those of NFHS. TFR and the number of children ever borne were also found to be the same. The awareness of FP methods and their use were within the acceptable margin of error. Thus, on comparison of the post-survey check data with the NFHS sample, the error was within the acceptable margin.
After-Tax Profit of Kenya Airways for 2010-11 Financial Year Increases 73 Percent
Anonymous
2011-01-01
Kenya Airways is the pride of the whole African continent. Recently, Kenya Airways announced its after-tax profits for the 2010-11 fiscal year increased 73 percent. The airline's CEO and General Manager Titus Naikuni attributes the greatest part of the
13 CFR 107.1410 - Requirement to redeem 4 percent Preferred Securities.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Requirement to redeem 4 percent Preferred Securities. 107.1410 Section 107.1410 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage)...
13 CFR 107.1400 - Dividends or partnership distributions on 4 percent Preferred Securities.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Dividends or partnership distributions on 4 percent Preferred Securities. 107.1400 Section 107.1400 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for...
13 CFR 107.1420 - Articles requirements for 4 percent Preferred Securities.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Articles requirements for 4 percent Preferred Securities. 107.1420 Section 107.1420 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage)...
Bonjour, Jessica L.; Pitzer, Joy M.; Frost, John A.
2015-01-01
Mole to gram conversions, density, and percent composition are fundamental concepts in first year chemistry at the high school or undergraduate level; however, students often find it difficult to engage with these concepts. We present a simple laboratory experiment utilizing portable nuclear magnetic resonance spectroscopy (NMR) to determine the…
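Percent composition, one of the concepts this lab targets, reduces to simple molar-mass arithmetic. A minimal sketch (atomic masses rounded; ethanol chosen arbitrarily as the example compound, not taken from the article):

```python
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}  # g/mol

def percent_composition(counts):
    """Mass percent of each element from a dict of element -> atom count."""
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in counts.items())
    return {el: 100 * ATOMIC_MASS[el] * n / molar_mass
            for el, n in counts.items()}

# Ethanol, C2H5OH: molar mass about 46.07 g/mol
comp = percent_composition({"C": 2, "H": 6, "O": 1})
```

For ethanol this gives roughly 52.1% C, 13.1% H, and 34.7% O, and the percentages necessarily sum to 100.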
Karimi, Hamid; Jones, Mark; O'Brian, Sue; Onslow, Mark
2014-01-01
Background: At present, percent syllables stuttered (%SS) is the gold standard outcome measure for behavioural stuttering treatment research. However, ordinal severity rating (SR) procedures have some inherent advantages over that method. Aims: To establish the relationship between Clinician %SS, Clinician SR and self-reported Speaker SR. To…
Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study
Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...
Field method to measure changes in percent body fat of young women: The TIGER Study
Body mass index (BMI), waist (W) and hip (H) circumference (C) are commonly used to assess changes in body composition for field research. We developed a model to estimate changes in dual energy X-ray absorption (DXA) percent fat (% fat) from these variables with a diverse sample of young women fro...
PETROCHINA'S OIL AND GAS PRODUCTION GROWS 5.3 PERCENT IN FIRST THREE QUARTERS
Anonymous
2005-01-01
PetroChina announced its business results of the first three quarters of 2005 in mid-October. Based on the statistical figures made available from China's No. 1 oil producer, the January-September oil and gas production targets rose 5.3 percent as compared to the same period of the previous year.
Ramful, Ajay; Bedgood, Danny; Lowrie, Thomas
2016-01-01
This paper is the outcome of a collaborative endeavour between mathematics and science educators where the insight from each field mutually informed one another. Specifically, building on the knowledge base from mathematics education research, this study analyses the ways in which percent is interpreted by first year university students in general…
The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor
Gordon, James; Chancey, Katherine
2005-01-01
In this general chemistry laboratory experiment, students determine the percent of oxygen in air by comparing the results calculated from pressure measurements obtained with calculator-based systems to those obtained with a water-measurement method. This experiment allows students to explore a fundamental reaction…
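Under the usual classroom assumption that the reaction consumes only the oxygen in a sealed vessel at constant temperature and volume, the percent of O2 follows directly from the fractional pressure drop. A minimal sketch with hypothetical sensor readings in kPa (not data from the article):

```python
def percent_oxygen(p_initial, p_final):
    """Percent O2 from the pressure drop when only oxygen is consumed
    (constant temperature and volume assumed, so moles scale with pressure)."""
    return 100 * (p_initial - p_final) / p_initial

# Hypothetical readings before and after the reaction, in kPa
estimate = percent_oxygen(101.3, 80.1)   # close to the accepted ~20.9%
```

Students can then compare this pressure-based estimate with the volume change measured in the water-displacement variant of the experiment.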
New Twists Mark the Debate over Texas' Top 10-Percent Plan
Schmidt, Peter
2008-01-01
Born out of one legal battle over affirmative action, the Texas college-admissions policy known as the "top 10 percent plan" is now at the center of another. The University of Texas at Austin is being challenged in U.S. District Court over its 2004 decision to return to using race-conscious admissions criteria after years without them.…
5 CFR 2636.304 - The 15 percent limitation on outside earned income.
2010-01-01
... ETHICS LIMITATIONS ON OUTSIDE EARNED INCOME, EMPLOYMENT AND AFFILIATIONS FOR CERTAIN NONCAREER EMPLOYEES Outside Earned Income Limitation and Employment and Affiliation Restrictions Applicable to Certain... calendar year which exceeds 15 percent of the annual rate of basic pay for level II of the...
26 CFR 1.382-3 - Definitions and rules relating to a 5-percent shareholder.
2010-04-01
... that, instead of an investment advisor recommending that clients purchase L stock, the trustee of... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Definitions and rules relating to a 5-percent shareholder. 1.382-3 Section 1.382-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE...
Radial growth and percent of latewood in Scots pine provenance trials in Western and Central Siberia
S. R. Kuzmin
2016-12-01
Percent of latewood of the Boguchany and Suzun Scots pine climatypes has been studied in two provenance trials (place of origin and place of trial). For the Boguchany climatype, the place of origin is the southern taiga of Central Siberia (Krasnoyarsk Krai) and the place of trial is the forest-steppe zone of Western Siberia (Novosibirsk Oblast); vice versa for the Suzun climatype, the forest-steppe zone of Western Siberia is the place of origin and the southern taiga is the place of trial. Comparison of annual average values of latewood percent of the Boguchany climatype in the southern taiga and the forest-steppe revealed the same value, 19 %. Annual variability of this trait is distinctly lower in the southern taiga, 17 %, than in the forest-steppe, 35 %. Average annual values of latewood percent of the Suzun climatype in the place of origin and the place of trial are close (20 and 21 %). Variability of this trait is higher for the Suzun climatype than for the Boguchany climatype: 23 % in the southern taiga and 42 % in the forest-steppe. Climatic conditions in the southern taiga of Central Siberia, in comparison with the forest-steppe of Western Siberia, strengthen the differences between climatypes. These differences are expressed in different ages of maximum diameter increment, different tree-ring width and latewood percent values, and different latewood responses to weather conditions.
Identification of a novel percent mammographic density locus at 12q24.
Stevens, Kristen N; Lindstrom, Sara; Scott, Christopher G; Thompson, Deborah; Sellers, Thomas A; Wang, Xianshu; Wang, Alice; Atkinson, Elizabeth; Rider, David N; Eckel-Passow, Jeanette E; Varghese, Jajini S; Audley, Tina; Brown, Judith; Leyland, Jean; Luben, Robert N; Warren, Ruth M L; Loos, Ruth J F; Wareham, Nicholas J; Li, Jingmei; Hall, Per; Liu, Jianjun; Eriksson, Louise; Czene, Kamila; Olson, Janet E; Pankratz, V Shane; Fredericksen, Zachary; Diasio, Robert B; Lee, Adam M; Heit, John A; DeAndrade, Mariza; Goode, Ellen L; Vierkant, Robert A; Cunningham, Julie M; Armasu, Sebastian M; Weinshilboum, Richard; Fridley, Brooke L; Batzler, Anthony; Ingle, James N; Boyd, Norman F; Paterson, Andrew D; Rommens, Johanna; Martin, Lisa J; Hopper, John L; Southey, Melissa C; Stone, Jennifer; Apicella, Carmel; Kraft, Peter; Hankinson, Susan E; Hazra, Aditi; Hunter, David J; Easton, Douglas F; Couch, Fergus J; Tamimi, Rulla M; Vachon, Celine M
2012-07-15
Percent mammographic density adjusted for age and body mass index (BMI) is one of the strongest risk factors for breast cancer and has a heritable component that remains largely unidentified. We performed a three-stage genome-wide association study (GWAS) of percent mammographic density to identify novel genetic loci associated with this trait. In stage 1, we combined three GWASs of percent density comprised of 1241 women from studies at the Mayo Clinic and identified the top 48 loci (99 single nucleotide polymorphisms). We attempted replication of these loci in 7018 women from seven additional studies (stage 2). The meta-analysis of stage 1 and 2 data identified a novel locus, rs1265507 on 12q24, associated with percent density, adjusting for age and BMI (P = 4.43 × 10(-8)). We refined the 12q24 locus with 459 additional variants (stage 3) in a combined analysis of all three stages (n = 10 377) and confirmed that rs1265507 has the strongest association in the 12q24 region (P = 1.03 × 10(-8)). Rs1265507 is located between the genes TBX5 and TBX3, which are members of the phylogenetically conserved T-box gene family and encode transcription factors involved in developmental regulation. Understanding the mechanism underlying this association will provide insight into the genetics of breast tissue composition.
Allafi, Ahmad R.
Nylon 6 organoclay nanocomposites were prepared by melt processing using a twin screw extruder. Five different films were produced with five different % loadings (0, 2, 4, 6, and 8%). This study had three main objectives. The first was to investigate the effects of loading percentages on the barrier, thermal and mechanical properties of nylon 6 nanocomposite materials. The second was to study the effects of 0, 50 and 80% RH on the oxygen permeation of the nylon 6/nanocomposite films. The third was to investigate the properties of nylon 6 nanocomposite materials exposed to typical food processing conditions. These films were tested for their permeabilities to oxygen (OTR), carbon dioxide (CO2TR), and water vapor (WVTR). Thermal properties testing on the samples included differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). Tensile strength at break, tensile modulus at break, and the percent elongation for the five films were examined using an INSTRON tester. Transmission electron microscopy (TEM) was used to investigate the morphology of the five films. Results showed that all gas barrier properties significantly increased with percent loading, but there were no significant differences (P > 0.05) between the 6 and 8% loadings for the CO2TR. For the DMA, the storage modulus also significantly increased (P < 0.05) with increasing loading except between the 2 and 4% concentrations. For the DSC analyses, enthalpy of fusion decreased slightly from an average of 39 J/g (control) to 32 J/g (8% loading). The melt temperature also decreased from 227 to 222°C between those loadings. High-pressure-processed samples had the highest barrier against oxygen permeation when compared with the retorted and control samples. Retorting seemed to reduce the tensile strength slightly; however, no significant changes in modulus and elongation occurred after retorting and HPP. These results showed that increasing percent loadings increased the stiffness of the material at the expense of its
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
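The dual idea, iterating on constraint multipliers instead of on the image itself, can be illustrated on a toy problem: the maximum-entropy distribution on a finite support with a prescribed mean has the exponential form p_i proportional to exp(lam * x_i), and the single dual variable lam can be found by bisection on the monotone dual gradient. This is a one-constraint analogue for illustration, not Smith's Fourier-synthesis restoration algorithm:

```python
import math

def maxent_with_mean(support, target_mean, iters=100):
    """Maximum-entropy distribution on `support` with a fixed mean.
    Solves the one-dimensional dual problem: p_i ~ exp(lam * x_i).
    target_mean must lie strictly between min and max of the support."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -50.0, 50.0
    for _ in range(iters):           # bisection: mean_for is increasing in lam
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# Die faces with a biased mean of 4.5: a tilted exponential-family distribution
support = [1, 2, 3, 4, 5, 6]
probs = maxent_with_mean(support, 4.5)
```

With the unbiased target mean 3.5, the same routine recovers the uniform distribution, as maximum entropy demands when the constraint is inactive.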
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
Predicting fat percent by skinfolds in racial groups: Durnin and Womersley revisited.
Davidson, Lance E; Wang, Jack; Thornton, John C; Kaleem, Zafar; Silva-Palacios, Federico; Pierson, Richard N; Heymsfield, Steven B; Gallagher, Dympna
2011-03-01
Despite their widespread use in research and fitness settings, Durnin and Womersley's (DW) 1974 prediction equations using skinfold thickness to estimate body fat percent by hydrodensitometry have not been systematically evaluated in racial or ethnic groups using body fat percent measured by dual-energy x-ray absorptiometry (%BF(DXA)) as the standard. This cross-sectional, population-based study examined whether the DW skinfold equations predict %BF(DXA) in a large, multiracial sample. Four skinfold measures (biceps, triceps, subscapular, and suprailiac), other clinical anthropometrics, and %BF(DXA) were obtained from 1675 healthy adults, age 18-110 yr, who were classified into four racial or ethnic categories: Caucasian, African American, Hispanic, or Asian. Predicted body fat percent using DW equations was compared with %BF(DXA) and evaluated within race/ethnicity- and sex-specific groups. Mean body fat percent predicted by DW equations was significantly different from %BF(DXA) in four of eight race/ethnicity- and sex-specific groups, particularly in Asian women and African American men (3.3 and 2.4 percentage point overestimates, respectively, P < 0.0001). New linear regression equations were developed estimating %BF(DXA) specific to each race/ethnicity and sex group, using the original DW skinfold sites. Body weight, height, and waist circumference independently predicted fat percent and were also included in the new equations. The 1974 DW equations did not predict %BF(DXA) uniformly in all races or ethnicities. Using %BF(DXA) as the criterion measure, the original DW skinfold equations have been updated specific to sex and race/ethnicity while maintaining the DW options for a minimalistic model using fewer predictors.
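Equations of the DW type come from ordinary least squares on skinfold-derived predictors (DW regressed body density on the logarithm of the summed skinfolds). A minimal OLS sketch with made-up data, not the study's coefficients or measurements:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b * x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical data: x = log10 of the four-skinfold sum, y = DXA percent fat
a, b = fit_line([1.0, 1.2, 1.4, 1.6], [18.0, 23.0, 28.0, 33.0])
```

In practice each race/ethnicity- and sex-specific equation is fitted separately, which is exactly why a single pooled equation (like the original DW ones) can be biased in some groups.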
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally incorporates the requirement of maximum entropy, the characteristics of the system, and the constraint conditions, and can therefore be applied to the statistical description of both closed and open systems. Examples are considered in which MENT is used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
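As a concrete illustration (not taken from the paper): the maximum-entropy distribution over a discrete set of values subject to a fixed mean is a Gibbs-type exponential family, and its Lagrange multiplier can be found numerically. A stdlib-only sketch:

```python
import math

def maxent_distribution(values, mean_target, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over discrete `values` with a fixed
    mean: p_i proportional to exp(-lam * x_i), with the multiplier `lam`
    found by bisection on the resulting mean."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # mean_for is decreasing in lam, so move the bracket accordingly
        if mean_for(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# An unconstraining mean (3.5 for six equally spaced outcomes) recovers
# the uniform law, which has the largest possible entropy.
p = maxent_distribution([1, 2, 3, 4, 5, 6], 3.5)
```

Tightening the mean constraint away from 3.5 skews the distribution exponentially while still maximizing entropy among all laws with that mean.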
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, in an alphabet of size M, transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structures depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length greater than 476.70 mm were definitely male and those less than 379.99 mm were definitely female; for left bones, femora with maximum length greater than 484.49 mm were definitely male and those less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
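Demarking-point analysis places thresholds at the opposite sex's mean ± 3 SD. The abstract reports only the means, but the right-femur standard deviations implied by the reported demarking points can be back-solved; a sketch:

```python
def demarking_points(mean_m, sd_m, mean_f, sd_f):
    """Demarking points: a femur longer than the female mean + 3 SD is
    classified male; shorter than the male mean - 3 SD, female."""
    return mean_f + 3.0 * sd_f, mean_m - 3.0 * sd_m

def classify(length_mm, dp_male, dp_female):
    if length_mm > dp_male:
        return "male"
    if length_mm < dp_female:
        return "female"
    return "indeterminate"

# Right-femur means from the abstract; the SDs (23.94 and 19.74 mm) are
# back-solved from the reported demarking points of 476.70 and 379.99 mm.
dp_m, dp_f = demarking_points(451.81, 23.94, 417.48, 19.74)
```

The small identification percentages reported above reflect how few bones fall outside the overlap of the two sex distributions.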
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently (low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses), models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us to obtain, for the first time, observational spatial and temporal distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
2010-07-01
... curtain incinerators that burn 100 percent yard waste? 60.1450 Section 60.1450 Protection of Environment... Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1450 How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste? (a) Use EPA Reference Method 9 in appendix A...
2010-07-01
... curtain incinerators that burn 100 percent yard waste? 60.1445 Section 60.1445 Protection of Environment... Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1445 What are the emission limits for air curtain incinerators that burn 100 percent yard waste? If your air curtain incinerator...
2010-07-01
... curtain incinerators that burn 100 percent yard waste? 60.1920 Section 60.1920 Protection of Environment... or Before August 30, 1999 Model Rule-Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1920 What are the emission limits for air curtain incinerators that burn 100 percent yard waste?...
2010-07-01
... curtain incinerators that burn 100 percent yard waste? 62.15375 Section 62.15375 Protection of Environment... Combustion Units Constructed on or Before August 30, 1999 Air Curtain Incinerators That Burn 100 Percent Yard Waste § 62.15375 What are the emission limits for air curtain incinerators that burn 100 percent...
2010-07-01
... curtain incinerators that burn 100 percent yard waste? 60.1925 Section 60.1925 Protection of Environment... or Before August 30, 1999 Model Rule-Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1925 How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste? (a)...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. A general relation has therefore been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; that functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input consists of triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, each distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
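The basic consistency test underlying such algorithms is cheap: a rooted tree displays the triplet ab|c exactly when the lowest common ancestor (LCA) of a and b lies strictly below the LCA of all three leaves. A small illustrative check (not the paper's algorithm):

```python
def lca(parent, u, v):
    """Lowest common ancestor in a rooted tree given as a child->parent map."""
    anc = set()
    while u is not None:
        anc.add(u)
        u = parent.get(u)
    while v not in anc:
        v = parent[v]
    return v

def displays_triplet(parent, a, b, c):
    """The tree displays ab|c iff lca(a, b) lies strictly below lca(a, b, c)."""
    ab = lca(parent, a, b)
    return ab != lca(parent, ab, c)

# The tree ((a,b),c): internal node x = lca(a, b), root r = lca of all three.
parent = {"a": "x", "b": "x", "x": "r", "c": "r"}
```

A supertree is consistent with an input set exactly when every input triplet passes this test against it.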
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
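The stated bound (maximum seismic moment limited to the injected volume times the modulus of rigidity) is straightforward to evaluate. A sketch assuming a typical crustal rigidity of 30 GPa and the standard moment-magnitude relation:

```python
import math

def max_seismic_moment(volume_m3, rigidity_pa=3.0e10):
    """McGarr-style bound: seismic moment (N*m) limited by injected volume
    times the modulus of rigidity (30 GPa is a typical crustal value)."""
    return rigidity_pa * volume_m3

def moment_magnitude(m0_nm):
    """Standard moment-magnitude relation Mw = (log10(M0) - 9.1) / 1.5."""
    return (math.log10(m0_nm) - 9.1) / 1.5

# 1e5 m^3 of injected fluid caps the moment at 3e15 N*m (about Mw 4.25).
mw = moment_magnitude(max_seismic_moment(1.0e5))
```

Because the bound is linear in volume while magnitude is logarithmic in moment, a tenfold increase in injected volume raises the capped magnitude by only about two thirds of a unit.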
Amazing 7-day, super-simple, scripted guide to teaching or learning percents
Hernandez, Lisa
2014-01-01
Welcome to The Amazing 7-Day, Super-Simple, Scripted Guide to Teaching or Learning Percents. I have attempted to do just what the title says: make learning percents super simple. I have also attempted to make it fun and even ear-catching. The reason for this is not that I am a frustrated stand-up comic, but because in my fourteen years of teaching the subject, I have come to realize that my jokes, even the bad ones, have a crazy way of sticking in my students' heads. And should I use a joke (even a bad one) repetitively, the associations become embedded in their brains, many times to their cha
Austrian Business Cycle Theory: Are 100 Percent Reserves Sufficient to Prevent a Business Cycle?
Philipp Bagus
2010-02-01
Authors in the Austrian tradition have identified the credit expansion of a fractional reserve banking system as the prime cause of business cycles. Authors such as Selgin (1988) and White (1999) have argued that a solution to this problem would be a free banking system; they maintain that competition between banks would limit the credit expansion effectively. Other authors such as Rothbard (1991) and Huerta de Soto (2006) have gone further and advocated a 100 percent reserve banking system, ruling out credit expansion altogether. In this article it is argued that a 100 percent reserve system can still bring about business cycles through excessive maturity mismatching between deposits and loans.
Prediction of upper flammability limit percent of pure compounds from their molecular structures.
Gharagheizi, Farhad
2009-08-15
In this study, a quantitative structure-property relationship (QSPR) is presented to predict the upper flammability limit percent (UFLP) of pure compounds. The obtained model is a five-parameter multilinear equation whose parameters are calculated only from the chemical structure. The average absolute error and squared correlation coefficient of the model over all 865 pure compounds used to develop it are 9.7% and 0.92, respectively.
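A five-parameter multilinear QSPR model is an ordinary least-squares fit of the property against structural descriptors. The paper's descriptors are not listed here, so the sketch below fits a generic multilinear model to synthetic data via the normal equations, with no external libraries:

```python
def fit_multilinear(X, y):
    """Least-squares fit of y ~ b0 + b1*x1 + ... via the normal equations,
    solved with Gaussian elimination and partial pivoting."""
    rows = [[1.0] + list(x) for x in X]  # prepend intercept column
    k = len(rows[0])
    # Build A = X^T X and b = X^T y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Forward elimination
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic check: data generated from y = 2 + 3*x1 - x2 is recovered.
X = [(1, 2), (2, 1), (3, 5), (4, 2), (5, 7), (0, 1)]
y = [2.0 + 3.0 * x1 - x2 for x1, x2 in X]
beta = fit_multilinear(X, y)  # ~[2.0, 3.0, -1.0]
```

With five descriptors instead of two, the same routine produces the five-parameter model form used in the paper.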
A 20 GHz, 70 watt, 48 percent efficient space communications TWT
McDermott, M. A.; Tamashiro, R. N.
A space qualifiable helix traveling wave tube capable of producing saturated output power levels above 70 watts at 48 percent total efficiency has been developed for 20 GHz satellite communications systems. The design approach stresses high reliability consistent with high power and efficiency. Advanced construction features incorporated into the design are a five stage collector, an M-type dispenser cathode, and a dynamic velocity tapered (DVT) helix.
Some Weeds Community Percent in Response to Pumice Application on Soil under Water Stress Conditions
Davoud Zarehaghi
2016-02-01
A factorial experiment (RCBD design with three replications) was conducted in 2014 at the University of Tabriz, Iran, to determine the effects on dominant weed community percent of pumice application to soil (P1, P2, P3, and P4: control, 30, 60, and 90 tons per ha) and water stress (I1, I2, and I3: 100%, 70%, and 50% of the water requirement calculated from a class A pan, respectively). Results showed that the community percent of weed species changed as a result of water stress and pumice application to the soil. The distributions of Chenopodium album and Malva sylvestris were sensitive to water stress, whereas Amaranthus retroflexus and Solanum nigrum were neutral to it. In contrast, the distributions of Amaranthus retroflexus, Cardaria draba, Setaria viridis, Sisymbrium irio, Xanthium strumarium, Convolvulus arvensis, and Salsola rigida were resistant to water stress. The community percent of Chenopodium album, a species sensitive to water stress, and of Salsola rigida, a species resistant to water stress, was positively affected by pumice application, especially under water stress conditions. Amaranthus retroflexus, Xanthium strumarium, and Convolvulus arvensis were positively affected by pumice application under both well-watered and limited water supply conditions. In contrast, Cardaria draba, Sisymbrium irio, and Solanum nigrum were negatively affected by pumice under water stress, while pumice had a positive effect on the community of these species under well-watered conditions. Thus, pumice application and water stress are two factors that change weed community percent.
Ultrasonic methods for measuring liquid viscosity and volume percent of solids
Sheen, S.H.; Chien, H.T.; Raptis, A.C.
1997-02-01
This report describes two ultrasonic techniques under development at Argonne National Laboratory (ANL) in support of the tank-waste transport effort undertaken by the U.S. Department of Energy in treating low-level nuclear waste. The techniques are intended to provide continuous on-line measurements of waste viscosity and volume percent of solids in a waste transport line. The ultrasonic technique being developed for waste-viscosity measurement is based on the patented ANL viscometer. Focus of the viscometer development in this project is on improving measurement accuracy, stability, and range, particularly in the low-viscosity range (<30 cP). A prototype instrument has been designed and tested in the laboratory. Better than 1% accuracy in liquid density measurement can be obtained by using either a polyetherimide or polystyrene wedge. To measure low viscosities, a thin-wedge design has been developed and shows good sensitivity down to 5 cP. The technique for measuring volume percent of solids is based on ultrasonic wave scattering and phase velocity variation. This report covers a survey of multiple scattering theories and other phenomenological approaches. A theoretical model leading to development of an ultrasonic instrument for measuring volume percent of solids is proposed, and preliminary measurement data are presented.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
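The Wiener index referenced above is the sum of shortest-path distances over all vertex pairs; for a tree (or any unweighted graph) it can be computed by a breadth-first search from each vertex. A short sketch:

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs
    (BFS from every vertex; each pair is counted twice, hence the // 2)."""
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2

# Path on four vertices: pair distances 1, 1, 1, 2, 2, 3 sum to 10.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

The open problem settled in the paper concerns maximizing this quantity over all trees with a prescribed degree sequence.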
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
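Envelope curves of the kind described are commonly summarized as power laws of drainage area. Crippen and Bue's regional curves are more elaborate; the sketch below uses a single power law with hypothetical placeholder coefficients purely to illustrate the form:

```python
def envelope_discharge(area_sq_mi, k=10000.0, b=0.6):
    """Power-law envelope Q = K * A^b (Q in ft^3/s, A in mi^2). K and b
    are hypothetical placeholders, not Crippen and Bue's regional fits."""
    return k * area_sq_mi ** b
```

On log-log axes this is a straight line lying above the plotted regional floods, which is how the envelope curves in the study are constructed.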
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method in a search for point sources in excess of a model for the background radiation. This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method (MEM). We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions.
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China); Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schild, Axel [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schmidt, Burkhard, E-mail: burkhard.schmidt@fu-berlin.de [Institut für Mathematik, Freie Universität Berlin, Arnimallee 6, 14195 Berlin (Germany); Yang, Yonggang, E-mail: ygyang@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China)
2014-10-17
Highlights: • Coherent tunneling in one-dimensional symmetric double well potentials. • Potentials for analytical estimates in the deep tunneling regime. • Maximum velocities scale as the square root of the ratio of barrier height and mass. • In chemical physics maximum tunneling velocities are in the order of a few km/s. - Abstract: We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to several prototypical molecular models of non-cyclic and cyclic tunneling, including ammonia inversion, Cope rearrangement of semibullvalene, torsions of molecular fragments, and rotational tunneling in strong laser fields. Typical maximum velocities and angular velocities are in the order of a few km/s and from 10 to 100 THz for our non-cyclic and cyclic systems, respectively, much faster than time-averaged velocities. Even for the more extreme case of an electron tunneling through a barrier of height of one Hartree, the velocity is only about one percent of the speed of light. Estimates of the corresponding time scales for
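The quoted scaling can be checked against the abstract's electron example. Assuming a kinetic-energy-style prefactor of sqrt(2) (an assumption here; the abstract states only the sqrt(barrier height / mass) scaling), a one-Hartree barrier gives roughly one percent of the speed of light:

```python
import math

# Physical constants (SI)
HARTREE_J = 4.359744e-18
ELECTRON_MASS_KG = 9.109384e-31
SPEED_OF_LIGHT = 2.99792458e8  # m/s

def max_tunneling_velocity(barrier_height_j, mass_kg):
    """Scaling from the abstract, v ~ sqrt(V0 / m), with an assumed
    prefactor of sqrt(2) (kinetic-energy-style estimate)."""
    return math.sqrt(2.0 * barrier_height_j / mass_kg)

# An electron under a one-Hartree barrier: about one percent of c.
v = max_tunneling_velocity(HARTREE_J, ELECTRON_MASS_KG)
```

The same scaling explains why molecular tunneling velocities (much larger masses, comparable barrier heights) land in the few-km/s range quoted in the highlights.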
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost, and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. Circuit-oriented model development is presented, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
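The hill-climbing logic behind many MPPT controllers (perturb and observe, one common scheme; the paper analyzes the converter topologies rather than a specific tracking algorithm) can be sketched against a toy PV curve. The curve parameters below are illustrative only:

```python
import math

def pv_power(v, isc=5.0, voc=22.0, vt=1.5):
    """Toy PV curve: current I = Isc * (1 - exp((V - Voc) / Vt)), clamped
    at zero; parameters are illustrative, not from the paper."""
    i = isc * (1.0 - math.exp((v - voc) / vt))
    return v * max(i, 0.0)

def perturb_and_observe(v0=10.0, dv=0.05, steps=2000):
    """Perturb-and-observe MPPT: keep stepping the operating voltage in
    the same direction while power rises; reverse when it falls."""
    v, p, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v_next = v + direction * dv
        p_next = pv_power(v_next)
        if p_next < p:
            direction = -direction  # stepped past the peak: turn around
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

In a converter-based system the voltage perturbation is realized by adjusting the duty ratio, which is exactly where the load-value and duty-ratio conditions derived in the paper come in.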
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
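For context, the maximum clique problem asks for the largest set of pairwise-adjacent vertices. A brute-force solver (exponential time, as expected for an NP-complete problem) makes the definition concrete:

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Brute-force maximum clique: try vertex subsets from largest to
    smallest and return the first one that is pairwise adjacent."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for k in range(len(vertices), 0, -1):
        for cand in combinations(vertices, k):
            if all(b in adj[a] for a, b in combinations(cand, 2)):
                return set(cand)
    return set()
```

The reductions discussed in the paper transform instances into graphs with special structure without blowing up order or size, precisely so that exhaustive approaches like this remain comparable across the transformed instances.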
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
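The (modified) maximum marking strategy refines every element whose error indicator is within a fixed factor of the largest indicator. A minimal sketch of the basic maximum strategy (the paper's modification is not reproduced here):

```python
def maximum_marking(indicators, theta=0.5):
    """Maximum marking strategy: mark every element whose error indicator
    is at least theta times the largest indicator."""
    eta_max = max(indicators.values())
    return {elem for elem, eta in indicators.items() if eta >= theta * eta_max}
```

Each adaptive iteration computes indicators on the current mesh, marks elements with this rule, and refines the marked set; the instance optimality result concerns the total error produced by that loop.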
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
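The jump in reliability from a single trial (0.939) to five trials (0.987) is consistent with what the standard Spearman-Brown prophecy formula predicts; a quick check (the formula is standard psychometrics, not something the abstract names):

```python
def spearman_brown(r, k):
    """Predicted reliability when a measure is lengthened by a factor k."""
    return k * r / (1 + (k - 1) * r)

# Single-trial reliability 0.939 extended to 5 trials:
print(round(spearman_brown(0.939, 5), 3))  # 0.987, matching the reported value
# Single-day reliability 0.836 extended to 2 days:
print(round(spearman_brown(0.836, 2), 3))  # 0.911, matching the reported value
```

The agreement suggests the trial-to-trial and day-to-day aggregation in the study behaves like classical parallel measurements.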
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
An experimental investigation of two 15 percent-scale wind tunnel fan-blade designs
Signor, David B.
1988-01-01
An experimental 3-D investigation of two fan-blade designs was conducted. The fan blades tested were 15 percent-scale models of blades to be used in the fan drive of the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. NACA 65- and modified NACA 65-series sections incorporated increased thickness on the upper surface, between the leading edge and the one-half-chord position. Twist and taper were the same for both blade designs. The fan blades with modified 65-series sections were found to have an increased stall margin when they were compared with the unmodified blades.
George Miliaresis
2009-04-01
Full Text Available The U.S. National Elevation Dataset and the NLCD 2001 landcover data were used to test the correlation between SRTM elevation values and the height of evergreen forest vegetation in the Klamath Mountains of California. Vegetation height estimates (SRTM-NED) are valid for only two of the eight geographic directions (N, NE, E, SE, S, SW, W, NW), due to NED and SRTM grid data misregistration. Penetration depths of SRTM radar were found to correlate linearly with tree percent canopy density.
Observations of ferroelastic switching by Raman spectroscopy in 18-percent ceria-stabilized zirconia
Bolon, Amy; Munoz Saldana, Juan; Gentleman, Molly
2011-03-01
Ferroelastic switching has been shown to be responsible for significant increases in the toughness of tetragonal zirconia ceramics. Observations of switching and measurements of coercive stress have generally been limited to TEM studies on large single crystals. In this study we show that it is possible to observe ferroelastic switching in 18 mole-percent ceria stabilized zirconia using polarized confocal Raman spectroscopy. Observations were made on bulk polycrystalline samples indented with a standard Vicker's indent and exhibited reorientation of crystal domains along the crack as well as near the crack tip. Coercive stress measurements were made by loading the samples uniaxially while making measurements of domain orientation.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
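Of the processes listed, the Ornstein-Uhlenbeck process is easy to simulate exactly via its discrete-time transition; a minimal sketch in which θ, σ, and dt are arbitrary illustrative choices, not values from the paper:

```python
import math
import random

random.seed(0)
theta, sigma, dt = 1.0, 1.0, 0.1     # illustrative OU parameters
a = math.exp(-theta * dt)            # exact one-step decay factor
s = sigma * math.sqrt((1 - a * a) / (2 * theta))  # exact one-step noise sd

# Simulate x_{t+dt} = a*x_t + noise; this is the exact OU transition.
x, xs = 0.0, []
for _ in range(200_000):
    x = a * x + s * random.gauss(0.0, 1.0)
    xs.append(x)

mean = sum(xs) / len(xs)
var = sum(v * v for v in xs) / len(xs) - mean ** 2
print(var)  # close to the stationary variance sigma^2 / (2*theta) = 0.5
```

The sample variance converging to σ²/(2θ) is the fluctuation-dissipation balance the abstract's last sentence refers to.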
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
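A common first diagnostic for the kind of power-law (Pareto) tail this test targets is the Hill estimator. The sketch below uses standard tools (inverse-CDF sampling and the Hill estimator), not the paper's maximum entropy method, and recovers the tail index of synthetic Pareto data:

```python
import math
import random

random.seed(42)
alpha, n = 2.5, 100_000
# Classic Pareto (x_m = 1) via inverse-CDF sampling: x = (1 - u)^(-1/alpha)
xs = sorted((1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n))

# Hill estimator of the tail index from the top k order statistics
k = 1000
threshold = xs[-k - 1]
alpha_hat = k / sum(math.log(x / threshold) for x in xs[-k:])
print(alpha_hat)  # close to the true alpha = 2.5
```

On mixed lognormal-body/Pareto-tail data the Hill plot varies with k, which is exactly the ambiguity the paper's entropy-based test is designed to resolve.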
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
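The single-constraint claim can be checked numerically: among distributions on {1, ..., N} with a fixed mean of ln n, the power law p_n ∝ n^(-s) maximizes Shannon entropy, so any perturbation that preserves both constraints must lower H. A minimal sketch (N, s, and the particular perturbation are arbitrary choices):

```python
import math

s, N, eps = 2.0, 50, 1e-3
w = [n ** -s for n in range(1, N + 1)]
Z = sum(w)
p = [x / Z for x in w]                # power law p_n ∝ n^-s on {1..N}

def shannon(dist):
    return -sum(x * math.log(x) for x in dist)

def mean_log(dist):
    return sum(x * math.log(n) for n, x in enumerate(dist, start=1))

# Perturb at n = 2, 4, 8 by eps*(1, -2, 1): since ln2 + ln8 = 2*ln4,
# this preserves both normalization and the mean of ln n exactly.
q = list(p)
q[1] += eps
q[3] -= 2 * eps
q[7] += eps

print(shannon(p) - shannon(q))  # positive: the power law has larger entropy
```

Since entropy is strictly concave and the two constraints are linear, the power law is the unique maximizer, so the gap is positive for every such perturbation.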
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 M_Earth), the overall maximum radius a planet can have varies between 1.8 and 2.3 R_Earth. This radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
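The reduction of the Tsallis form to the Boltzmann-Gibbs exponential as q → 1 can be checked numerically. The q-exponential survival factor below is a standard parametrization (not necessarily the paper's exact expression), and α and D are illustrative values:

```python
import math

def survival_tsallis(alpha, D, q):
    """q-exponential survival factor (hypothetical parametrization), q < 1."""
    base = 1.0 - (1.0 - q) * alpha * D
    return max(base, 0.0) ** (1.0 / (1.0 - q))

alpha, D = 0.3, 2.0
for q in (0.5, 0.9, 0.999):
    print(q, survival_tsallis(alpha, D, q))
print("BG limit:", math.exp(-alpha * D))  # recovered as q -> 1
```

The `max(base, 0.0)` clause implements the cutoff the abstract mentions: beyond a finite dose the survival factor is exactly zero, unlike the always-positive exponential.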
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G), \lambda_2(G), \ldots, \lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
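The definition $EE(G)=\sum_i e^{\lambda_i(G)}$ is straightforward to compute directly from an adjacency matrix; a minimal sketch (the triangle $C_3$ is an arbitrary example graph, not one from the paper):

```python
import math
import numpy as np

def estrada_index(A):
    """EE(G) = sum_i exp(lambda_i) over adjacency-matrix eigenvalues."""
    return sum(math.exp(l) for l in np.linalg.eigvalsh(np.asarray(A, float)))

# Triangle C3: eigenvalues are 2, -1, -1, so EE = e^2 + 2/e
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
print(estrada_index(A))  # ≈ 8.1248
```

`eigvalsh` is used because adjacency matrices of simple graphs are symmetric, so all eigenvalues are real.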
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Validation of a dual-cycle ergometer for exercise during 100 percent oxygen prebreathing
Wiegman, Janet F.; Ohlhausen, John H.; Webb, James T.; Pilmanis, Andrew A.
1992-01-01
A study has been designed to determine if exercise, while prebreathing 100 percent oxygen prior to decompression, can reduce the current resting-prebreathe time requirements for extravehicular activity and high altitude reconnaissance flight. For that study, a suitable exercise mode was required. Design considerations included space limitations, cost, pressure suit compatibility, ease and maintenance of calibration, accuracy of work output, and assurance that no significant mechanical advantage or disadvantage would be introduced into the system. In addition, the exercise device must enhance denitrogenation by incorporation of both upper and lower body musculature at high levels of oxygen consumption. The purpose of this paper is to describe the specially constructed, dual-cycle ergometer developed for simultaneous arm and leg exercise during prebreathing, and to compare maximal oxygen uptake obtained on the device to that obtained during leg-only cycle ergometry and treadmill testing. Results demonstrate the suitability of the dual-cycle ergometer as an appropriate tool for exercise research during 100 percent oxygen prebreathing.
Mahboobeh Mehmandoost
2014-11-01
Full Text Available Wood has been an important organic material throughout human history, so its conservation and optimal use is a considerable problem. On the one hand, given the condition of forests in Iran, the fast-growing species Paulownia opens a new path for the wood industry; on the other hand, this low-density species has low strength. One suggested way to increase the density of this wood is to impregnate it with resin and compress it. In this research, we tried to increase the penetrability and impregnation of Paulownia with urea-formaldehyde resin by applying a pretreatment first and then compressing the wood. Two variables, pretreatment and compression percentage, were defined, each at two levels: pretreatment with NaCl or NaOH, and compression of 40% or 50%. In total, 72 samples were prepared, and after producing the compressed wood, the absorption percentage and mechanical properties were evaluated, including compression parallel to grain, modulus of rupture, modulus of elasticity in bending, and impact strength. The results showed that samples pretreated with NaCl had the highest values of these mechanical properties at both the 40% and 50% compression levels.
Schweitzer, M.
2002-05-31
The U.S. Department of Energy's (DOE's) Weatherization Assistance Program has been installing energy-efficiency measures in low-income houses for over 25 years, achieving savings exceeding 30 percent of natural gas used for space heating. Recently, as part of its Weatherization Plus initiative, the Weatherization Assistance Program adopted the goal of achieving 30 percent energy savings for all household energy usage. The expansion of the Weatherization Assistance Program to include electric baseload components such as lighting and refrigerators provides additional opportunities for saving energy and meeting this ambitious goal. This report documents an Oak Ridge National Laboratory study that examined the potential savings that could be achieved by installing various weatherization measures in different types of dwellings throughout the country. Three different definitions of savings are used: (1) reductions in pre-weatherization expenditures; (2) savings in the amount of energy consumed at the house site, regardless of fuel type ("site Btus"); and (3) savings in the total amount of energy consumed at the source ("source Btus"), which reflects the fact that each Btu of electricity consumed at the household level requires approximately three Btus to produce at the generation source. In addition, the effects of weatherization efforts on carbon dioxide (CO2) emissions are examined.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max - X_0 = (0.59 ± 0.02) · Y_X/P · C.
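The closing prediction equation is simple to apply. In the sketch below, the inputs (initial biomass X_0, yield Y_X/P, and MIC C) are hypothetical illustration values, not data from the study, and the central coefficient 0.59 is used:

```python
def predicted_max_biomass(x0, yield_per_lactate, mic_lactate):
    """X_max = X_0 + (0.59 ± 0.02) · Y_X/P · C, using the central value 0.59.

    x0: initial biomass (g/L); yield_per_lactate (Y_X/P): g biomass per g
    lactate; mic_lactate (C): MIC of lactate (g/L). All values hypothetical.
    """
    return x0 + 0.59 * yield_per_lactate * mic_lactate

print(predicted_max_biomass(x0=0.1, yield_per_lactate=0.25, mic_lactate=40.0))
# -> 6.0 (g/L), i.e. growth stops once accumulated lactate reaches the MIC
```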
Black, Nate; Nabokov, Vanessa; Vijayadeva, Vinutha; Novotny, Rachel
2011-11-01
Samoan women exhibit high rates of obesity, which can possibly be attenuated through diet and physical activity. Obesity, and body fatness in particular, is associated with increased risk for chronic diseases. Ancestry, physical activity, and dietary patterns have been associated with body composition. Using a cross-sectional design, the relative importance of proportion of Pacific Islander (PI) ancestry, level of physical activity, and macronutrients among healthy women in Honolulu, Hawai'i, ages 18 to 28 years was examined. All data were collected between January 2003 and December 2004. Percent body fat (%BF) was determined by whole body dual energy x-ray absorptiometry (DXA). Nutrient data were derived from a three-day food record. Means and standard deviations were computed for all variables of interest. Bivariate correlation analysis was used to determine correlates of %BF. Multiple regression analysis was used to determine relative contribution of variables significantly associated with %BF. Proportion of PI ancestry was significantly positively associated with %BF (P=0.0001). Physical activity level was significantly negatively associated with %BF (P=0.0006). Intervention to increase physical activity level of young Samoan women may be effective to decrease body fat and improve health. CRC-NIH grant: 0216.
Transition aerodynamics for 20-percent-scale VTOL unmanned aerial vehicle
Kjerstad, Kevin J.; Paulson, John W., Jr.
1993-01-01
An investigation was conducted in the Langley 14- by 22-Foot Subsonic Tunnel to establish a transition data base for an unmanned aerial vehicle utilizing a powered-lift ejector system and to evaluate alterations to the ejector system for improved vehicle performance. The model used in this investigation was a 20-percent-scale, blended-body, arrow-wing configuration with integrated twin rectangular ejectors. The test was conducted from hover through transition conditions with variations in angle of attack, angle of sideslip, free-stream dynamic pressure, nozzle pressure ratio, and model ground height. Force and moment data along with extensive surface pressure data were obtained. A laser velocimeter technique for measuring inlet flow velocities was demonstrated at a single flow condition, and also a low order panel method was successfully used to numerically simulate the ejector inlet flow.
Phased Acoustic Array Measurements of a 5.75 Percent Hybrid Wing Body Aircraft
Burnside, Nathan J.; Horne, William C.; Elmer, Kevin R.; Cheng, Rui; Brusniak, Leon
2016-01-01
Detailed acoustic measurements of the noise from the leading-edge Krueger flap of a 5.75 percent Hybrid Wing Body (HWB) aircraft model were recently acquired with a traversing phased microphone array in the AEDC NFAC (Arnold Engineering Development Complex, National Full Scale Aerodynamics Complex) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The spatial resolution of the array was sufficient to distinguish between individual support brackets over the full-scale frequency range of 100 to 2875 Hertz. For conditions representative of landing and take-off configuration, the noise from the brackets dominated other sources near the leading edge. Inclusion of flight-like brackets for select conditions highlights the importance of including the correct number of leading-edge high-lift device brackets with sufficient scale and fidelity. These measurements will support the development of new predictive models.
Bill, R. C.
1972-01-01
Damage scar volume measurements taken from like metal fretting pairs combined with scanning electron microscopy observations showed that three sequentially operating mechanisms result in the fretting of titanium, Monel-400, and cobalt - 25-percent molybdenum. Initially, adhesion and plastic deformation of the surface played an important role. This was followed after a few hundred cycles by a fatigue mechanism which produced spall-like pits in the damage scar. Finally, a combination of oxidation and abrasion by debris particles became most significant. Damage scar measurements made on several elemental metals after 600,000 fretting cycles suggested that the ratio of oxide hardness to metal hardness was a measure of the susceptibility of a metal to progressive damage by fretting.
Aerodynamic performance of two fifteen-percent-scale wind-tunnel drive fan designs
Signor, D. B.; Borst, H. V.
1986-01-01
An experimental and analytical investigation of two fan blade designs was conducted. The fan blades tested were 15 percent scale models of the blades used in the National Full Scale Aerodynamic Complex fan drive at NASA Ames Research Center. The fan blades were composed of NACA-65 and modified NACA-65-series airfoil design sections. The blades with modified 65-series sections incorporated increased thickness on the upper surface, between the leading edge and the one-half chord position. Twist and taper were the same for both blade designs. The fan blades with modified 65-series sections were found to have an increase in stall margin when they were compared with the unmodified blades. The experimental performance data agreed favorably with theoretical calculations.
A fuzzy neural network model to forecast the percent cloud coverage and cloud top temperature maps
Y. Tulunay
2008-12-01
Atmospheric processes are highly nonlinear. A small group at METU in Ankara has been working on a fuzzy, data-driven generic model of nonlinear processes. The model developed is called the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M). The METU-FNN-M consists of a Fuzzy Inference System (METU-FIS), a data-driven Neural Network module (METU-FNN) with one hidden layer and several neurons, and a mapping module, which employs the Bezier Surface Mapping technique. In this paper, the percent cloud coverage (%CC) and cloud top temperatures (CTT) are forecast one month ahead of time at 96 grid locations. The probable influence of cosmic rays and sunspot numbers on cloudiness is considered by using the METU-FNN-M.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
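The decision step described above — correlating incoming data against each hypothesized phase-coded signal and picking the highest statistic — can be sketched in a simplified form. This is a toy stand-in, not the patent's MAP estimator-correlator: it uses the magnitude of the plain complex correlation as the decision statistic, and the two 4-chip phase codes are invented for illustration.

```python
import numpy as np

def decode_phase_coded(received, hypotheses):
    """Pick the hypothesis whose phase code best matches the received
    samples, using the magnitude of the complex correlation as the
    decision statistic (a simplified stand-in for the MAP statistic)."""
    templates = [np.exp(1j * np.asarray(p)) for p in hypotheses]
    stats = [abs(np.vdot(t, received)) for t in templates]
    return int(np.argmax(stats))

# Two hypothesized 4-chip phase codes (illustrative values).
hypotheses = [
    [0.0, np.pi / 2, np.pi, 3 * np.pi / 2],
    [0.0, np.pi, 0.0, np.pi],
]

# Received signal: hypothesis 1 with a small common phase perturbation,
# mimicking the spurious random phase offsets the abstract mentions.
rx = np.exp(1j * (np.array(hypotheses[1]) + 0.1))
best = decode_phase_coded(rx, hypotheses)
```

Because the statistic takes the magnitude of the correlation, a common phase offset across all chips does not change the decision, which is the property that matters for signals with random phase perturbations.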
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
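The piecewise-linear LP idea above can be sketched with an off-the-shelf solver. This is a toy reconstruction, not the paper's revised-simplex implementation: the tangent-cut construction of the entropy objective, the 4-variable example, and the use of SciPy's `linprog` are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(n, tangent_points):
    """Maximize sum_i H(x_i), H(x) = -x*log(x), subject to sum(x) = 1,
    by replacing the concave H with its piecewise-linear upper envelope
    built from tangent lines at `tangent_points`.

    Variables: x_1..x_n, then t_1..t_n with t_i <= tangent_k(x_i).
    Tangent of H at p: t - H'(p)*x <= H(p) - H'(p)*p, which simplifies
    to t + (ln p + 1)*x <= p.
    """
    n_vars = 2 * n
    c = np.zeros(n_vars)
    c[n:] = -1.0                      # minimize -sum(t) == maximize sum(t)
    A_ub, b_ub = [], []
    for p in tangent_points:
        slope = np.log(p) + 1.0       # -H'(p)
        for i in range(n):
            row = np.zeros(n_vars)
            row[i] = slope            # coefficient of x_i
            row[n + i] = 1.0          # coefficient of t_i
            A_ub.append(row)
            b_ub.append(p)
    A_eq = np.zeros((1, n_vars))
    A_eq[0, :n] = 1.0                 # sum(x) = 1
    bounds = [(1e-9, 1.0)] * n + [(None, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n], -res.fun        # solution and approximate entropy

# With only the normalization constraint, the MaxEnt solution is uniform,
# so the optimal objective should be ln(4) for n = 4.
x, H = maxent_lp(4, tangent_points=np.linspace(0.05, 0.95, 19))
```

With more tangent cuts the approximation tightens; adding data-fidelity rows to `A_ub` (e.g. bounds on residuals of a blurred signal) recovers the restoration setting of the abstract.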
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu, and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
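The full Barrass, Millward, Eryuzlu, and ICORELS expressions are not reproduced in the abstract. As a rough illustration, Barrass's widely quoted simplified rule of thumb can be sketched; treat the constants (100 for open shallow water, 50 for a confined channel) as an assumption based on the common citation of that rule, not as the paper's own equations.

```python
def barrass_max_squat(block_coeff, speed_knots, confined=False):
    """Barrass's simplified maximum-squat estimate in metres.

    Widely quoted rule of thumb: S_max ~ C_B * V_k^2 / 100 in open,
    shallow water, and roughly twice that (C_B * V_k^2 / 50) in a
    confined channel, where V_k is ship speed in knots and C_B is the
    block coefficient. An approximation, not the full Barrass method.
    """
    divisor = 50.0 if confined else 100.0
    return block_coeff * speed_knots ** 2 / divisor

# A C_B = 0.8 cargo ship at 10 knots:
open_water = barrass_max_squat(0.8, 10.0)            # 0.8 m
canal = barrass_max_squat(0.8, 10.0, confined=True)  # 1.6 m
```

The quadratic dependence on speed is why squat comparisons, like the one in the paper, are run over a range of ship speeds.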
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics, and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
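A toy grid-search version of this idea can be sketched. Under a white-Gaussian-noise assumption, maximizing the likelihood over a shared fundamental reduces to maximizing, summed across channels, the energy captured by a per-channel least-squares fit of harmonics at each candidate f0. The model order, grid, and test signals below are invented for illustration; this is a simplified reading of the estimator, not the paper's method.

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harm=3):
    """Grid-search ML-style pitch estimate for channels that share a
    fundamental frequency but have independent amplitudes and phases:
    sum, across channels, the energy captured by a harmonic
    least-squares fit at each candidate f0, and take the argmax."""
    n = len(channels[0])
    t = np.arange(n) / fs
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        # Design matrix of cosines/sines at harmonics of f0.
        Z = np.column_stack(
            [np.cos(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1)]
            + [np.sin(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1)]
        )
        score = 0.0
        for x in channels:
            coeffs, *_ = np.linalg.lstsq(Z, x, rcond=None)
            fit = Z @ coeffs
            score += fit @ fit          # energy captured by the model
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

fs, n = 8000, 800
t = np.arange(n) / fs
f0 = 220.0
# Two channels: same f0, different amplitudes, phases and harmonics.
ch1 = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t + 0.3)
ch2 = 0.7 * np.sin(2 * np.pi * f0 * t + 1.0) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
f0_hat = multichannel_pitch([ch1, ch2], fs, np.arange(100.0, 401.0, 1.0))
```

Because each channel gets its own amplitude and phase coefficients, the channels only need to agree on the fundamental, which is the key modeling point in the abstract.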
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
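The fixed-point equation for line fluxes mentioned above can be illustrated for the simplest case: a single line of known profile f_i on a known background b_i, with Poisson counts n_i. Setting the derivative of the Poisson log-likelihood for the amplitude A to zero yields the iteration below. The profile, background, and counts are invented for the demo; this is a sketch of the idea, not CORA's actual code.

```python
import numpy as np

def poisson_line_flux(counts, profile, background, iters=200):
    """Maximum likelihood amplitude A for the model mu_i = A*f_i + b_i
    with Poisson counts n_i, via the fixed-point iteration obtained by
    setting the derivative of the Poisson log-likelihood to zero:
        A <- A * sum(n*f / (A*f + b)) / sum(f)
    """
    n = np.asarray(counts, dtype=float)
    f = np.asarray(profile, dtype=float)
    b = np.asarray(background, dtype=float)
    A = 1.0                              # arbitrary positive start
    for _ in range(iters):
        A = A * np.sum(n * f / (A * f + b)) / f.sum()
    return A

# Gaussian line profile on a 21-bin grid (illustrative numbers).
x = np.arange(21)
f = np.exp(-0.5 * ((x - 10) / 2.0) ** 2)
b = np.full(21, 2.0)
true_A = 50.0
counts = true_A * f + b        # noise-free "observation" for the demo
A_hat = poisson_line_flux(counts, f, b)
```

On noise-free counts the iteration recovers the true amplitude exactly, since the true value is the unique stationary point of the map; on real Poissonian data it converges to the ML flux.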
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path, and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
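The counting form of Zipf's law stated above — the number of firms with size greater than S is inversely proportional to S — can be checked numerically on an exact rank-size population. The firm sizes below are a toy construction, not data from the paper.

```python
# Toy check of the Zipf counting rule: if firm sizes follow the
# rank-size law s_i = C / i, then N(S) = #{firms with size > S} ~ C / S.
C = 1000.0
sizes = [C / rank for rank in range(1, 1001)]   # 1000 firms

def n_greater(sizes, s):
    """Number of firms with size strictly greater than s."""
    return sum(1 for x in sizes if x > s)

# N(S) halves when S doubles, as 1/S predicts:
n10 = n_greater(sizes, 10.0)   # ranks with C/rank > 10, i.e. rank < 100 -> 99
n20 = n_greater(sizes, 20.0)   # rank < 50 -> 49
```

The paper's contribution is explaining why growth dynamics under Gibrat's rule, balanced between incumbents and entrants, select exactly this exponent.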
2010-10-01
... ENFORCEMENT SYSTEMS § 307.31 Federal financial participation at the 80 percent rate for computerized support... financial participation is available at the 80 percent rate to States, Territories and systems defined in 42... 45 Public Welfare 2 2010-10-01 2010-10-01 false Federal financial participation at the 80...
2010-04-01
... schedules; 30 percent occupancy by very-low income families. 884.116 Section 884.116 Housing and Urban... percent occupancy by very-low income families. (a) HUD will establish schedules of Income limits for determining whether families qualify as Low-Income Families and Very Low-Income Families. (b) In the...
Knechtle, Beat; Wirth, Andrea; Baumann, Barbara; Knechtle, Patrizia; Rosemann, Thomas
2010-01-01
We studied male and female nonprofessional Ironman triathletes to determine whether percent body fat, training, and/or previous race experience were associated with race performance. We used simple linear regression analysis, with total race time as the dependent variable, to investigate the relationship among athletes' percent body fat, average…
7 CFR 205.303 - Packaged products labeled “100 percent organic” or “organic.”
2010-01-01
... (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM Labels, Labeling, and Market Information § 205.303 Packaged products labeled “100 percent organic” or “organic.” (a) Agricultural products... product, the following: (1) The term, “100 percent organic” or “organic,” as applicable, to modify...
Pfennig, Brian W.; Schaefer, Amy K.
2011-01-01
A general chemistry laboratory experiment is described that introduces students to instrumental analysis using gas chromatography-mass spectrometry (GC-MS), while simultaneously reinforcing the concepts of mass percent and the calculation of atomic mass. Working in small groups, students use the GC to separate and quantify the percent composition…
Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images
Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.
2008-03-01
Breast density is an independent factor of breast cancer risk. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
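The core thresholding step described above — exclude non-breast voxels, apply a threshold through the reconstructed volume, report the percentage of dense voxels — can be sketched minimally. The arrays, mask, and threshold below are synthetic placeholders, not the semi-automated procedure of the paper (which selects thresholds manually from a few slices).

```python
import numpy as np

def percent_density(volume, breast_mask, threshold):
    """Percent density (PD) from a reconstructed volume: the percentage
    of voxels inside the breast mask whose intensity exceeds the
    dense-tissue threshold. Background and pectoral-muscle voxels are
    excluded via the mask, mirroring the preprocessing in the abstract."""
    inside = volume[breast_mask]
    return 100.0 * np.count_nonzero(inside > threshold) / inside.size

# Synthetic 3-slice "volume": each slice holds intensities 0..99.
vol = np.arange(3 * 10 * 10).reshape(3, 10, 10) % 100
mask = np.ones_like(vol, dtype=bool)
pd = percent_density(vol, mask, threshold=74)   # values 75..99 count as dense
```

In the paper's method, a single combined threshold is applied to every slice of the DBT volume, which is what applying one scalar `threshold` to the full 3D array reproduces here.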
Is percent seminoma associated with intraoperative morbidity during post-chemotherapy RPLND?
Russell, Christopher M; Sharma, Pranav; Agarwal, Gautum; Fisher, John S; Richard, George J; Spiess, Philippe E; Pow-Sang, Julio M; Poch, Michael A; Sexton, Wade J
2016-02-01
To evaluate whether varying degrees of seminomatous elements in the primary orchiectomy specimen would be predictive of patient morbidity during post-chemotherapy retroperitoneal lymph node dissection (PC-RPLND) since the desmoplastic reaction with seminoma is associated with increased intraoperative complexity. We retrospectively identified 127 patients who underwent PC-RPLND for residual retroperitoneal masses. Clinicodemographic, intraoperative, and 30 day postoperative outcomes were compared for patients with pure seminoma (SEM), mixed germ cell tumors (GCT) containing seminoma elements (NS+SEM), and tumors with no seminoma elements (NS). Multivariate logistic regression was used to determine independent predictors of intraoperative and postoperative 30 day complications. We excluded 19 patients who received chemotherapy prior to orchiectomy, 2 patients with primary extragonadal GCT, and 3 patients who underwent re-do RPLND, leaving 103 patients for analysis. Fourteen patients (13.6%) had SEM, 18 (17.5%) had NS+SEM, and 71 (68.9%) had only NS elements. SEM patients were older (p = 0.03), had more intraoperative blood loss (p = 0.03), and were more likely to have residual seminomatous components in their post-chemotherapy lymph node (LN) histology (p = 0.01). Percent seminoma in the orchiectomy specimen was an independent predictor of estimated blood loss > 1.5 liters (odds ratio: 1.04, 95% confidence interval: 1.01-1.07; p = 0.013) after adjusting for age, stage, IGCCC risk category, preop chemotherapy, number and largest LN removed, need for vascular or adjacent organ resection (including nephrectomy), and LN histology. Higher percentage of seminoma in the orchiectomy specimen is associated with increased estimated blood loss during PC-RPLND. Percent seminoma, therefore, may be a useful prognostic tool for appropriate pre-surgical planning prior to PC-RPLND.
Mesurolle, Benoît, E-mail: benoit.mesurolle@muhc.mcgill.ca; Ceccarelli, Joan; Karp, Igor; Sun, Simon; El-Khoury, Mona
2014-02-15
Objective: Active ingredients in antiperspirants - namely, aluminum-based complexes - can produce radiopaque particles on mammography, mimicking microcalcifications. The present study was designed to investigate whether the appearance of antiperspirant-induced radiopaque particles observed on mammograms depends on the percentage of aluminum-based complexes in antiperspirants and/or on their mode of application. Methods: A total of 43 antiperspirants with aluminum-based complex percentages ranging between 16% and 25% were tested. Each antiperspirant was applied to a single-use plastic shield and then placed on an ultrasound gel pad, simulating breast tissue. Two experiments were performed, comparing antiperspirants based on (1) their percentage of aluminum-based complexes (20 antiperspirants) and (2) their mode of application (solid, gel, and roll-on) (26 antiperspirants). Two experienced, blinded radiologists read the images in consensus and assessed the appearance of radiopaque particles based on their density and shape. Results: In experiment 1, there was no statistically significant association between the percent aluminum composition of invisible-solid antiperspirants and the density or shape of the radiopaque particles (p-values > 0.05). In experiment 2, there was a statistically significant association between the shape of the radiopaque particles and the mode of application of the antiperspirant (p-value = 0.0015). Conclusions: Our study suggests that the mammographic appearance of radiopaque antiperspirant particles is not related to their percent composition of aluminum complexes. However, the mode of application appears to influence the shape of the radiopaque particles, with solid antiperspirants mimicking microcalcifications the most and roll-on antiperspirants the least.
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding tasks pose a potential risk for the development of musculoskeletal injuries, since they are prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding tasks. This study aimed to examine the effects of load and the load's center of gravity (COG) height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used, with four levels of load (15%, 30%, 45% and 60% of the participant's maximum holding capacity) and two levels of the load's COG height in the box (0 cm and 40 cm above the handle position). Maximum holding endurance time decreased with increasing load and/or increasing COG height, and the effect of COG height diminished as load increased. Load, the load's COG height, and their interaction all significantly affected maximum holding endurance time. Practitioners should account for these effects when setting the working conditions of holding tasks.
Hirano, Masaharu [Tokyo Medical Coll. (Japan)
1998-11-01
Hypertrophic cardiomyopathy (HCM) is a cardiac disease whose basic pathology consists of a decrease in left ventricular diastolic compliance due to uneven hypertrophy of the left ventricular wall. Magnetic resonance imaging (MRI) is useful in monitoring uneven wall hypertrophy and wall motion in HCM patients. The present study was undertaken in 47 HCM patients who showed asymmetrical septal hypertrophy to determine whether percent wall thickening can serve as an indicator of left ventricular regional wall motion using cine MRI. Long-axis and short-axis images were acquired by the ECG-gated method using a 1.5 T MR imager. Cardiac function was analyzed from the long-axis cine images, and end-diastolic and end-systolic wall thickness were measured from the short-axis cine images at the papillary muscle level. The wall motion index and percent wall thickening were used as indicators of regional wall motion, and the correlation between these indicators and wall thickness was evaluated. Percent wall thickening changed at an earlier stage of hypertrophy than the wall motion index, and is thus thought to be useful in detecting left ventricular regional wall motion disorders at an early stage of HCM. (author)
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure-CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
BINDER DRAINAGE TEST FOR POROUS MIXTURES MADE BY VARYING THE MAXIMUM AGGREGATE SIZES
Hardiman Hardiman
2004-01-01
Binder drainage occurs in mixes with a small aggregate surface area, particularly porous asphalt. The binder drainage test, developed by the Transport Research Laboratory, UK, is commonly used to set an upper limit on the acceptable binder content for a porous mix. This paper presents the results of a laboratory investigation to determine the effects of different binder types on the binder drainage characteristics of porous mixes made with maximum aggregate sizes of 20, 14, and 10 mm. Two types of binder were used: conventional 60/70 pen bitumen and styrene-butadiene-styrene (SBS) modified bitumen. The amount of binder lost through drainage after three hours at the maximum mixing temperature was measured in duplicate for mixes of different maximum sizes and binder contents; the maximum mixing temperature adopted depends on the type of binder used. The retained binder is plotted against the initial mixed binder content, together with the line of equality where the retained binder equals the mixed binder content. The results indicate the significant contribution of SBS modified bitumen to increasing the target binder content. The significance is discussed in terms of the target binder content, the critical binder content, the maximum mixed binder content, and the maximum retained binder content values obtained from the binder drainage test. It was concluded that increasing the maximum aggregate size decreases the maximum retained binder content, critical binder content, target binder content, and maximum mixed binder content for both binders; for all mixtures, however, SBS gave the highest values.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of two subsystems; while the different ways of transfer affect the model in respects of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial
Jeremy Gibbons
2011-09-01
Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
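The two solutions the abstract contrasts can be sketched in Python; this is an illustrative rendering (the cubic-time specification and Kadane's linear-time algorithm), not the paper's datatype-generic monadic development, and it treats the empty segment (sum 0) as admissible:

```python
def mss_spec(xs):
    """Cubic-time specification: maximum over the sums of all contiguous
    segments xs[i:j], including the empty segment (sum 0)."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss_linear(xs):
    """Kadane's linear-time algorithm: track the best segment ending at the
    current position and the best segment seen so far."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # drop a negative prefix
        best = max(best, ending_here)
    return best
```

On Bentley's classic example list, both versions agree: `mss_linear([31, -41, 59, 26, -53, 58, 97, -93, -23, 84])` returns 187, the sum of the segment `[59, 26, -53, 58, 97]`.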
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (Bt)t≥0 and the equation of motion dXt = vt dt + 2 dBt, we set St = max0≤s≤t Xs and consider the optimal control problem supv E(Sτ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying vt ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | Xt ∉ (ℓ0, ℓ1)} with μ0g∗(St, where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on this synchronization phenomenon is therefore key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, even though objects of interest may be moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects in surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used to discriminate target objects in surveillance video. Our experimental results are presented in terms of success rate and segmentation precision.
Immigrants in the one percent: The national origin of top wealth owners
Keister, Lisa A.; Aronson, Brian
2017-01-01
Background: Economic inequality in the United States is extreme, but little is known about the national origin of affluent households. Households in the top one percent by total wealth own vastly disproportionate quantities of household assets and have correspondingly high levels of economic, social, and political influence. The overrepresentation of white natives (i.e., those born in the U.S.) among high-wealth households is well-documented, but changing migration dynamics suggest that a growing portion of top households may be immigrants. Methods: Because no single survey dataset contains both top wealth holders and data about country of origin, this paper uses two publicly-available data sets: the Survey of Consumer Finances (SCF) and the Survey of Income and Program Participation (SIPP). Multiple imputation is used to impute country of birth from the SIPP into the SCF. Descriptive statistics are used to demonstrate the reliability of the method, to estimate the prevalence of immigrants among top wealth holders, and to document patterns of asset ownership among affluent immigrants. Results: Significant numbers of top wealth holders who are usually classified as white natives may be immigrants. Many top wealth holders appear to be European and Canadian immigrants, and increasing numbers of top wealth holders are likely from Asia and Latin America as well. Results suggest that of those in the top one percent of wealth holders, approximately 3% are European and Canadian immigrants, 0.5% are from Mexico or Cuba, and 1.7% are from Asia (especially Hong Kong, Taiwan, Mainland China, and India). Ownership of key assets varies considerably across affluent immigrant groups. Conclusion: Although the percentage of top wealth holders who are immigrants is relatively small, these percentages represent large numbers of households with considerable resources and corresponding social and political influence. Evidence that the propensity to allocate wealth to real and financial assets varies
BIHOURLY DIAGRAMS OF FORBUSH DECREASES
Bihourly diagrams were made of Forbush decreases of cosmic ray intensity as observed at Uppsala from 31 Aug 56 to 31 Dec 59, at Kiruna from Nov 56 to 31 Dec 59, and at Murchison Bay from 26 Aug 57 to 30 Apr 59. (Author)
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Valder, Joshua F.; Delzer, Gregory C.; Price, Curtis V.; Sandstrom, Mark W.
2008-01-01
recoveries from the quenched reagent spiked samples that were analyzed at two different times (day 0 and day 7 or 14) can be used to determine the stability of the quenched samples held for an amount of time representative of the normal amount of time between sample collection and analysis. The comparison between the quenched reagent spiked samples and the LRSs can be used to determine if quenching samples adversely affects the analytical performance under controlled conditions. The field study began in 2004 and is continuing today (February 2008) to characterize the effect of quenching on field-matrix spike recoveries and to better understand the potential oxidation and transformation of 277 AOCs. Three types of samples were collected from 11 NAWQA Study Units across the Nation: (1) quenched finished-water samples (not spiked), (2) quenched finished-water spiked samples, and (3) nonquenched finished-water spiked samples. Percent recoveries of AOCs in quenched and nonquenched finished-water spiked samples collected during 2004-06 are presented. Comparisons of percent recoveries between quenched and nonquenched spiked samples can be used to show how quenching affects finished-water samples. A maximum of 6 surface-water and 7 ground-water quenched finished-water spiked samples paired with nonquenched finished-water spiked samples were analyzed. Analytical results for the field study are presented in two ways: (1) by surface-water supplies or ground-water supplies, and (2) by use (or source) group category for surface-water and ground-water supplies. Graphical representations of percent recoveries for the quenched and nonquenched finished-water spiked samples also are presented.
An Improved Maximum C/I Scheduling Algorithm Combined with HARQ
Anonymous
2003-01-01
It is well known that downlink traffic will be much greater than uplink traffic in 3G networks and beyond. High Speed Downlink Packet Access (HSDPA) is the solution for high-speed downlink packet service transmission in UMTS, and Maximum C/I scheduling is one of the important algorithms for its performance enhancement. An improved scheme, the Thorough Maximum C/I scheduling algorithm, is presented in this article, in which every transmitted frame has the maximum C/I. The simulation results show that the new Maximum C/I scheme outperforms the conventional scheme in both throughput and delay performance, and that the FER decreases faster as the maximum number of retransmissions increases.
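The baseline policy that such schemes build on can be sketched in a few lines of Python. This is a generic illustration of per-TTI Max C/I scheduling, not the authors' Thorough Maximum C/I algorithm or its HARQ combining; the gain matrix passed in is hypothetical:

```python
def max_ci_schedule(channel_gains):
    """Per-TTI Max C/I scheduling: in every transmission time interval (TTI),
    serve the user reporting the highest instantaneous C/I.

    channel_gains: list of per-TTI lists, channel_gains[t][u] = C/I of user u
    at TTI t. Returns the index of the served user for each TTI."""
    return [max(range(len(tti)), key=tti.__getitem__) for tti in channel_gains]
```

For example, `max_ci_schedule([[1.0, 2.0], [3.0, 0.5]])` serves user 1 in the first TTI and user 0 in the second. The well-known trade-off this policy makes is throughput over fairness: a user in a persistent fade is never scheduled.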
Thangakani, A Mary; Kumar, Sandeep; Nagarajan, R; Velmurugan, D; Gromiha, M Michael
2014-07-15
Distinguishing between amyloid fibril-forming and amorphous β-aggregating aggregation-prone regions (APRs) in proteins and peptides is crucial for designing novel biomaterials and improved aggregation inhibitors for biotechnological and therapeutic purposes. Adjacent and alternate position residue pairs in hexapeptides show distinct preferences for occurrence in amyloid fibrils and amorphous β-aggregates. These observations were converted into energy potentials that were, in turn, machine learned. The resulting tool, called Generalized Aggregation Proneness (GAP), could successfully distinguish between amyloid fibril-forming and amorphous β-aggregating hexapeptides with almost 100 percent accuracies in validation tests performed using non-redundant datasets. Accuracies of the predictions made by GAP are significantly improved compared with other methods capable of predicting either general β-aggregation or amyloid fibril-forming APRs. This work demonstrates that amino acid side chains play important roles in determining the morphological fate of β-mediated aggregates formed by short peptides. http://www.iitm.ac.in/bioinfo/GAP/. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Riachi, Marc; Himms-Hagen, Jean; Harper, Mary-Ellen
2004-12-01
Indirect calorimetry is commonly used in research and clinical settings to assess characteristics of energy expenditure. Respiration chambers in indirect calorimetry allow measurements over long periods of time (e.g., hours to days) and thus the collection of large sets of data. Current methods of data analysis usually involve the extraction of only a selected small proportion of data, most commonly the data that reflects resting metabolic rate. Here, we describe a simple quantitative approach for the analysis of large data sets that is capable of detecting small differences in energy metabolism. We refer to it as the percent relative cumulative frequency (PRCF) approach and have applied it to the study of uncoupling protein-1 (UCP1) deficient and control mice. The approach involves sorting data in ascending order, calculating their cumulative frequency, and expressing the frequencies in the form of percentile curves. Results demonstrate the sensitivity of the PRCF approach for analyses of oxygen consumption (V̇O2) as well as respiratory exchange ratio data. Statistical comparisons of PRCF curves are based on the 50th percentile values and curve slopes (H values). The application of the PRCF approach revealed that energy expenditure in UCP1-deficient mice housed and studied at room temperature (24 degrees C) is on average 10% lower (p calorimetry is increasingly used, and the PRCF approach provides a novel and powerful means for data analysis.
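The PRCF construction described above (sort ascending, cumulate, express as percentiles) is simple enough to sketch directly; `median_from_prcf` is a hypothetical helper name for reading off the 50th-percentile value on which the statistical comparison is based:

```python
def prcf(values):
    """Percent relative cumulative frequency: sort the observations in
    ascending order and pair each one with the percentage of observations
    at or below it, yielding the percentile curve as (value, percent) pairs."""
    xs = sorted(values)
    n = len(xs)
    return [(x, 100.0 * (i + 1) / n) for i, x in enumerate(xs)]

def median_from_prcf(curve):
    """50th-percentile value read off the PRCF curve: the first observation
    whose cumulative frequency reaches 50%."""
    return next(x for x, pct in curve if pct >= 50.0)
```

For four observations `[3, 1, 2, 4]` the curve is `[(1, 25.0), (2, 50.0), (3, 75.0), (4, 100.0)]` and the 50th-percentile value is 2. A downward shift of a treatment group's whole curve relative to control corresponds to the kind of small average difference the approach is designed to detect.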
Percent body fat is a better predictor of cardiovascular risk factors than body mass index
Zeng, Qiang; Dong, Sheng-Yong; Sun, Xiao-Nan; Xie, Jing; Cui, Yi [International Medical Center, Chinese PLA General Hospital, Beijing (China)
2012-04-20
The objective of the present study was to evaluate the predictive values of percent body fat (PBF) and body mass index (BMI) for cardiovascular risk factors, especially when PBF and BMI are conflicting. BMI was calculated by the standard formula and PBF was determined by bioelectrical impedance analysis. A total of 3859 ambulatory adult Han Chinese subjects (2173 males and 1686 females, age range: 18-85 years) without a history of cardiovascular diseases were recruited from February to September 2009. Based on BMI and PBF, they were classified into group 1 (normal BMI and PBF, N = 1961), group 2 (normal BMI, but abnormal PBF, N = 381), group 3 (abnormal BMI, but normal PBF, N = 681), and group 4 (abnormal BMI and PBF, N = 836). When age, gender, lifestyle, and family history of obesity were adjusted, PBF, but not BMI, was correlated with blood glucose and lipid levels. The odds ratio (OR) and 95% confidence interval (CI) for cardiovascular risk factors in groups 2 and 4 were 1.88 (1.45-2.45) and 2.06 (1.26-3.35) times those in group 1, respectively, but remained unchanged in group 3 (OR = 1.32, 95%CI = 0.92-1.89). Logistic regression models also demonstrated that PBF, rather than BMI, was independently associated with cardiovascular risk factors. In conclusion, PBF, and not BMI, is independently associated with cardiovascular risk factors, indicating that PBF is a better predictor.
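The odds ratios with 95% confidence intervals reported above are standard 2x2-table quantities; a minimal sketch using the Wald interval follows (the counts in the example are hypothetical illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

With hypothetical counts of 20/80 risk-factor-positive and 10/90 risk-factor-negative subjects, `odds_ratio_ci(20, 80, 10, 90)` gives OR = 2.25 with its Wald interval, the same form as the group-wise ORs quoted in the abstract.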
Brenzel, Logan; Schütte, Carl; Goguadze, Keti; Valdez, Werner; Le Gargasson, Jean-Bernard; Guthrie, Teresa
2016-02-01
Governments in resource-poor settings have traditionally relied on external donor support for immunization. Under the Global Vaccine Action Plan, adopted in 2014, countries have committed to mobilizing additional domestic resources for immunization. Data gaps make it difficult to map how well countries have done in spending government resources on immunization to demonstrate greater ownership of programs. This article presents findings of an innovative approach for financial mapping of routine immunization applied in Benin, Ghana, Honduras, Moldova, Uganda, and Zambia. This approach uses modified System of Health Accounts coding to evaluate data collected from national and subnational levels and from donor agencies. We found that government sources accounted for 27-95 percent of routine immunization financing in 2011, with countries that have higher gross national product per capita better able to finance requirements. Most financing is channeled through government agencies and used at the primary care level. Sustainable immunization programs will depend upon whether governments have the fiscal space to allocate additional resources. Ongoing robust analysis of routine immunization should be instituted within the context of total health expenditure tracking.
Brief communication: Body mass index, body adiposity index, and percent body fat in Asians.
Zhao, Dapeng; Li, Yonglan; Zheng, Lianbin; Yu, Keli
2013-10-01
Human obesity is a growing epidemic throughout the world. Body mass index (BMI) is commonly used as a good indicator of obesity. Body adiposity index (BAI = hip circumference (cm)/stature (m)^1.5 - 18), as a new surrogate measure, has been proposed recently as an alternative to BMI. This study, for the first time, compares BMI and BAI for predicting percent body fat (PBF; estimated from skinfolds) in a sample of 302 Buryat adults (148 men and 154 women) living in China. The BMI and BAI were strongly correlated with PBF in both men and women. The correlation coefficient between BMI and PBF was higher than that between BAI and PBF for both sexes. For the linear regression analysis, BMI better predicted PBF in both men and women; the variation around the regression lines for each sex was greater for BAI comparisons. For the receiver operating characteristic (ROC) analysis, the area under the ROC curve for BMI was higher than that for BAI for each sex, which suggests that the discriminatory capacity of BMI is higher than that of BAI. Taken together, we conclude that BMI is a more reliable indicator of PBF derived from skinfold thickness in adult Buryats. Copyright © 2013 Wiley Periodicals, Inc.
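Both indices follow directly from their definitions as given in the abstract; a minimal sketch (the example inputs are hypothetical, not from the study sample):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bai(hip_cm, height_m):
    """Body adiposity index, per the definition in the abstract:
    hip circumference (cm) / height (m)**1.5 - 18."""
    return hip_cm / height_m ** 1.5 - 18.0
```

For instance, `bmi(70, 1.75)` is about 22.9 (normal range), while `bai(95, 1.70)` is about 24.9; the study's point is that, against skinfold-derived PBF, the former tracks adiposity more reliably in this population.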
Bellen, Hugo J.; Levis, Robert W.; Liao, Guochun; He, Yuchun; Carlson, Joseph W.; Tsang, Garson; Evans-Holm, Martha; Hiesinger, P. Robin; Schulze, Karen L.; Rubin, Gerald M.; Hoskins, Roger A.; Spradling, Allan C.
2004-01-13
The Berkeley Drosophila Genome Project (BDGP) strives to disrupt each Drosophila gene by the insertion of a single transposable element. As part of this effort, transposons in more than 30,000 fly strains were localized and analyzed relative to predicted Drosophila gene structures. Approximately 6,300 lines that maximize genomic coverage were selected to be sent to the Bloomington Stock Center for public distribution, bringing the size of the BDGP gene disruption collection to 7,140 lines. It now includes individual lines predicted to disrupt 5,362 of the 13,666 currently annotated Drosophila genes (39 percent). Other lines contain an insertion at least 2 kb from others in the collection and likely mutate additional incompletely annotated or uncharacterized genes and chromosomal regulatory elements. The remaining strains contain insertions likely to disrupt alternative gene promoters or to allow gene mis-expression. The expanded BDGP gene disruption collection provides a public resource that will facilitate the application of Drosophila genetics to diverse biological problems. Finally, the project reveals new insight into how transposons interact with a eukaryotic genome and helps define optimal strategies for using insertional mutagenesis as a genomic tool.
Forbush decreases and particle acceleration in the outer heliosphere
Van Allen, J. A.; Mihalov, J. D.
1990-01-01
Consideration is given to Pioneer 10 and 11 observations of the solar flares that occurred during the period March 6-19, 1989. The observations show that Forbush decreases propagate with an essentially constant magnitude out to 47 AU and with similar magnitude at widely different ecliptic longitudes. The times of recovery from Forbush decreases become progressively greater as the radial distance increases. A scheme is proposed to explain this behavior, giving support to the hypothesis that the solar cycle modulation of the galactic cosmic ray intensity is attributable primarily to overlapping Forbush decreases that are more frequent and of greater magnitude near times of maximum solar activity.
Maximum mass of a barotropic spherical star
Fujisawa, Atsuhito; Yoo, Chul-Moon; Nambu, Yasusada
2015-01-01
The ratio of total mass $M$ to surface radius $R$ of a spherical perfect fluid ball has an upper bound, $M/R < B$. Buchdahl obtained $B = 4/9$ under the assumptions of non-increasing mass density in the outward direction and a barotropic equation of state. Barraco and Hamity decreased Buchdahl's bound to a lower value $B = 3/8$ $(< 4/9)$ by adding the dominant energy condition to Buchdahl's assumptions. In this paper, we further decrease the Barraco-Hamity bound to $B \\simeq 0.3636403$ $(< 3/8)$ by adding the subluminal (slower-than-light) condition on the sound speed. In our analysis, we solve the Tolman-Oppenheimer-Volkoff equations numerically, and the mass-to-radius ratio is maximized by variation of the mass, radius and pressure inside the fluid ball as functions of mass density.
New downshifted maximum in stimulated electromagnetic emission spectra
Sergeev, Evgeny; Grach, Savely
A new spectral maximum in spectra of stimulated electromagnetic emission of the ionosphere (SEE, [1]) was detected in experiments at the SURA facility in 2008 for pump frequencies f0 4.4-4.5 MHz, most stably for f0 = 4.3 MHz, the lowest possible pump frequency at the SURA facility. The new maximum is situated at frequency shifts ∆f ≈ -6 kHz from the pump wave frequency f0 (∆f = fSEE - f0), somewhat closer to f0 than the well-known [2,3] Downshifted Maximum (DM) in the SEE spectrum at ∆f ≈ -9 kHz. The detection and detailed study of the new feature (which we tentatively call the New Downshifted Maximum, NDM) became possible due to high frequency resolution in the spectral analysis. The following properties of the NDM are established. (i) The NDM appears in the SEE spectra simultaneously with the DM and UM features after the pump turn-on (recall that the less intensive Upshifted Maximum, UM, is situated at ∆f ≈ +(6-8) kHz [2,3]). The NDM cannot be attributed to the 1 DM [4] or Narrow Continuum Maximum (NCM, 2 [5]) SEE features, nor to the splitted DM near gyroharmonics [2]. (ii) The NDM is observed as a prominent feature at the maximum pump power of the SURA facility, P ≈ 120 MW ERP, for which the DM is almost covered by the Broad Continuum SEE feature [2,3]. For P ≈ 30-60 MW ERP the DM and NDM have comparable intensities. For lower pump power the DM prevails in the SEE spectrum, while the NDM becomes invisible, being covered by the thermal Narrow Continuum feature [2]. (iii) The NDM is exactly symmetrical to the UM relative to f0 when the former is observed, although the UM frequency offset increases up to ∆fUM ≈ +9 kHz with a decrease of the pump power down to P ≈ 4 MW ERP. The DM formation in the SEE spectrum is attributed to a three-wave interaction between the upper and lower hybrid waves in the ionosphere, and the lower hybrid frequency (≈ 7 kHz) determines the frequency offset of the DM high-frequency flank [2,6]. The detection of the NDM with
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Decreasing incidence rates of bacteremia
Nielsen, Stig Lønberg; Pedersen, C; Jensen, T G
2014-01-01
BACKGROUND: Numerous studies have shown that the incidence rate of bacteremia has been increasing over time. However, few studies have distinguished between community-acquired, healthcare-associated and nosocomial bacteremia. METHODS: We conducted a population-based study among adults with first-time bacteremia in Funen County, Denmark, during 2000-2008 (N = 7786). We reported mean and annual incidence rates (per 100,000 person-years), overall and by place of acquisition. Trends were estimated using a Poisson regression model. RESULTS: The overall incidence rate was 215.7, including 99.0 for community-acquired, 50.0 for healthcare-associated and 66.7 for nosocomial bacteremia. During 2000-2008, the overall incidence rate decreased by 23.3% from 254.1 to 198.8 (3.3% annually, p bacteremia decreased by 25.6% from 119.0 to 93.8 (3.7% annually, p
Life satisfaction decreases during adolescence.
Goldbeck, Lutz; Schmitz, Tim G; Besier, Tanja; Herschbach, Peter; Henrich, Gerhard
2007-08-01
Adolescence is a developmental phase associated with significant somatic and psychosocial changes. So far there are few studies on developmental aspects of life satisfaction. This cross-sectional study examines the effects of age and gender on adolescents' life satisfaction. 1,274 German adolescents (aged 11-16 years) participated in a school-based survey study. They completed the adolescent version of the Questions on Life Satisfaction (FLZ(M) - Fragen zur Lebenszufriedenheit), a multidimensional instrument measuring the subjective importance of and satisfaction with eight domains of general and eight domains of health-related life satisfaction. Effects of gender and age were analysed using ANOVAs. Girls reported significantly lower general (F = 5.0; p = .025) and health-related life satisfaction (F = 25.3; p life domains, there was a significant decrease in general (F = 14.8; p life satisfaction (F = 8.0; p Satisfaction with friends remained on a high level, whereas satisfaction with family relations decreased. Only satisfaction with partnership/sexuality increased slightly; however, this effect cannot compensate for the general loss of satisfaction. Decreasing life satisfaction has to be considered a developmental phenomenon. Associations with the increasing prevalence of depression and suicidal ideation during adolescence are discussed. Life satisfaction should be considered a relevant aspect of adolescents' well-being and functioning.
U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the percent land cover with potentially restorable wetlands on agricultural land for each 12-digit Hydrologic Unit (HUC) watershed in...
U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the percent of each 12-digit Hydrologic Unit (HUC) subwatershed in the contiguous U.S. with potentially restorable wetlands. Beginning...
Effects of Four Week Body Building Training on Under Skin Fat Percent of Non-Athlete Female Students
Amineh Sahranavard Gargari
2011-09-01
The aim of this study is to examine the effect of a weight-training program on subcutaneous fat percent in various body parts of female students of the Islamic Azad University of Shabestar. From among 70 students, 40 physically healthy students aged 18 to 25 who had taken the Physical Education 1 and 2 courses were selected. Subcutaneous fat thickness at the triceps, abdomen, and femur was measured with a caliper and categorized using an age-based table of estimated female fat percent. The average of three measurements taken before and after the training program was calculated as fat percent using the "Raven" method. The training program consisted of 4 weeks of weight training, each week comprising 3 sessions of 45 minutes. Results revealed that although most participants had lost weight, subcutaneous fat percent before and after the program showed a significant difference (p < 0.10); the weight-training program was thus more effective for weight control than for weight loss.
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry and landsat imagery. Optical data were...
National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral and sand overlaid on bathymetry and landsat imagery. Optical...
Sethulakshmi, N.; Anantharaman, M. R., E-mail: mraiyer@yahoo.com [Department of Physics, Cochin University of Science and Technology, Cochin 682022, Kerala (India); Al-Omari, I. A. [Department of Physics, Sultan Qaboos University, PC 123 Muscat, Sultanate of Oman (Oman); Suresh, K. G. [Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076 (India)
2014-03-03
Nearly half of the lanthanum sites in lanthanum manganites were substituted with the monovalent ion sodium, and the compound possessed a distorted orthorhombic structure. Ferromagnetic ordering at 300 K and the magnetic isotherms at different temperature ranges were analyzed to estimate the magnetic entropy variation. A magnetic entropy change of 1.5 J·kg⁻¹·K⁻¹ was observed near 300 K. An appreciable magnetocaloric effect was also observed over a wide range of temperatures near 300 K for a small magnetic field variation. Heat capacity was measured for temperatures below 300 K; the adiabatic temperature change increases with temperature, reaching a maximum of 0.62 K at 280 K.
Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer
Lee, Jae Nyung
2008-10-01
variability over the Asian monsoon region. The corresponding EOF in ModelE has a qualitatively similar structure but with less variability in the Asian monsoon region which is displaced eastward of its observed position. In both the NCEP/NCAR reanalysis and the GISS GCM, the negative anomalies associated with the NAM in the Euro-Atlantic and Aleutian island regions are enhanced in the solar minimum conditions, though the results are not statistically significant. The difference of the downward propagation of NAM between solar maximum and solar minimum is shown with the NCEP/NCAR reanalysis. For the winter NAM, a much greater fraction of stratospheric circulation perturbations penetrate to the surface in solar maximum conditions than in minimum conditions. This difference is more striking when the zonal wind direction in the tropics is from the west: when equatorial 50 hPa winds are from the west, no stratospheric signals reach the surface under solar minimum conditions, while over 50 percent reach the surface under solar maximum conditions. This work also studies the response of the tropical circulation to the solar forcing in combination with different atmospheric compositions and with different ocean modules. Four model experiments have been designed to investigate the role of solar forcing in the tropical circulation: one with the present day (PD) greenhouse gases and aerosol conditions, one with the preindustrial (PI) conditions, one with the doubled minimum solar forcing, and finally one with the hybrid-isopycnic ocean model (HYCOM). The response patterns in the tropical humidity and in the vertical motion due to solar forcing are season dependent and spatially heterogeneous. The tropical humidity responses from the model experiments are compared with the corresponding differences obtained from the NCEP/NCAR reanalysis with all years and with non-ENSO years.
Both the model and the reanalysis consistently show that the specific humidity is significantly greater in the
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Anonymous
2003-01-01
A recent survey by Friedman Billings Ramsey (FBR) shows that oil and gas companies worldwide plan to increase their exploration and development spending to US$136 billion in 2003, up 4.8 percent from the previous year, with an even larger increase expected in the second half of 2003. In 2002, the exploration and development expenditure of the surveyed oil and gas companies rose 3.6 percent compared to 2001.
PERCENT FAT MASS AND BODY MASS INDEX AS CARDIORESPIRATORY FITNESS PREDICTORS IN YOUNG ADULTS
Mira Dewi
2016-04-01
ABSTRACT: The present study aimed to analyze the association between body fatness measures, i.e. body mass index (BMI) and percent fat mass (%FM), and cardiorespiratory fitness (CRF) in young adults. Seventy-five undergraduate students aged 19-21 years were included in this cross-sectional study. Body composition was assessed by the tetrapolar Bioelectrical Impedance Analysis method; CRF was determined as VO2 max level using the Balke test, and flexibility by the sit-and-reach test. Regression tests were performed to assess the associations between the body fatness measures and CRF. The mean (SD) %FM and BMI were 25.6 (8.3)% and 22.4 (4.2) kg/m2, respectively. Both BMI and %FM were inversely associated with VO2 max and flexibility. The associations of %FM with each CRF measure were stronger (%FM-VO2 max: R2 = 0.45, p < 0.0001; %FM-flexibility: R2 = 0.16, p < 0.0001) than those of BMI (BMI-VO2 max: R2 = 0.12, p = 0.002; BMI-flexibility: R2 = 0.07, p < 0.0001). Including gender as a predictor variable greatly improved almost all associations. We suggest that %FM is a better predictor of VO2 max than BMI. Further studies are needed to elucidate the relationships of body fatness measures, adjusted for potential confounding factors, with CRF measures other than VO2 max. Keywords: body mass index, cardiorespiratory fitness, percent fat mass
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
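The Poisson-likelihood fitting that CORA performs can be illustrated with a toy version: model the spectrum as a constant background plus a Gaussian line, and choose the line amplitude that minimizes the Poisson negative log-likelihood. This is a simplified sketch of the general idea, not CORA's actual implementation; all names and numbers are illustrative.

```python
import math

def model(x, background, amplitude, center, sigma):
    """Expected counts per bin: flat background plus a Gaussian line."""
    return background + amplitude * math.exp(-0.5 * ((x - center) / sigma) ** 2)

def neg_log_likelihood(counts, expected):
    """Poisson negative log-likelihood, dropping the constant n! term."""
    return sum(m - n * math.log(m) for n, m in zip(counts, expected))

def fit_amplitude(counts, xs, background, center, sigma, grid):
    """Grid search over the line amplitude for the maximum-likelihood value."""
    return min(grid, key=lambda a: neg_log_likelihood(
        counts, [model(x, background, a, center, sigma) for x in xs]))

# Synthetic "measurement": expected counts rounded to integers,
# so the recovered amplitude should land near the true value of 50.
xs = list(range(21))
counts = [round(model(x, 2.0, 50.0, 10.0, 2.0)) for x in xs]
grid = [a / 2.0 for a in range(0, 201)]   # candidate amplitudes 0.0 .. 100.0
a_hat = fit_amplitude(counts, xs, 2.0, 10.0, 2.0, grid)
```

In the real tool the optimization is done via a fixed-point equation rather than a grid, but the objective being minimized is the same Poisson likelihood.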
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC-coefficient-level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Andra Naresh Kumar Reddy; Dasari Karuna Sagar
2015-01-01
Resolution for the modified point spread function (PSF) of asymmetrically apodized optical systems has been analysed using a new parameter, half-width at half-maximum (HWHM), in addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of half-maximum energy in the centroid of the modified PSF has been investigated in terms of the HWHM on the good side and the HWHM on the bad side. We observed that as the asymmetry in the PSF increases, the FWHM of the main peak first increases and then decreases, aided by the degree of amplitude apodization in the central region of the slit functions. The HWHM of the resultant PSF characterizes the resolution of the detection system: it is essentially a line of projection measuring the width of the main lobe at its half-maximum position from the diffraction centre, and it has been computed for various amplitude and antiphase apodizations of the slit aperture. We noticed that the HWHM on the good side decreases at the cost of an increased HWHM on the bad side in the presence of asymmetric apodization.
Liu, Nanfeng; Treitz, Paul
2016-10-01
In this study, digital images collected at a study site in the Canadian High Arctic were processed and classified to examine the spatial-temporal patterns of percent vegetation cover (PVC). To obtain the PVC of different plant functional groups (i.e., forbs, graminoids/sedges and mosses), field near infrared-green-blue (NGB) digital images were classified using an object-based image analysis (OBIA) approach. The PVC analyses comparing different vegetation types confirmed: (i) the polar semi-desert exhibited the lowest PVC with a large proportion of bare soil/rock cover; (ii) the mesic tundra cover consisted of approximately 60% mosses; and (iii) the wet sedge consisted almost exclusively of graminoids and sedges. As expected, the PVC and green normalized difference vegetation index (GNDVI; (RNIR - RGreen)/(RNIR + RGreen)), derived from field NGB digital images, increased during the summer growing season for each vegetation type: i.e., ∼5% (0.01) for polar semi-desert; ∼10% (0.04) for mesic tundra; and ∼12% (0.03) for wet sedge respectively. PVC derived from field images was found to be strongly correlated with WorldView-2 derived normalized difference spectral indices (NDSI; (Rx - Ry)/(Rx + Ry)), where Rx is the reflectance of the red edge (724.1 nm) or near infrared (832.9 nm and 949.3 nm) bands; Ry is the reflectance of the yellow (607.7 nm) or red (658.8 nm) bands with R2's ranging from 0.74 to 0.81. NDSIs that incorporated the yellow band (607.7 nm) performed slightly better than the NDSIs without, indicating that this band may be more useful for investigating Arctic vegetation that often includes large proportions of senescent vegetation throughout the growing season.
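The GNDVI and NDSI used above share one normalized-difference form. A minimal sketch (the reflectance values below are made up for illustration, not taken from the study):

```python
def ndsi(rx, ry):
    """Normalized difference spectral index: (Rx - Ry) / (Rx + Ry)."""
    return (rx - ry) / (rx + ry)

# GNDVI is the same form applied to NIR and green reflectances.
r_nir, r_green = 0.45, 0.09      # hypothetical band reflectances
gndvi = ndsi(r_nir, r_green)     # (0.45 - 0.09) / (0.45 + 0.09) = 2/3
```

Any of the red-edge/NIR versus yellow/red band pairings discussed above plug into the same function; only the choice of Rx and Ry changes.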
Percent recovery of low influent concentrations of microorganism surrogates in small sand columns
Stevenson, M. E.; Blaschke, A. P.
2012-04-01
In order to develop a dependable method to calculate the setback distance of a drinking water well from a potential point of microbiological contamination, surrogates are used to perform field tests to avoid using pathogenic micro-organisms. One such surrogate used to model the potential travel time of microbial contamination is synthetic microspheres. The goal of this study is to examine the effect of differing influent colloid concentrations on the percent recovery of microbial surrogates after passing through a soil column. Similar studies have been done to investigate blocking of ideal attachment sites using concentrations between 10⁶ and 10¹⁰ particles ml⁻¹. These high concentrations were necessary due to the detection limit of the measuring technique used; however, our measuring technique allows us to test input concentrations ranging from 10¹ to 10⁶ particles ml⁻¹. These low concentrations are more similar to the concentrations of pathogenic microorganisms present in nature. We have tested the enumeration of 0.5 μm microspheres using a solid-phase cytometer and evaluated their transport in small sand columns. Fluorescent microspheres were purchased for this study with carboxylated surfaces. The soil columns consist of Plexiglas tubes, 30 cm long and 7 cm in diameter, both filled with the same coarse sand. Bromide was used as a conservative tracer, to estimate pore-water velocity and dispersivity, and bromide concentrations were analysed using ion chromatography and bromide probes. Numerical modelling was done using CXTFIT and HYDRUS-1D software programs. The 0.5 μm beads were enumerated in different environmental waters using solid-phase cytometry and compared to counts in sterile water in order to confirm the accuracy of the method. The solid-phase cytometer was able to differentiate the 0.5 μm beads from naturally present autofluorescent particles and bacteria, and therefore, is an appropriate method to enumerate this surrogate.
Evaluation of the BOD POD for estimating percent fat in female college athletes.
Vescovi, Jason D; Hildebrandt, Leslie; Miller, Wayne; Hammer, Roger; Spiller, Amanda
2002-11-01
The purpose of this investigation was to examine the accuracy of percent body fat (%BF) estimates obtained by air displacement plethysmography (ADP) using the BOD POD Body Composition System compared with hydrostatic weighing (HW) in a group of female college athletes (n = 80). In addition, %BF estimates by skinfold measures (SF) were also obtained for comparison. A lean subset (n = 39) of the sample was also examined. Mean %BF estimated for the entire sample by ADP (21.2 +/- 5.9%) was significantly greater than that determined by HW (19.4 +/- 6.4%) and SF (18.8 +/- 5.5%). Results from the lean subset also revealed that %BF determined by ADP (17.1 +/- 3.7%) was significantly higher than %BF estimates by HW (14.3 +/- 2.8%) and SF (15.2 +/- 3.2%). The regression equation for the entire sample (%BF HW = 0.937 × %BF ADP - 0.452, r² = 0.73, standard error of estimate (SEE) = 3.34) did not differ from the line of identity. In contrast, the line of identity differed significantly from the regression equation for the lean subset of female athletes (%BF HW = 0.48 × %BF ADP + 6.115, r² = 0.41, SEE = 2.18). The results of this investigation indicate that ADP significantly overestimated %BF by 8% in female athletes and by 16% for a leaner subset of the sample compared with HW. It appears that %BF estimates by SF may be more accurate than those obtained by ADP for female college athletes, regardless of body composition. Coaches and trainers evaluating body composition should consider the use of SF before ADP when measuring %BF in female college athletes. Sports scientists should continue to examine the possible gender and body composition bias for ADP.
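The whole-sample regression reported above can be applied directly to convert a BOD POD reading to a hydrostatic-weighing-scale estimate; a small sketch using the published coefficients:

```python
def hw_from_adp(pbf_adp):
    """Predicted hydrostatic-weighing %BF from an ADP (BOD POD) %BF,
    using the whole-sample regression reported in the study:
    %BF(HW) = 0.937 * %BF(ADP) - 0.452  (r^2 = 0.73, SEE = 3.34)."""
    return 0.937 * pbf_adp - 0.452

# Applying it to the sample mean ADP estimate recovers the HW mean.
predicted_hw = hw_from_adp(21.2)   # ≈ 19.4, the reported HW mean
```

Note the SEE of 3.34 percentage points: individual predictions carry substantial uncertainty even though the group means line up.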
Measuring adiposity in patients: the utility of body mass index (BMI), percent body fat, and leptin.
Nirav R Shah
BACKGROUND: Obesity is a serious disease that is associated with an increased risk of diabetes, hypertension, heart disease, stroke, and cancer, among other diseases. The United States Centers for Disease Control and Prevention (CDC) estimates a 20% obesity rate in the 50 states, with 12 states having rates of over 30%. Currently, the body mass index (BMI) is most commonly used to determine adiposity. However, BMI is an inaccurate obesity classification method that underestimates the epidemic and contributes to failed treatment. In this study, we examine the effectiveness of precise biomarkers and dual-energy X-ray absorptiometry (DXA) to help diagnose and treat obesity. METHODOLOGY/PRINCIPAL FINDINGS: A cross-sectional study of adults with BMI, DXA, fasting leptin and insulin results, measured from 1998-2009. Of the participants, 63% were female, 37% male, 75% white, with a mean age of 51.4 (SD = 14.2). Mean BMI was 27.3 (SD = 5.9) and mean percent body fat was 31.3% (SD = 9.3). BMI characterized 26% of the subjects as obese, while DXA indicated that 64% of them were obese. 39% of the subjects were classified as non-obese by BMI but were found to be obese by DXA. BMI misclassified 25% of men and 48% of women. Meanwhile, a strong relationship was demonstrated between increased leptin and increased body fat. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate the prevalence of false-negative BMIs, increased misclassifications in women of advancing age, and the reliability of gender-specific revised BMI cutoffs. BMI underestimates obesity prevalence, especially in women with high leptin levels (>30 ng/mL). Clinicians can use leptin-revised levels to enhance the accuracy of BMI estimates of percent body fat when DXA is unavailable.
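The BMI-versus-DXA disagreement described above reduces to two different classification rules applied to the same person. A minimal sketch (the %BF cut-offs are commonly used conventions, assumed here for illustration rather than taken from this study):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def obese_by_bmi(bmi_value, cutoff=30.0):
    """Conventional BMI obesity rule (WHO cut-off of 30 kg/m^2)."""
    return bmi_value >= cutoff

def obese_by_body_fat(percent_fat, sex):
    """Adiposity rule: >25% body fat for men, >35% for women
    (commonly cited thresholds, assumed for illustration)."""
    return percent_fat > (25.0 if sex == "M" else 35.0)

# A subject can be non-obese by BMI yet obese by body fat -
# the "false-negative BMI" pattern the study reports.
b = bmi(80.0, 1.75)                 # about 26.1 kg/m^2
false_negative = (not obese_by_bmi(b)) and obese_by_body_fat(32.0, "M")
```

The study's point is that the second rule (via DXA) flags many people the first rule misses, particularly women.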
Skedros, John G; Kiser, Casey J; Mendenhall, Shaun D
2011-01-01
Using circularly polarized light microscopy, we described a weighted-scoring method for quantifying regional distributions of six secondary osteon morphotypes (Skedros et al.: Bone 44 (2009) 392-403). This osteon morphotype score (MTS) strongly correlated with "tension" and "compression" cortices produced by habitual bending. In the present study, we hypothesized that the osteon MTS is superior to a relatively simpler method based on the percent prevalence (PP) of these osteon morphotypes. This was tested in proximal femoral diaphyses of adult chimpanzees and habitually bent bones: calcanei from sheep, deer, and horses, radii from sheep and horses, and third metacarpals (MC3s) from horses. Sheep tibiae were examined because their comparatively greater torsion/shear would not require regional variations in osteon morphotypes. Predominant collagen fiber orientation (CFO), a predictor of regionally prevalent/predominant strain mode, was quantified as image gray levels (birefringence). Ten PP calculations were conducted. Although PP calculations were similar to the osteon MTS in corroborating CFO differences between "tension" and "compression" cortices of the chimpanzee femora and most of the habitually bent bones, PP calculations failed to show a compression/tension difference in equine MC3s and sheep radii. With the exception of the prevalence of the "distributed" osteon morphotype, correlations of PP calculations with CFO were weak and/or negative. By contrast, the osteon MTS consistently showed positive correlations with predominant CFO. Compared with the osteon MTS and predominant CFO, regional variations in PP of osteon morphotypes are not stronger predictors of nonuniform strain distributions produced by bending.
Relationships between body size and percent body fat among Melanesians in Vanuatu.
Dancause, Kelsey Needham; Vilar, Miguel; DeHuff, Christa; Wilson, Michelle; Soloway, Laura E; Chan, Chim; Lum, J Koji; Garruto, Ralph M
2010-01-01
Obesity is a global epidemic, and measures to define it must be appropriate for diverse populations for accurate assessment of worldwide risk. Obesity refers to excess body fatness, but is more commonly defined by body mass index (BMI). Body composition varies among populations: Asians have higher percent body fat (%BF), and Pacific Islanders lower %BF, at a given BMI compared to Europeans. Many researchers thus propose higher BMI cut-off points for obesity among Pacific Islanders and lower cut-offs for Asians. Because of the great genetic diversity in the Asia-Pacific region, more studies analyzing associations between BMI and %BF among diverse populations remain necessary. We measured height; weight; triceps, subscapular, and suprailiac skinfolds; waist and hip circumference; and %BF by bioelectrical impedance among 546 adult Melanesians from Vanuatu in the South Pacific. We analyzed relationships among anthropometric measurements and compared them to measurements from other populations in the Asia-Pacific region. BMI was a relatively good predictor of %BF among our sample. Based on regression analyses, the BMI value associated with obesity defined by %BF (>25% for men, >35% for women) at age 40 was 27.9 for men and 27.8 for women. This indicates a need for a more nuanced definition of obesity than provided by the common BMI cut-off value of 30. Rather than using population-specific cut-offs for Pacific Islanders, we suggest the World Health Organization's public health action cut-off points (23, 27.5, 32.5, 37.5), which enhance the precision of assessments of population-wide obesity burdens while still allowing for international comparison.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Decreasing Fires in Mediterranean Europe.
Marco Turco
Forest fires are a serious environmental hazard in southern Europe. Quantitative assessment of recent trends in fire statistics is important for assessing the possible shifts induced by climate and other environmental/socioeconomic changes in this area. Here we analyse recent fire trends in Portugal, Spain, southern France, Italy and Greece, building on a homogenized fire database integrating official fire statistics provided by several national/EU agencies. During the period 1985-2011, the total annual burned area (BA) displayed a general decreasing trend, with the exception of Portugal, where a heterogeneous signal was found. Considering all countries globally, we found that BA decreased by about 3020 km2 over the 27-year-long study period (i.e. about -66% of the mean historical value). These results are consistent with those obtained on longer time scales when data were available, also yielding predominantly negative trends in Spain and France (1974-2011) and a mixed trend in Portugal (1980-2011). Similar overall results were found for the annual number of fires (NF), which globally decreased by about 12600 in the study period (about -59%), except for Spain where, excluding the provinces along the Mediterranean coast, an upward trend was found for the longer period. We argue that the negative trends can be explained, at least in part, by an increased effort in fire management and prevention after the big fires of the 1980s, while positive trends may be related to recent socioeconomic transformations leading to more hazardous landscape configurations, as well as to the observed warming of recent decades. We stress the importance of fire data homogenization prior to analysis, in order to alleviate spurious effects associated with non-stationarities in the data due to temporal variations in fire detection efforts.
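The kind of trend estimate reported above can be sketched as a least-squares linear fit to an annual burned-area series, with the total change over the window read off the slope. The series below is synthetic illustration data, not the paper's homogenized database.

```python
# Sketch: least-squares linear trend on an annual burned-area series,
# as in trend analyses of fire statistics. The data are synthetic.
import numpy as np

years = np.arange(1985, 2012)           # the 27-year study window, 1985-2011
ba = 5000.0 - 110.0 * (years - 1985)    # synthetic, linearly decreasing series

slope, intercept = np.polyfit(years, ba, 1)      # km^2 per year
total_change = slope * (years[-1] - years[0])    # change over the full period
```

The paper's headline figure (about -3020 km2 over 1985-2011) is exactly this quantity: trend slope times the length of the study window.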
Technologies for Decreasing Mining Losses
Valgma, Ingo; Väizene, Vivika; Kolats, Margit; Saarnak, Martin
2013-12-01
In case of stratified deposits like the oil shale deposit in Estonia, mining losses depend on mining technologies. Current research focuses on extraction and separation possibilities of mineral resources. Selective mining, selective crushing and separation tests have been performed, showing possibilities of decreasing mining losses. Rock crushing and screening process simulations were used for optimizing rock fractions. In addition, mine backfilling, fine separation, and optimized drilling and blasting have been analyzed. All tested methods show potential; their applicability depends on how the mineral is used, which in turn depends on the utilization technology. Open questions remain concerning the stability of the material flow and the influence of quality fluctuations on the final yield.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
The Correlation between the Percent of CD3-CD56+ Cells and NK Precursor Function
Ahmad Gharehbaghian
2006-12-01
Flow cytometry showed no correlation between the NKpf (natural killer precursor frequency) and the percent of CD3-CD56+ cells expressed after five days, confirming that CD56 is inadequate as a unique marker for functional NK cells.
Hui, Wang; Jiahui, Liu; Hongshuai, Yang; Jin, Liu; Zhigang, Liu
2014-04-01
The combined effects of temperature and ammonia concentration on the percent fertilization and percent hatching in Crassostrea ariakensis were examined under laboratory conditions using the central composite design and response surface methodology. The results indicated: (1) The linear effects of temperature and ammonia concentration on the percent fertilization were significant (P<0.05), while the interaction effect on the percent fertilization was not significant (P>0.05). (2) The linear effect of temperature on the percent hatching was highly significant (P<0.01), while that of ammonia was not (P>0.05). The quadratic effects of temperature and ammonia concentration on the percent hatching were highly significant (P<0.01). Temperature was more important than ammonia in influencing the fertilization and hatching in C. ariakensis. (3) The model equations of the percent fertilization and hatching with respect to temperature and ammonia concentration were established, with coefficients of determination R(2)=99.4% and 99.76%, respectively. The lack-of-fit test showed these models to be adequate. The predictive coefficients of determination for the two model equations were as high as 94.6% and 98.03%, respectively, showing that they could be used for practical projection. (4) Via the statistical simultaneous optimization technique, the optimal factor-level combination, 25°C/0.038 mg mL(-1), was derived, at which the greatest percent fertilization (95.25%) and hatching (83.26%) were achieved, with the desirability being 97.81%. Our results may provide useful guidelines for the successful reproduction of C. ariakensis.
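The response-surface step described above amounts to fitting a full quadratic model z = b0 + b1·T + b2·A + b11·T² + b22·A² + b12·T·A by least squares over the design points. The sketch below uses synthetic design points and coefficients (not the paper's data) to show the model form being fitted.

```python
# Sketch: fitting a full quadratic response surface (as in central
# composite design / RSM) by linear least squares. Design points and
# "true" coefficients below are synthetic illustration values.
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(15.0, 35.0, 30)       # temperature levels (synthetic)
A = rng.uniform(0.0, 0.08, 30)        # ammonia levels, mg/mL (synthetic)

true = np.array([5.0, 6.0, -20.0, -0.12, -300.0, 1.5])  # b0,b1,b2,b11,b22,b12
X = np.column_stack([np.ones_like(T), T, A, T**2, A**2, T*A])
z = X @ true                           # noise-free synthetic response

coef, *_ = np.linalg.lstsq(X, z, rcond=None)   # recovers the coefficients
```

With the fitted surface in hand, the "simultaneous optimization" step in the abstract is simply locating the (T, A) pair that maximizes the predicted responses.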
Rigidity spectrum of Forbush decrease
Sakakibara, S.; Munakata, K.; Nagashima, K.
1985-01-01
Using data from neutron monitors and muon telescopes at surface and underground stations, the average rigidity spectrum of Forbush decreases (Fds) during the period 1978-1982 was obtained. Thirty-eight Fd events are classified into two groups, Hard Fd and Soft Fd, according to the size of the Fd at Sakashita station. It is found that a spectral form of fractional-power type, P^(-γ1) (P+Pc)^(-γ2), is more suitable for the present purpose than a power-exponential type or a power type with an upper limiting rigidity. The best-fitted spectrum of fractional-power type is given by γ1 = 0.37, γ2 = 0.89 and Pc = 10 GV for Hard Fds, and γ1 = 0.77, γ2 = 1.02 and Pc = 14 GV for Soft Fds.
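The fractional-power spectral form and the quoted best-fit parameters can be evaluated directly. The normalization is arbitrary here (the abstract gives only the functional form), so the sketch compares relative values at one rigidity.

```python
# Sketch: the fractional-power rigidity spectrum reported for Forbush
# decreases, f(P) proportional to P^(-g1) * (P+Pc)^(-g2), evaluated with
# the best-fit parameters quoted in the abstract. Unnormalized.

def fd_spectrum(P, gamma1, gamma2, Pc):
    """Unnormalized fractional-power spectral form, P and Pc in GV."""
    return P ** (-gamma1) * (P + Pc) ** (-gamma2)

# Hard Fd: gamma1=0.37, gamma2=0.89, Pc=10 GV
# Soft Fd: gamma1=0.77, gamma2=1.02, Pc=14 GV
hard = fd_spectrum(10.0, 0.37, 0.89, 10.0)
soft = fd_spectrum(10.0, 0.77, 1.02, 14.0)
```

As expected from the classification, the Soft Fd parameters give a spectrum that falls off faster with rigidity, so at 10 GV its (unnormalized) value is smaller than the Hard Fd one.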
Hyperhomocysteinemia decreases bone blood flow
Neetu T
2011-01-01
Elevated plasma levels of homocysteine (Hcy), known as hyperhomocysteinemia (HHcy), are associated with osteoporosis. A decrease in bone blood flow is a potential cause of compromised bone mechanical properties. Therefore, we hypothesized that HHcy decreases bone blood flow and biomechanical properties. To test this hypothesis, male Sprague-Dawley rats were treated with Hcy (0.67 g/L) in drinking water for 8 weeks. Age-matched rats served as controls. At the end of the treatment period, the rats were anesthetized. Blood samples were collected from experimental or control rats. Biochemical turnover markers (body weight, Hcy, vitamin B12, and folate) were measured. Systolic blood pressure was measured from the right carotid artery. Tibia blood flow was measured by laser Doppler flow probe. The results indicated that Hcy levels were significantly higher in the Hcy-treated group than in control rats, whereas vitamin B12 levels were lower in the Hcy-treated group compared with control rats. There was no significant difference in folate concentration and blood pressure in Hcy-treated versus control rats. The tibial blood flow index of the control group was significantly higher (0.78 ± 0.09 flow units) compared with the Hcy-treated group (0.51 ± 0.09). The tibial mass was 1.1 ± 0.1 g in the control group and 0.9 ± 0.1 g in the Hcy-treated group. The tibia bone density was unchanged in Hcy-treated rats. These results suggest that Hcy causes a reduction in bone blood flow, which contributes to compromised bone biomechanical properties. Keywords: homocysteine, tibia, bone density
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
(no author listed)
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Mckeown, A B; Belles, Frank E
1954-01-01
Total vapor pressures were measured for 16 acid mixtures of the ternary system nitric acid, nitrogen dioxide, and water within the temperature range 10 degrees to 60 degrees Celsius, and with the composition range 71 to 89 weight percent nitric acid, 7 to 20 weight percent nitrogen dioxide, and 1 to 10 weight percent water. Heats of vaporization were calculated from the vapor pressure measurements for each sample for the temperatures 25, 40, and 60 degrees Celsius. The ullage of the apparatus used for the measurements was 0.46. Ternary diagrams showing isobars as a function of composition of the system were constructed from experimental and interpolated data for the temperatures 25, 40, 45, and 60 degrees C and are presented herein.
Modeling the Maximum Spreading of Liquid Droplets Impacting Wetting and Nonwetting Surfaces.
Lee, Jae Bong; Derome, Dominique; Guyer, Robert; Carmeliet, Jan
2016-02-09
Droplet impact has been imaged on different rigid, smooth, and rough substrates for three liquids with different viscosity and surface tension, with special attention to the lower impact velocity range. Of all studied parameters, only surface tension and viscosity, thus the liquid properties, clearly play a role in terms of the attained maximum spreading ratio of the impacting droplet. Surface roughness and type of surface (steel, aluminum, and parafilm) slightly affect the dynamic wettability and maximum spreading at low impact velocity. The dynamic contact angle at maximum spreading has been identified to properly characterize this dynamic spreading process, especially at low impact velocity where dynamic wetting plays an important role. The dynamic contact angle is found to be generally higher than the equilibrium contact angle, showing that statically wetting surfaces can become less wetting or even nonwetting under dynamic droplet impact. An improved energy balance model for maximum spreading ratio is proposed based on a correct analytical modeling of the time at maximum spreading, which determines the viscous dissipation. Experiments show that the time at maximum spreading decreases with impact velocity depending on the surface tension of the liquid, and a scaling with maximum spreading diameter and surface tension is proposed. A second improvement is based on the use of the dynamic contact angle at maximum spreading, instead of quasi-static contact angles, to describe the dynamic wetting process at low impact velocity. This improved model showed good agreement compared to experiments for the maximum spreading ratio versus impact velocity for different liquids, and a better prediction compared to other models in literature. In particular, scaling according to We^(1/2) is found invalid for low velocities, since the curves bend over to higher maximum spreading ratios due to the dynamic wetting process.
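The We^(1/2) scaling discussed above is built on the Weber number, the standard dimensionless group comparing droplet inertia to surface tension, We = ρv²D/σ. The definition is standard; the droplet values below are illustrative, not measurements from the study.

```python
# Sketch: the Weber number We = rho * v^2 * D / sigma behind the
# We^(1/2) spreading scaling. Example values are illustrative.

def weber(rho: float, v: float, d: float, sigma: float) -> float:
    """Weber number: inertia vs. surface tension (SI units)."""
    return rho * v ** 2 * d / sigma

# Water-like droplet, 2 mm diameter, impacting at 1 m/s:
we = weber(rho=998.0, v=1.0, d=2e-3, sigma=0.072)
spreading_scale = we ** 0.5   # the scaling the abstract finds invalid at low v
```

The paper's point is that at low impact velocity the measured maximum spreading ratio departs from this We^(1/2) prediction because dynamic wetting dominates.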
SAVANNAH RIVER SITE TANK CLEANING: CORROSION RATE FOR ONE VERSUS EIGHT PERCENT OXALIC ACID SOLUTION
Ketusky, E.; Subramanian, K.
2011-01-20
Until recently, the use of oxalic acid for chemically cleaning the Savannah River Site (SRS) radioactive waste tanks focused on using concentrated 4 and 8-wt% solutions. Recent testing and research on applicable dissolution mechanisms have concluded that under appropriate conditions, dilute solutions of oxalic acid (i.e., 1-wt%) may be more effective. Based on the need to maximize cleaning effectiveness, coupled with the need to minimize downstream impacts, SRS is now developing plans for using a 1-wt% oxalic acid solution. A technology gap associated with using a 1-wt% oxalic acid solution was a dearth of suitable corrosion data. Assuming oxalic acid's passivation of carbon steel is proportional to the free oxalate concentration, the general corrosion rate (CR) from a 1-wt% solution may not be bounded by that from an 8-wt% solution. Therefore, after developing the test strategy and plan, corrosion testing was performed. Starting with the envisioned process-specific baseline solvent, a 1-wt% oxalic acid solution with sludge (limited to Purex-type sludge simulant for this initial effort) at 75 C and agitated, the CR was determined from the measured weight loss of the exposed coupon. Environmental variations tested were: (a) inclusion of sludge in the test vessel or assuming a pure oxalic acid solution; (b) acid solution temperature maintained at 75 or 45 C; and (c) agitation of the acid solution or stagnant conditions. Application of select electrochemical testing (EC) explored the impact of each variation on the passivation mechanisms and confirmed the CR. The 1-wt% results were then compared to those from the 8-wt%. The immersion coupons showed that the maximum time-averaged CR for a 1-wt% solution with sludge was less than 25-mils/yr for all conditions. For an agitated 8-wt% solution with sludge, the maximum time-averaged CR was about 30-mils/yr at 50 C, and 86-mils/yr at 75 C. Both the 1-wt% and the 8-wt% testing demonstrated that if the sludge was removed
The directed flow maximum near cs = 0
Brachmann, J.; Dumitru, A.; Stöcker, H.; Greiner, W.
2000-07-01
We investigate the excitation function of quark-gluon plasma formation and of directed in-plane flow of nucleons in the energy range of the BNL-AGS and for the E_kin^Lab = 40 AGeV Pb + Pb collisions performed recently at the CERN-SPS. We employ the three-fluid model with dynamical unification of kinetically equilibrated fluid elements. Within our model with a first-order phase transition at high density, droplets of QGP coexisting with hadronic matter are produced already at BNL-AGS energies, E_kin^Lab ≃ 10 AGeV. A substantial decrease of the isentropic velocity of sound, however, requires higher energies, around E_kin^Lab = 40 AGeV. We show the effect on the flow of nucleons in the reaction plane. According to our model calculations, kinematic requirements and EoS effects work hand-in-hand at E_kin^Lab = 40 AGeV to allow the observation of the dropping velocity of sound via an increase of the directed flow around midrapidity as compared to top BNL-AGS energy.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Llanes Iglesias, José
2007-09-01
The objective of this study was to evaluate five moist diets with different inclusion levels (40, 50, 60, 70 and 80%) of chemical fish silage, prepared with 2% sulfuric acid (98%, weight/volume), as the sole animal protein source, compared with a commercial feed (20% fish meal) for feeding Clarias gariepinus fingerlings. A completely randomized experimental design was used over 60 days. The results showed that the indicators of growth, feed utilization, and survival did not differ significantly (P>0.05) up to 80% inclusion of this raw material, representing a saving of 302.00 USD per tonne of fish produced.
2010-04-01
... homebuyer payment can a recipient charge a low-income rental tenant or homebuyer residing in housing units... Activities § 1000.124 What maximum and minimum rent or homebuyer payment can a recipient charge a low-income... charge a low-income rental tenant or homebuyer rent or homebuyer payments not to exceed 30 percent of...
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
The ratios um/√H0 and um/√S0 are greater than 2.0 and 1.3, respectively. In addition to um/√H0 and um/√S0, we collected 105 companion values of the α angle for runout positions defined by tan α = H0/X0, where X0 is the horizontal reach calculated from the start position to the stop position of the tip of the avalanche. The α angle is a very simple measure of runout introduced by Scheidegger (1973) for rock avalanches. McClung and Mears (1991) collected α angles from more than 500 paths with maximum runout estimated for return periods on the order of 100 years, and the range of values was 18°-42°, which is close to that found here (20°-45°). The results showed that runout increases (α decreases) with maximum speed, but there is considerable scatter in the relationship. The Spearman rank correlation is -0.54 (p < 0.005). Rank correlations of α vs. um/√S0 and um/√H0 are -0.44 and -0.56 (both with p < 0.005).
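The runout measure defined above, tan α = H0/X0, is a one-line computation: the angle of the straight line from start position to the tip of the deposit. The input values below are illustrative, not measurements from the dataset.

```python
# Sketch: the runout angle alpha defined by tan(alpha) = H0 / X0, where
# H0 is the vertical drop and X0 the horizontal reach of the avalanche.
# Example values are illustrative.
import math

def alpha_deg(h0: float, x0: float) -> float:
    """Runout angle alpha in degrees from drop H0 and horizontal reach X0."""
    return math.degrees(math.atan2(h0, x0))

# A drop of 800 m over a 1600 m horizontal reach: tan(alpha) = 0.5
a = alpha_deg(800.0, 1600.0)
```

An angle of about 26.6° from these illustrative values sits comfortably inside the 20°-45° range reported for the paths in this study; longer runout (larger X0 for the same H0) gives a smaller α.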
Macrosegregation during Plane Front Solidification of Cesium Iodide-1 wt Percent Thallium Iodide Alloy
Sidawi, Ibrahim M. S.
Macrosegregation produced during directional solidification of CsI-1 wt% TlI by vertical Bridgman technique has been examined in crucibles of varying diameter, from 0.5 to 2.0 cm. Phase diagram and temperature dependence of the thermal conductivity have been determined. The experimentally observed liquid-solid interface shape and the fluid flow behavior have been compared with that computed from the commercially available code FIDAP. Thallium iodide content of the alloy was observed to increase along the length of the directionally solidified specimens, resulting in continuously decreasing light output. The experimentally observed solutal distribution agrees with predictions from the boundary layer model of Favier. The observed macrosegregation behavior suggests that there is a significant convection in the melt even in the smallest crucible diameter of 0.5 cm.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
Hegde S
1996-10-01
Percent body fat content was determined in apparently normal healthy men, 30 young (17-20 yrs) and 30 middle-aged (30-46 yrs), by measuring skinfolds and girths. None of the subjects were athletes or did regular physical exercise. Body density was calculated using the mean of the four skinfold measurements as per the equations advocated by Durnin and Womersley, while percent body fat content was calculated from the body density by Siri's equation. The mean percent body fat content by this method was 15.87 ± 3.85% in young men and 24.75 ± 3.55% in middle-aged men. Ten percent of the young subjects and 90% of the middle-aged subjects were found to be obese. Percent body fat content was also calculated from the girth measurements as advocated by McArdle et al. The mean percent body fat content with this method was 14.91 ± 3.82% in young men and 24.30 ± 3.35% in middle-aged men. On comparison, the difference in percent body fat content calculated by the two methods was found to be significant in young men but not in middle-aged men. The correlation coefficient between the girth method and the skinfold method was 0.95 for young men and 0.90 for middle-aged men. Therefore, we advocate that girth measurements can be used to determine percent body fat content, the main advantages being simplicity of technique and the requirement of only inexpensive instruments.
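The second step of the pipeline above, density to percent fat, uses Siri's equation, %BF = (4.95/D - 4.50) × 100 with D in g/cm³; those constants are standard. The density value in the example is an illustrative assumption, and the Durnin-Womersley skinfold-to-density step that precedes it in the study is not reproduced here.

```python
# Sketch: percent body fat from whole-body density via Siri's equation,
# %BF = (4.95 / D - 4.50) * 100, D in g/cm^3. Siri's constants are
# standard; the example density below is an illustrative assumption.

def siri_percent_fat(density_g_cm3: float) -> float:
    """Percent body fat from whole-body density (Siri's equation)."""
    return (4.95 / density_g_cm3 - 4.50) * 100.0

# An illustrative density of 1.0626 g/cm^3 yields roughly 15.8% fat,
# comparable to the young-men mean reported in the abstract.
pf = siri_percent_fat(1.0626)
```

Note the inverse relationship: denser bodies (more lean mass) give lower percent fat, and a density of 1.10 g/cm³ corresponds to 0% fat under Siri's constants.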
Kirkegaard, Poul Henning; Nielsen, Søren R.K.; Micaletti, R. C.;
This paper considers estimation of the Maximum Softening Damage Indicator (MSDI) using time-frequency system identification techniques for an RC structure subjected to earthquake excitation. The MSDI relates the global damage state of the RC structure to the relative decrease of the fundamental eigenfrequency.
Hyland, R. E.; Wohl, M. L.; Finnegan, P. M.
1973-01-01
A preliminary study was conducted of the feasibility of space disposal of the actinide class of radioactive waste material. This waste was assumed to contain 1 and 0.1 percent residual fission products, since it may not be feasible to completely separate the actinides. The actinides are a small fraction of the total waste, but they remain radioactive much longer than the other wastes and must be isolated from human encounter for tens of thousands of years. Results indicate that space disposal is promising, but more study is required, particularly in the area of safety. The minimum cost of space transportation would increase the consumer electric utility bill by on the order of 1 percent for earth escape and 3 percent for solar escape. The waste package in this phase of the study was designed for normal operating conditions only; the design in the next phase of the study will include provisions for accident safety. The number of shuttle launches per year required to dispose of all U.S.-generated actinide waste with 0.1 percent residual fission products varies between 3 and 15 in 1985 and between 25 and 110 by 2000. The lower values assume earth escape (solar orbit) and the higher values are for escape from the solar system.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
L Andreacci, Joseph; B Dixon, Curt; Ledezma, Christina; L Goss, Fredric
2006-01-01
The purpose of this investigation was to determine the effect of intermittent sub-maximal exercise on percent body fat (%BF) estimated by leg-to-leg bioelectrical impedance analysis (LBIA) in children. Fifty-nine children (29 girls; 30 boys), mean age 9.0 ± 1.3 years, participated in this study. LBIA-measured %BF values were obtained immediately before and within five minutes after completing an intermittent exercise protocol consisting of three 8-minute sub-maximal exercise bouts (2.74 km·hr(-1), 0% grade; 4.03 km·hr(-1), 0% grade; and 5.47 km·hr(-1), 0% grade), each separated by a 5-min seated rest period. The three exercise bouts corresponded to 56%, 61% and 71% of maximal heart rate. Significant differences (p < 0.001) were observed for fat mass, fat-free mass, total body water, and body weight post-exercise in both groups. Significant reductions (p < 0.001) in %BF were observed post-exercise in the female (23.1 ± 9.9 vs. 21.8 ± 9.9%) and male (23.3 ± 10.5 vs. 21.8 ± 10.2%) children when compared to pre-exercise values. However, for the majority of the subjects (females = 86%; males = 73%) the decrease in %BF post-exercise was less than 2.0 %BF. These data indicate that sub-maximal intermittent exercise, which may be representative of daily free-form activities in children, will most likely have a limited impact on %BF estimates when the assessment is performed immediately post-exercise. Key Points: LBIA measures of body weight, percent body fat, fat mass, fat-free mass and total body water were significantly lower after the intermittent sub-maximal exercise. The reductions in percent body fat for girls (1.4%) and boys (1.5%) compare favorably to previous investigations. Intermittent exercise, which may be representative of daily free-form activities in children, will most likely have a limited impact on LBIA percent body fat estimates.
Obesity Decreases Perioperative Tissue Oxygenation
Kabon, Barbara; Nagele, Angelika; Reddy, Dayakar; Eagon, Chris; Fleshman, James W.; Sessler, Daniel I.; Kurz, Andrea
2005-01-01
Background: Obesity is an important risk factor for surgical site infections. The incidence of surgical wound infections is directly related to tissue perfusion and oxygenation. Fat tissue mass expands without a concomitant increase in blood flow per cell, which might result in a relative hypoperfusion with decreased tissue oxygenation. Consequently, we tested the hypotheses that perioperative tissue oxygen tension is reduced in obese surgical patients. Furthermore, we compared the effect of supplemental oxygen administration on tissue oxygenation in obese and non-obese patients. Methods: Forty-six patients undergoing major abdominal surgery were assigned to one of two groups according to their body mass index (BMI): BMI < 30 kg/m2 (non-obese) and BMI ≥ 30 kg/m2 (obese). Intraoperative oxygen administration was adjusted to arterial oxygen tensions of ≈150 mmHg and ≈300 mmHg in random order. Anesthesia technique and perioperative fluid management were standardized. Subcutaneous tissue oxygen tension was measured with a polarographic electrode positioned within a subcutaneous tonometer in the lateral upper arm during surgery, in the recovery room, and on the first postoperative day. Postoperative tissue oxygen was also measured adjacent to the wound. Data were compared with unpaired two tailed t-tests and Wilcoxon rank-sum tests; P < 0.05 was considered statistically significant. Results: Intraoperative subcutaneous tissue oxygen tension was significantly less in the obese patients at baseline (36 vs. 57 mmHg, P = 0.002) and with supplemental oxygen administration (47 vs. 76 mmHg, P = 0.014). Immediate postoperative tissue oxygen tension was also significantly less in subcutaneous tissue of the upper arm (43 vs. 54 mmHg, P = 0.011) as well as near the incision (42 vs. 62 mmHg, P = 0.012) in obese patients. In contrast, tissue oxygen tension was comparable in each group on the first postoperative morning. Conclusion: Wound and tissue hypoxia were common in obese
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
2010-07-01
Air curtain incinerators that burn 100 percent yard waste (40 CFR 62.15380, Protection of Environment): (a) Use EPA Reference Method 9 in Appendix A of 40 CFR part 60 to determine compliance with the opacity limit. (b) Conduct an initial test for opacity as specified in § 60.8 of subpart A of 40 CFR part 60.
Muzheve, Michael T.; Capraro, Robert M.
2012-01-01
Using qualitative data collection and analyses techniques, we examined mathematical representations used by sixteen (N=16) teachers while teaching the concepts of converting among fractions, decimals, and percents. We also studied representational choices by their students (N=581). In addition to using geometric figures and manipulatives, teachers…
Rimbey, Kimberly
2007-01-01
Created by teachers for teachers, the Math Academy tools and activities included in this booklet were designed to create hands-on activities and a fun learning environment for the teaching of mathematics to the students. This booklet contains the "Math Academy--Dining Out! Explorations in Fractions, Decimals, and Percents," which teachers can use…
Asymptomatic endemic Chlamydia pecorum infections reduce growth rates in calves by up to 48 percent.
Anil Poudel
Full Text Available Intracellular Chlamydia (C.) bacteria cause in cattle some acute but rare diseases such as abortion, sporadic bovine encephalomyelitis, kerato-conjunctivitis, pneumonia, enteritis and polyarthritis. More frequent, essentially ubiquitous worldwide, are low-level, asymptomatic chlamydial infections in cattle. We investigated the impact of these naturally acquired infections in a cohort of 51 female Holstein and Jersey calves from birth to 15 weeks of age. In biweekly sampling, we measured blood/plasma markers of health and infection and analyzed their association with clinical appearance and growth in dependence of chlamydial infection intensity as determined by mucosal chlamydial burden or contemporaneous anti-chlamydial plasma IgM. Chlamydia 23S rRNA gene PCR and ompA genotyping identified only C. pecorum (strains 1710S, Maeda, and novel strain Smith3v8) in conjunctival and vaginal swabs. All calves acquired the infection but remained clinically asymptomatic. High chlamydial infection associated with reduction of body weight gains by up to 48% and increased conjunctival reddening (P < 10^-4). Simultaneously decreased plasma albumin and increased globulin (P < 10^-4) suggested liver injury by inflammatory mediators as the mechanism for the growth inhibition. This was confirmed by the reduction of plasma insulin-like growth factor-1 at high chlamydial infection intensity (P < 10^-4). High anti-C. pecorum IgM associated eight weeks later with 66% increased growth (P = 0.027), indicating a potential for immune protection from C. pecorum-mediated growth depression. The worldwide prevalence of chlamydiae in livestock and their high susceptibility to common feed-additive antibiotics suggest the possibility that suppression of chlamydial infections may be a major contributor to the growth-promoting effect of feed-additive antibiotics.
Severe geomagnetic storms and Forbush decreases: interplanetary relationships reexamined
R. P. Kane
2010-02-01
Full Text Available Severe storms (Dst) and Forbush decreases (FD) during cycle 23 showed that maximum negative Dst magnitudes usually occurred almost simultaneously with the maximum negative values of the Bz component of the interplanetary magnetic field B, but the maximum magnitudes of negative Dst and Bz were poorly correlated (+0.28). A parameter Bz(CP) (cumulative partial Bz) was calculated as the sum of the hourly negative values of Bz from the time of start to the maximum negative value. The correlation of the negative Dst maximum with Bz(CP) was higher (+0.59) than that of Dst with Bz alone (+0.28). When the product of Bz with the solar wind speed V (at the hour of the negative Bz maximum) was considered, the correlation of the negative Dst maximum with VBz was +0.59 and with VBz(CP), +0.71. Thus, including V improved the correlations. However, ground-based Dst values have a considerable contribution from magnetopause currents (several tens of nT, even exceeding 100 nT in very severe storms). When their contribution is subtracted from Dst (nT), the residue Dst*, representing the true ring current effect, is much better correlated with Bz and Bz(CP), but not with VBz or VBz(CP), indicating that these are unimportant parameters and that the effect of V is seen only through the solar wind ram pressure causing magnetopause currents. Maximum negative Dst (or Dst*) did not occur at the same hour as maximum FD. The time evolutions of Dst and FD were very different; the correlations were almost zero. Basically, negative Dst (or Dst*) and FDs are uncorrelated, indicating altogether different mechanisms.
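The cumulative partial Bz parameter described above lends itself to a one-line computation. A minimal sketch, with a hypothetical hourly Bz trace; the function name and values are illustrative, not from the paper:

```python
# Sketch of the cumulative partial Bz parameter, Bz(CP): the sum of hourly
# negative Bz values from the start of the interval up to the hour of the
# most negative Bz. Input values (nT) are hypothetical.

def bz_cp(hourly_bz):
    """Sum all negative hourly Bz values from the start of the series
    up to (and including) the hour of the most negative Bz."""
    peak_hour = hourly_bz.index(min(hourly_bz))
    return sum(b for b in hourly_bz[:peak_hour + 1] if b < 0)

# Example hourly Bz trace (nT) for a hypothetical storm interval:
bz = [2.0, -1.5, -4.0, -9.5, -6.0, -2.0, 1.0]
print(bz_cp(bz))  # -15.0  (-1.5 + -4.0 + -9.5)
```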
Shoveller, Anna K; DiGennaro, Joe; Lanman, Cynthia; Spangler, Dawn
2014-12-01
Body condition scoring (BCS) provides a readily available technique that can be used by both veterinary professionals and owners to assess the body condition of cats, and diagnose overweight or underweight conditions. The objective of this study was to evaluate a five-point BCS system with half-point delineations using dual-energy x-ray absorptiometry (DXA). Four evaluators (a veterinarian, veterinary technician, trained scorer and untrained scorer) assessed 133 neutered adult cats. For all scorers, BCS score was more strongly correlated with percent body fat than with body weight. Percent body fat increased by approximately 7% within each step increase in BCS. The veterinarian had the strongest correlation coefficient between BCS and percent fat (r = 0.80). Mean body fat in cats classified as being in ideal body condition was 12 and 19%, for 3.0 and 3.5 BCS, respectively. Within BCS category, male cats were significantly heavier in body weight than females within the same assigned BCS category. However, DXA-measured percent body fat did not differ significantly between male and female cats within BCS category, as assigned by the veterinarian (P >0.13). Conversely, when assessed by others, mean percent body fat within BCS category was lower in males than females for cats classified as being overweight (BCS >4.0). The results of this study show that using a BCS system that has been validated within a range of normal weight to moderately overweight cats can help to differentiate between lean cats and cats that may not be excessively overweight, but that still carry a higher proportion of body fat.
Hodder, Joanne N; Keir, Peter J
2013-10-01
Muscle-specific maximal voluntary isometric contractions (MVIC) are commonly used to elicit reference amplitudes to normalize electromyographic signals (EMG). It has been questioned whether this is appropriate for normalizing EMG from dynamic contractions. This study compares EMG amplitude when shoulder muscle activity from dynamic contractions is normalized to isometric and isokinetic maximal excitation as well as a hybrid approach currently used in our laboratory. Anterior, middle and posterior deltoid, upper and lower trapezius, pectoralis major, latissimus dorsi and infraspinatus were monitored during (1) manually resisted MVICs, and (2) maximum voluntary dynamic concentric contractions (MVDC) on an isokinetic dynamometer. Dynamic contractions were performed (a) at 30°/s about the longitudinal, frontal and sagittal axes of the shoulder, and (b) during manual bi-rotation of a tilted wheel at 120°/s. EMG from the wheel task was normalized to the maximum excitation (i) from the muscle-specific MVIC, (ii) from any MVIC (MVICALL), (iii) from any MVDC, and (iv) from any exertion (maximum experimental excitation, MEE). Mean EMG from the wheel task was up to 45% greater when normalized to muscle-specific isometric contractions (method i) than when normalized to MEE (method iv). Seventy-five percent of MEEs occurred during MVDCs. This study presents a useful and effective process for obtaining the greatest excitation from the shoulder muscles when normalizing dynamic efforts.
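Normalizing task EMG to a reference maximum, as compared across methods (i)-(iv) above, is simply the task amplitude expressed as a percent of the chosen reference. A minimal sketch with hypothetical amplitudes; none of the numbers are from the study:

```python
# EMG amplitude normalization: express the same task EMG as a percent of
# different reference maxima (e.g. muscle-specific MVIC vs. overall MEE).
# All amplitude values below are illustrative, not from the paper.

def normalize(task_emg_mv, reference_mv):
    """Express task EMG amplitude as a percentage of a reference maximum."""
    return 100.0 * task_emg_mv / reference_mv

task = 0.45           # mean task EMG, mV (hypothetical)
mvic_specific = 0.60  # muscle-specific isometric maximum, mV (hypothetical)
mee = 0.90            # maximum experimental excitation, mV (hypothetical)

print(normalize(task, mvic_specific))  # 75.0 (%MVIC)
print(normalize(task, mee))            # 50.0 (%MEE)
```

The same task amplitude reads 50% higher against the smaller isometric reference, which is the kind of inflation the study quantifies.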
Modeling Mediterranean ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2010-10-01
Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the last glacial maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions nontrivial. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of the salinity in the Mediterranean in spite of reduced net evaporation.
Modeling Mediterranean Ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2011-03-01
Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.
Hales, K E; Foote, A P; Brown-Brandl, T M; Freetly, H C
2015-01-01
Expansion of the biodiesel industry has increased the glycerin (GLY) supply. Glycerin is an energy-dense feed that can be used in ruminant species; however, the energy value of GLY is not known. Therefore, the effects of GLY inclusion at 0, 5, 10, and 15% on energy balance in finishing cattle diets were evaluated in 8 steers (BW = 503 kg) using a replicated Latin square design. Data were analyzed with the fixed effects of dietary treatment and period, and the random effects of square and steer within square were included in the model. Contrast statements were used to separate linear and quadratic effects of GLY inclusion. Glycerin replaced dry-rolled corn (DRC) at 0, 5, 10, and 15% of dietary DM. Dry matter intake decreased linearly (P = 0.02) as GLY increased in the diet. As a proportion of GE intake, fecal energy loss tended to decrease linearly (P 0.31) as a proportion of GE as GLY increased in the diet. Methane energy loss as a proportion of GE intake tended to respond quadratically (P = 0.10), decreasing from 0 to 10% GLY inclusion and increasing thereafter. As a proportion of GE intake, ME tended to respond quadratically (P = 0.10), increasing from 0 to 10% GLY and then decreasing. As a proportion of GE intake, heat production increased linearly (P = 0.02) as GLY increased in the diet. Additionally, as a proportion of GE intake, retained energy (RE) tended to respond quadratically (P = 0.07), increasing from 0 to 10% GLY inclusion and decreasing thereafter. As a proportion of N intake, urinary and fecal N excretion increased linearly (P < 0.04) as GLY increased in the diet. Furthermore, grams of N retained and N retained as a percent of N intake both decreased linearly (P < 0.02) as GLY increased in the diet. Total DM digestibility tended (P < 0.10) to respond quadratically, increasing at a decreasing rate from 0 to 5% GLY inclusion. Overall, RE tended to decrease as GLY increased in the diet in conjunction with a decrease in N retention, which could indicate
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Decreasing nonmarital births and strengthening marriage to reduce poverty.
Amato, Paul R; Maynard, Rebecca A
2007-01-01
Since the 1970s, the share of U.S. children growing up in single-parent families has doubled, a trend that has disproportionately affected disadvantaged families. Paul Amato and Rebecca Maynard argue that reversing that trend would reduce poverty in the short-term and, perhaps more important, improve children's growth and development over the long term, thus reducing the likelihood that they would be poor when they grew up. The authors propose school and community programs to help prevent nonmarital births. They also propose to lower divorce rates by offering more educational programs to couples before and during marriage. Amato and Maynard recommend that all school systems offer health and sex education whose primary message is that parenthood is highly problematic for unmarried youth. They also recommend educating young people about methods to prevent unintended pregnancies. Ideally, the federal government would provide tested curriculum models that emphasize both abstinence and use of contraception. All youth should understand that unintended pregnancies are preventable and have enormous costs for the mother, the father, the child, and society. Strengthening marriage, argue the authors, is also potentially an effective strategy for fighting poverty. Researchers consistently find that premarital education improves marital quality and lowers the risk of divorce. About 40 percent of couples about to marry now participate in premarital education. Amato and Maynard recommend doubling that figure to 80 percent and making similar programs available for married couples. Increasing the number of couples receiving services could mean roughly 72,000 fewer divorces each year, or around 65,000 fewer children entering a single-parent family every year because of marital dissolution. After seven or eight years, half a million fewer children would have entered single-parent families through divorce. Efforts to decrease the share of children in single-parent households, say the
Govatski, J. A.; da Luz, M. G. E.; Koehler, M.
2015-01-01
We study the geminated pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy that reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.
Modeling the East Asian climate during the last glacial maximum
ZHAO; Ping(赵平); CHEN; Longxun(陈隆勋); ZHOU; Xiuji(周秀骥); GONG; Yuanfa(巩远发); HAN; Yu(韩余)
2003-01-01
Using the CCM3 global climate model of the National Center for Atmospheric Research (NCAR), this paper comparatively analyzes the characteristics of the East Asian monsoon, surface water conditions, and the expansion of glaciers on the Qinghai-Xizang (Tibetan) Plateau (QXP) between the present and the last glacial maximum (LGM). It is found that the winter monsoon is remarkably stronger during the LGM than at present in the north part of China and the western Pacific but varies little in the south part of China. The summer monsoon remarkably weakens in the South China Sea and the south part of China during the LGM and has no remarkable changes in the north part of China between the present and the LGM. Due to the alterations of the monsoons during the LGM, the annual mean precipitation significantly decreases in the northeast of China, most of north China, the Loess Plateau and the eastern QXP, which makes the earth surface lose more water and become dry, especially in the eastern QXP and the western Loess Plateau. In some areas of the middle QXP the decrease of evaporation at the earth surface causes soil to become wetter during the LGM than at present, which favors the water level of local lakes rising during the LGM. Additionally, compared to the present, the depth of snow cover increases remarkably on most of the QXP during the LGM winter. The analysis of the equilibrium line altitude (ELA) of glaciers on the QXP, calculated on the basis of the simulated temperature and precipitation, shows that although a lesser decrease of air temperature was simulated during the LGM in this paper, the balance between precipitation and air temperature associated with the atmospheric physical processes in the model makes the ELA 300-900 m lower during the LGM than at present, namely going down from the present ELA above 5400 m to 4600-5200 m during the LGM, indicating a unified ice sheet on the QXP during the LGM.
Schmidt, Matthias; Dietlein, Markus; Kobe, Carsten; Eschner, Wolfgang; Schicha, Harald [University of Cologne, Department of Nuclear Medicine, Cologne (Germany); Bollschweiler, Elfriede; Moenig, Stefan P.; Vallboehmer, Daniel; Hoelscher, Arnulf [University of Cologne, Department of General-, Visceral and Cancer Surgery, Cologne (Germany)
2009-05-15
To evaluate the potential of [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) for the assessment of histopathological response and survival after neoadjuvant radiochemotherapy in patients with oesophageal cancer. In 2005 and 2006, 55 patients (43 men, 12 women; median age 60 years) with locally advanced oesophageal cancer (cT3-4 Nx M0; 24 with squamous cell carcinoma, 31 with adenocarcinoma) underwent transthoracic en bloc oesophagectomy after completion of treatment with cisplatin, 5-fluorouracil, and radiotherapy to 36 Gy in a prospective clinical trial. Of the 55 patients, 21 (38%) were classified as histopathological responders (<10% vital residual tumour cells) and 34 (62%) as nonresponders. FDG-PET was performed before (PET 1) and 3-4 weeks after the end (PET 2) of radiochemotherapy with assessment of maximum and average standardized uptake values (SUV) for correlation with histopathological response and survival. Histopathological responders had a slightly higher baseline SUV than nonresponders (p < 0.0001 between PET 1 and PET 2 for responders and nonresponders) and the decrease was more prominent in responders. Except for SUVmax in patients with squamous cell carcinoma, neither baseline nor preoperative SUV nor percent SUV reduction correlated significantly with histopathological response. Histopathological responders had a 2-year overall survival of 91 ± 9% and nonresponders a survival of 53 ± 10% (p = 0.007). Our study does not support recent reports that FDG-PET predicts histopathological response and survival in patients with locally advanced oesophageal cancer treated by neoadjuvant radiochemotherapy. (orig.)
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two cases; that is, the subset D of the k-limited maximum base problem was taken to be an independent set and a circuit of the matroid, respectively. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the k-limited maximum base problem was transformed into the problem of finding a maximum base of this new matroid. Aiming at this problem, two algorithms, which are in essence greedy algorithms based on the former matroid, were presented for the two special cases of the k-limited maximum base problem. They were proved to be reasonable and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
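The abstract invokes greedy algorithms over a matroid. As a sketch of that general scheme (not the paper's specific algorithms), here is the textbook greedy procedure for a maximum-weight base, shown with a uniform-matroid independence oracle; the element names and weights are illustrative:

```python
# Generic matroid greedy algorithm for a maximum-weight base: scan elements
# in decreasing weight, keeping each one that preserves independence.
# The oracle below is for a uniform matroid of rank k (independent iff
# |set| <= k); any other independence oracle can be substituted.

def greedy_max_base(elements, weight, independent):
    """Return a maximum-weight base under the given independence oracle."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(base + [e]):
            base.append(e)
    return base

k = 3
weights = {'a': 5, 'b': 9, 'c': 1, 'd': 7, 'e': 4}
base = greedy_max_base(weights, weights.get, lambda s: len(s) <= k)
print(sorted(base))  # ['a', 'b', 'd']  (the three heaviest elements)
```

The matroid exchange property is what guarantees this myopic scan is globally optimal, which is why reducing the k-limited problem to a base problem on a new matroid immediately yields a greedy algorithm.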
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
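The maximum entropy function mentioned above is commonly taken to be the log-sum-exp smoothing of the max; a standard form (the paper's exact formulation may differ) for constraints $g_i(x) \le 0$ and smoothing parameter $p > 0$ is:

```latex
% Log-sum-exp (maximum entropy) smoothing of the max function, with the
% usual uniform approximation bound; the paper's exact form may differ.
F_p(x) = \frac{1}{p}\,\ln\sum_{i=1}^{m} \exp\bigl(p\,g_i(x)\bigr),
\qquad
\max_{1\le i\le m} g_i(x) \;\le\; F_p(x) \;\le\; \max_{1\le i\le m} g_i(x) + \frac{\ln m}{p}.
```

Since $F_p$ is smooth and converges uniformly to $\max_i g_i$ as $p \to \infty$, adding a penalty built from $F_p$ turns the constrained quadratic program into an unconstrained differentiable problem, which is the transformation the abstract describes.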
Sievers, M.L.; Fisher, J.R.
1982-10-01
Among Piman and San Carlos Apache Indians, the rates of disseminated disease and death from coccidioidomycosis declined 67 percent (p less than 0.001) and 71 percent (p less than 0.01), respectively, between the first and second half of a 22-year observation period (1959-1980), despite a lack of significant change in the rate of primary infection (as determined by the coccidioidin skin test) between the two 11-year periods. The two tribal groups studied comprised 76 percent of the Indian population in the endemic area. More than 90 percent of Pimans and San Carlos Apaches have full Indian heritage, and almost all of them have been lifelong inhabitants of the endemic region. There is no evidence that genetic factors are responsible for the Indians' decrease in morbidity and mortality from disseminated coccidioidomycosis. Improvements in housing and working conditions appear to have lessened the exposure to dust laden with C. immitis, decreased the size of the infecting inoculum, and, thereby, contributed to a decline in disseminated coccidioidomycosis among these native Americans, who have often been considered to have increased susceptibility to this fungal infection. Thus, the outcome of coccidioidal infection in American Indians seems to be largely determined by environmental influences. The possibility of decreasing disseminated disease rates by reducing the inhalation of arthroconidia also has important implications for other ethnic groups.
Troubleshooting at Reverse Osmosis performance decrease
Soons, Jan [KEMA (Netherlands)
2011-07-01
There are several causes for a decrease in Reverse Osmosis (RO) membrane performance, each requiring different actions to tackle the possible cause. Two of the main factors affecting the performance of the system are the feed quality (poor feed quality can lead to fouling of the membranes) and the operational conditions (including the maximum allowed pressure, minimum cleaning frequencies and types, recovery rate, etc., which should be according to the design conditions). If necessary, pre-treatment will be applied in order to remove fouling agents from the influent, reduce scaling (through the addition of anti-scalants) and protect the membranes (for example, sodium metabisulphite addition for the removal of residual chlorine, which can harm the membranes). Fouling is not strictly limited to the use of surface water as feed water; relatively clean water sources will also, over time, lead to organic and inorganic fouling when cleaning is not optimal. When fouling occurs, the TransMembrane Pressure (TMP) increases and more energy is needed to produce the same amount of product water. Also, the cleaning rate will increase, reducing the production rate and increasing the chemical consumption and the produced waste streams. Furthermore, the quality of the effluent will decrease (lower rejection rates at higher pressures) and the lifetime of the membranes will decrease. Depending on the type of fouling, different cleaning regimes will have to be applied: acidic treatment for inorganic fouling, the addition of bases against organic fouling. Therefore, it is very important to have a clear view of the type of fouling that is occurring, in order to apply the correct treatment methods. Another important aspect to be kept in mind is that the chemistry of the water - in the first place ruled by the feed water composition - can change during passage of the modules, in particular in cases where the RO system consists of two or more RO trains, and where the
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier results, were designed in this paper. A programming model for the maximum independent set then follows as a corollary of the main results. These two models can be easily applied in computer algorithms and software, and are suitable for graphs of any scale. Finally the models are presented as Lingo algorithms, verified and compared by several examples.
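A standard 0-1 programming formulation of maximum clique (not necessarily one of the paper's two models) maximizes the number of selected vertices subject to x_u + x_v ≤ 1 for every non-edge {u, v}. A brute-force check of this model on a small hypothetical graph:

```python
# 0-1 model for maximum clique: maximize sum(x_v) subject to
# x_u + x_v <= 1 for every NON-edge {u, v}, x_v in {0, 1}.
# Solved here by brute-force enumeration on an illustrative 5-node graph.

from itertools import combinations, product

nodes = ['a', 'b', 'c', 'd', 'e']
edges = {frozenset(p) for p in [('a', 'b'), ('a', 'c'), ('b', 'c'),
                                ('c', 'd'), ('d', 'e')]}
non_edges = [frozenset(p) for p in combinations(nodes, 2)
             if frozenset(p) not in edges]

best = 0
for bits in product([0, 1], repeat=len(nodes)):
    x = dict(zip(nodes, bits))
    # Feasible iff no non-edge has both endpoints selected,
    # i.e. the selected vertices form a clique.
    if all(sum(x[v] for v in ne) <= 1 for ne in non_edges):
        best = max(best, sum(bits))
print(best)  # 3  (the triangle a-b-c)
```

The complementary model for maximum independent set swaps the roles of edges and non-edges, which is why the abstract obtains it as a corollary.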
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we surveyed the development of the maximum-entropy clustering algorithm, pointed out that the maximum-entropy clustering algorithm is not new in essence, and constructed two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function, but to a saddle point. Based on these results, our paper shows that the convergence theorem of the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
Ho-Pham, Lan T; Lai, Thai Q.; Nguyen, Mai T. T.; Nguyen, Tuan V
2015-01-01
Background The burden of obesity in Vietnam has not been well defined because there is a lack of reference data for percent body fat (PBF) in Asians. This study sought to define the relationship between PBF and body mass index (BMI) in the Vietnamese population. Methods The study was designed as a comparative cross-sectional investigation that involved 1217 individuals of Vietnamese background (862 women) aged 20 years and older (average age 47 yr) who were randomly selected from the general ...
J.P. Gastellu-Etchegorry
2013-01-01
The Indonesian spatiotemporal cloud cover distribution was quantified with the aid of GMS, Landsat and SPOT data. Iterative interactive factorial analyses grouped pixels with similar profiles into 18 classes for all land areas. For each class, statistics of Landsat and SPOT images, grouped by class, were used to verify, calibrate and improve class profiles. This led to quantified temporal profiles of the probability of acquiring remotely sensed data with 10, 20 and 30 percent cloud cover, for an...
Brownmiller, C; Howard, L R; Prior, R L
2008-06-01
This study evaluated the effects of processing and 6 mo of storage on total monomeric anthocyanins, percent polymeric color, and antioxidant capacity of blueberries that were canned in syrup (CS), canned in water (CW), pureed, and juiced (clarified and nonclarified). Total monomeric anthocyanins, percent polymeric color, and oxygen radical absorbing capacity (ORAC) assay using fluorescein (ORAC(FL)) were determined postprocessing after 1 d, and 1, 3, and 6 mo of storage. Thermal processing resulted in marked losses in total anthocyanins (28% to 59%) and ORAC(FL) values (43% to 71%) in all products, with the greatest losses occurring in clarified juices and the least in nonclarified juices. Storage at 25 degrees C for 6 mo resulted in dramatic losses in total anthocyanins, ranging from 62% in berries CW to 85% in clarified juices. This coincided with marked increases in percent polymeric color values of these products over the 6-mo storage. The ORAC(FL) values showed little change during storage, indicating that the formation of polymers compensated for the loss of antioxidant capacity due to anthocyanin degradation. Methods are needed to retain anthocyanins in thermally processed blueberries.
Hager, A; Howard, L R; Prior, R L; Brownmiller, C
2008-08-01
This study evaluated the effects of processing and 6 mo of storage on total monomeric anthocyanins, percent polymeric color, and antioxidant capacity of black raspberries that were individually quick-frozen (IQF), canned-in-syrup, canned-in-water, pureed, and juiced (clarified and nonclarified). Total monomeric anthocyanins, percent polymeric color, and ORAC(FL) were determined 1 d postprocessing and after 1, 3, and 6 mo of storage. Thermal processing resulted in marked losses in total anthocyanins ranging from 37% in puree to 69% to 73% in nonclarified and clarified juices, respectively, but only the juices showed substantial losses (38% to 41%) in ORAC(FL). Storage at 25 degrees C of all thermally processed products resulted in dramatic losses in total anthocyanins ranging from 49% in canned-in-syrup to 75% in clarified juices. This coincided with marked increases in percent polymeric color values of these products over the 6-mo storage. ORAC(FL) values showed little change during storage, indicating that the formation of polymers compensated for the loss of antioxidant capacity due to anthocyanin degradation. Total anthocyanins and ORACFL of IQF berries were well retained during long-term storage at -20 degrees C.
Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.
Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng
2017-04-17
Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition, which hampers the physician's interpretation of the images and decreases the accuracy of clinical diagnosis. Denoising is therefore an important component for enhancing the quality of ultrasound images, but current methods face limitations: they either remove noise while ignoring the statistical characteristics of speckle, undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edge pixels before filtering, and thus blur edge details while reducing noise and speckle. Effectively removing noise and speckle in ultrasound images while preserving edge details is therefore a challenging issue for traditional methods. To overcome these limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: detection of noise, speckle and edges, followed by filtering. Firstly, a sorted quadrant median vector scheme is utilized to calculate a reference median in the filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, noise is removed by a bilateral filter and speckle is suppressed by a Rayleigh-maximum-likelihood filter, while noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution. Thereafter, the corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal to noise ratio (SNR) levels and
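The bilateral branch of the filter described above can be sketched in a few lines. This is a generic bilateral filter only (the sorted-quadrant classification step and the Rayleigh-maximum-likelihood branch of the RSBF are not reproduced), with all parameter values chosen purely for illustration:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Plain bilateral filter: weights combine spatial closeness and
    intensity similarity, so edges are preserved while noise is smoothed."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))  # fixed spatial kernel
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # intensity-similarity (range) kernel, recomputed per pixel
            rangew = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rangew
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# step edge plus additive noise: the edge survives, the noise shrinks
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((16, 8)), np.ones((16, 8))]) + rng.normal(0, 0.05, (16, 16))
sm = bilateral_filter(img)
print(sm[:, :4].std() < img[:, :4].std())
```

The range kernel is what distinguishes this from a plain Gaussian blur: pixels on the other side of an intensity step receive near-zero weight, which is why the step edge is retained.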
Medhat Abd El Barr
2016-01-01
Objective: To evaluate the exploitation status of the stocks of demersal fishes in Omani artisanal fisheries. Methods: Time-series data between 2005 and 2014 on catches and effort, represented by the number of fishing boats, were used to estimate catch per unit effort and maximum sustainable yields by applying the Schaefer surplus production model. Regression analyses were made online using GraphPad software. Results: The study revealed that increasing the number of boats in the fishery caused a decrease in catch per unit effort for some species. Maximum sustainable yields and exploitation status were estimated for these species by applying this model. Conclusions: Some demersal fish species were found to be caught in quantities exceeding maximum sustainable yields during some fishing seasons, indicating overexploitation of their stocks.
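Under the Schaefer surplus production model used above, equilibrium CPUE declines linearly with effort, CPUE = a + bE (b < 0), so equilibrium catch is aE + bE² and MSY = −a²/(4b) at effort E_MSY = −a/(2b). A minimal sketch with made-up effort/catch figures (the study's actual 2005-2014 Omani data are not reproduced here):

```python
import numpy as np

# Hypothetical effort (boats) and equilibrium catch (tonnes); generated
# from CPUE = 10 - 0.01 * E so the fit below recovers known parameters.
effort = np.array([100., 150., 200., 250., 300., 350., 400., 450., 500.])
catch_t = (10.0 - 0.01 * effort) * effort

cpue = catch_t / effort
b, a = np.polyfit(effort, cpue, 1)   # CPUE = a + b*E, with b < 0
e_msy = -a / (2 * b)                 # effort giving maximum sustainable yield
msy = -a**2 / (4 * b)                # the MSY itself
print(int(round(e_msy)), int(round(msy)))   # → 500 2500
```

Catches observed above the fitted MSY in a given season are what the abstract interprets as a sign of overexploitation.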
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) that are stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity and able to retrieve the desired pattern without distortions. PMID:28119595
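The classical Hopfield setup the abstract starts from (Hebbian weights, zero diagonal) can be sketched as follows; the pattern count and network size are arbitrary illustrative choices, and the `keep_diagonal` flag mimics the paper's generalization of allowing nonzero diagonal elements:

```python
import numpy as np

def hebbian_weights(patterns, keep_diagonal=False):
    """Hebbian connection matrix W = (1/N) * sum of outer products.

    keep_diagonal=False is the classical Hopfield prescription; True
    mimics the generalization studied in the paper (nonzero diagonal).
    """
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    if not keep_diagonal:
        np.fill_diagonal(W, 0.0)
    return W

def retrieval_errors(W, pattern):
    """Bits that flip after one synchronous update from the stored pattern."""
    recalled = np.sign(W @ pattern)
    recalled[recalled == 0] = 1
    return int(np.sum(recalled != pattern))

rng = np.random.default_rng(0)
N, P = 200, 10          # P/N = 0.05, well below the ~0.14 classical capacity
patterns = rng.choice([-1, 1], size=(P, N))
W = hebbian_weights(patterns)
errors = [retrieval_errors(W, p) for p in patterns]
print(max(errors))
```

Below the ≈ 0.14·N threshold the stored patterns are near-fixed-points, so the error count stays at or near zero; the paper's regime of interest, P ≫ N with nonzero diagonal, behaves very differently.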
Impact of soil moisture on extreme maximum temperatures in Europe
Kirien Whan
2015-09-01
Full Text Available Land-atmosphere interactions play an important role for hot temperature extremes in Europe. Dry soils may amplify such extremes through feedbacks with evapotranspiration. While previous observational studies generally focused on the relationship between precipitation deficits and the number of hot days, we investigate here the influence of soil moisture (SM on summer monthly maximum temperatures (TXx using water balance model-based SM estimates (driven with observations and temperature observations. Generalized extreme value distributions are fitted to TXx using SM as a covariate. We identify a negative relationship between SM and TXx, whereby a 100 mm decrease in model-based SM is associated with a 1.6 °C increase in TXx in Southern-Central and Southeastern Europe. Dry SM conditions result in a 2–4 °C increase in the 20-year return value of TXx compared to wet conditions in these two regions. In contrast with SM impacts on the number of hot days (NHD, where low and high surface-moisture conditions lead to different variability, we find a mostly linear dependency of the 20-year return value on surface-moisture conditions. We attribute this difference to the non-linear relationship between TXx and NHD that stems from the threshold-based calculation of NHD. Furthermore the employed SM data and the Standardized Precipitation Index (SPI are only weakly correlated in the investigated regions, highlighting the importance of evapotranspiration and runoff for resulting SM. Finally, in a case study for the hot 2003 summer we illustrate that if 2003 spring conditions in Southern-Central Europe had been as dry as in the more recent 2011 event, temperature extremes in summer would have been higher by about 1 °C, further enhancing the already extreme conditions which prevailed in that year.
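The covariate approach described above (a GEV fit in which the location parameter depends linearly on soil moisture) can be sketched by direct likelihood maximization. The data here are synthetic, generated so that a 100 mm SM decrease raises the location parameter by 1.6 °C, mirroring the reported effect size; all other values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(1)
sm = rng.uniform(50, 250, size=300)                 # soil moisture [mm]
# true relation: 100 mm less SM -> +1.6 degC in the GEV location parameter
txx = genextreme.rvs(c=0.1, loc=30 - 0.016 * sm, scale=1.0,
                     size=300, random_state=rng)

def nll(theta):
    """Negative log-likelihood of a GEV with SM-dependent location."""
    mu0, mu1, log_sigma, c = theta
    return -np.sum(genextreme.logpdf(txx, c=c,
                                     loc=mu0 + mu1 * sm,
                                     scale=np.exp(log_sigma)))

fit = minimize(nll, x0=[30.0, 0.0, 0.0, 0.1], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
mu0, mu1, log_sigma, c = fit.x
print(round(100 * -mu1, 2))   # estimated warming per 100 mm SM decrease
```

Parameterizing the scale as exp(log_sigma) keeps it positive without constrained optimization; return levels (e.g. the 20-year return value) would then be evaluated from the fitted parameters at a chosen SM value.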
Estimate of the maximum induced magnetic field in relativistic shocks
Ghorbanalilu, M.; Sadegzadeh, S.
2017-01-01
The proton-driven Weibel instability is a crucial process for amplifying the generated magnetic fields in gamma-ray bursts. An expression for the saturation level of magnetic fields is estimated in a relativistic shock consisting of electron-proton plasmas. Within the shock transition layer, the plasma is modelled with the waterbag and Maxwell-Jüttner distribution functions for asymmetric counter-propagating proton beams and isotropic background electrons, respectively. The proton-driven Weibel-type instability in the linear phase is investigated thoroughly, and the instability conditions and stabilization mechanisms are considered in detail just after the shutdown of the electron Weibel instability. The growth rate of the instability and the saturated magnetic field strength are obtained in terms of the effective proton beam Mach number, the asymmetry parameter, and the background electron temperature. In this paper, a fully relativistic kinetic treatment is used to formulate the dispersion relation for the proton Weibel-type instability. Then, by using the magnetic trapping criterion, the saturated magnetic field strength is computed. In the present scenario, the instability includes two stages: in the first stage the electron Weibel instability evolves very rapidly, but in the second, because of the free energy stored in the slow counter-propagating proton beams, the instability is further amplified in the context of electrons with an isotropic distribution function. The results show that the growth rate and saturated magnetic field increase with increasing effective proton beam Mach number and with decreasing asymmetry parameter. It is shown that at temperatures around 10^8 K a maximum magnetic field of up to around 56 G can be produced by this mechanism after the saturation time.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Full Text Available Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
Anonymous
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
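For intuition, the maximum loss (the maximum drawdown) is easy to estimate by Monte Carlo in the boundary case H = 1/2, where fractional Brownian motion reduces to standard Brownian motion; for H > 1/2 one would need a dedicated fBm generator (e.g. Cholesky or circulant embedding), which is omitted in this sketch:

```python
import numpy as np

def max_loss(path):
    """Maximum loss sup_{s <= u} (X_s - X_u): the largest drop from a
    running maximum, i.e. the maximum drawdown of the path."""
    return np.max(np.maximum.accumulate(path) - path)

# Monte Carlo for H = 1/2 (standard Brownian motion), the boundary
# case of the H >= 1/2 regime treated in the paper.
rng = np.random.default_rng(0)
n_paths, n_steps, t = 5_000, 500, 1.0
dW = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
paths = np.cumsum(dW, axis=1)
losses = np.array([max_loss(p) for p in paths])
print(round(float(np.mean(losses > 2.0)), 3))   # empirical tail P(M > 2)
```

The empirical tail frequencies from such a simulation are what the paper's analytical bounds and asymptotic tail estimates can be compared against.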
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Decreasing clouds drive mass loss on the Greenland Ice Sheet
Hofer, Stefan; Bamber, Jonathan; Tedstone, Andrew; Fettweis, Xavier
2017-04-01
The Greenland ice sheet (GrIS) has been losing mass at an accelerating rate since the mid-1990s. This has been due to both increased ice discharge into the ocean and melting at the surface, with the latter being the dominant contribution. This change in state has been attributed to rising temperatures and a decrease in surface albedo. Here we show, using satellite data and climate model output, that the abrupt reduction in surface mass balance since about 1995 can be largely attributed to a coincident trend of decreasing summer cloud cover. Satellite observations show that, from 1995 to 2009, summer cloud cover decreased by 0.9 ± 0.28% per year. Model output indicates that the GrIS surface mass balance has a sensitivity of -5.4 ± 2 Gt per percent reduction in summer cloud cover, due principally to the impact of increased shortwave radiation over the low albedo ablation zone. The observed reduction in cloud cover is strongly correlated with a state shift of the North Atlantic Oscillation, suggesting that the enhanced surface mass loss from the GrIS is driven by synoptic-scale changes in Arctic-wide atmospheric circulation.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
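The discrete-time scheme above can be illustrated with the Nagumo nonlinearity f(u) = u(1 − u)(u − a): with a small enough time step, the computed solution respects the weak maximum principle and stays inside the invariant interval [0, 1] spanned by the two stable states. All parameter values here are illustrative:

```python
import numpy as np

# Explicit time-stepping of the lattice Nagumo equation
#   Delta_t u_x = k (u_{x-1} - 2 u_x + u_{x+1}) + f(u_x),
# with the bistable nonlinearity f(u) = u (1 - u) (u - a).
a, k, dt = 0.3, 0.25, 0.1

def f(u):
    return u * (1 - u) * (u - a)

u = np.zeros(101)
u[:50] = 1.0          # a front between the two stable states 1 and 0

for _ in range(500):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    lap[0] = u[1] - u[0]            # crude Neumann boundary conditions
    lap[-1] = u[-2] - u[-1]
    u = u + dt * (k * lap + f(u))

# weak maximum principle: values remain inside the interval [0, 1]
print(bool(u.min() >= -1e-12 and u.max() <= 1 + 1e-12))   # prints True
```

For this dt the update is a monotone map of [0, 1] into itself (1 − 2k·dt > 0 and dt·|f′| < 1 on [0, 1]), which is exactly the kind of time-step restriction under which the paper's discrete maximum principle holds; a much larger dt would break it.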
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to, and slightly above, the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or borehole.
5 CFR 582.402 - Maximum garnishment limitations.
2010-01-01
... Section 582.402 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS... disposable earnings subject to garnishment to enforce any legal debt other than an order for child support or...), shall not exceed 25 percent of the employee-obligor's aggregate disposable earnings for any workweek....
U.S. Environmental Protection Agency — High amounts of impervious cover (parking lots, rooftops, roads, etc.) can increase water runoff, which may directly enter surface water. Runoff from roads often...
U.S. Environmental Protection Agency — Forests provide economic and ecological value. High percentages of forest cover (FORPCT) generally indicate healthier ecosystems and cleaner surface water. More...
Panek, Richard
2012-01-01
It is one of the most disturbing aspects of our universe: only four per cent of it consists of the matter that makes up every star, every planet, and every book. The rest is completely unknown. Acclaimed science writer Richard Panek tells the story of the handful of scientists who have spent the past few decades on a quest to unlock the secrets of "dark matter" and the even stranger substance called "dark energy". These are perhaps the greatest mysteries in science, and solving them will reshape our understanding of the universe and our place in it. The stakes could not be higher. Panek's fast-paced
U.S. Environmental Protection Agency — Forests provide economic and ecological value. High percentages of forest cover (FORPCTFuture) generally indicate healthier ecosystems and cleaner surface water....
Maximum flux density of the gyrosynchrotron spectrum in a nonuniform source
Ai-Hua Zhou; Rong-Chuan Wang; Cheng-Wen Shao
2009-01-01
The maximum flux density of a gyrosynchrotron radiation spectrum in a magnetic dipole model with self-absorption and gyroresonance is calculated. Our calculations show that the maximum flux density of the gyrosynchrotron spectrum increases with increasing low-energy cutoff, number density, input depth of energetic electrons, magnetic field strength and viewing angle, and with decreasing energy spectral index of energetic electrons, number density and temperature of thermal electrons. It is found that there are linear correlations between the logarithms of the maximum flux density and the above eight parameters, with correlation coefficients higher than 0.91 and fit accuracies better than 10%. The maximum flux density could be a good indicator of changes in these source parameters. In addition, we find that there are very good positive linear correlations between the logarithms of the maximum flux density and peak frequency when the former five parameters vary respectively. Their linear correlation coefficients are higher than 0.90 and the fit accuracies are better than 0.5%.
Spectral analysis of the Forbush decrease of 13 July 1982
Vainikka, E.; Torsti, J. J.; Valtonen, E.; Lumme, M.; Nieminen, M.; Peltonen, J.; Arvela, H.
1985-01-01
The maximum entropy method has been applied in the spectral analysis of high-energy cosmic-ray intensity during the large Forbush event of July 13, 1982. An oscillation with period of about 2 hours and amplitude of 1 to 3% was found to be present during the decrease phase. This oscillation can be related to a similar periodicity in the magnetospheric field. However, the variation was not observed at all neutron monitor stations. In the beginning of the recovery phase, the intensity oscillated with a period of about 10 hours and amplitude of 3%.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Freedman, David S; Ogden, Cynthia L; Kit, Brian K
2015-11-18
Although the estimation of body fatness by the Slaughter skinfold thickness equations (PBF(Slaughter)) has been widely used, the accuracy of this method is uncertain. We have previously examined the interrelationships among the body mass index (BMI), PBF(Slaughter), percent body fat from dual energy X-ray absorptiometry (PBF(DXA)) and CVD risk factor levels among children who were examined in the Bogalusa Heart Study and in the Pediatric Rosetta Body Composition Project. The current analyses examine these associations among 7599 8- to 19-year-olds who participated in the (U.S.) National Health and Nutrition Examination Survey from 1999 to 2004. We analyzed (1) the agreement between estimates of percent body fat calculated from the Slaughter skinfold thickness equations and from DXA, and (2) the relation of lipid, lipoprotein, and blood pressure levels to BMI, PBF(Slaughter) and PBF(DXA). PBF(Slaughter) was highly correlated (r ~ 0.85) with PBF(DXA). However, among children with a relatively low skinfold thickness sum (triceps + subscapular), PBF(Slaughter) underestimated PBF(DXA) by 8 to 9 percentage points. In contrast, PBF(Slaughter) overestimated PBF(DXA) by 10 points among boys with a skinfold thickness sum ≥ 50 mm. After adjustment for sex and age, lipid levels were related similarly to the body mass index, PBF(DXA) and PBF(Slaughter). There were, however, small differences in associations with blood pressure levels: systolic blood pressure was more strongly associated with body mass index, but diastolic blood pressure was more strongly associated with percent body fat. The Slaughter equations yield biased estimates of body fatness. In general, lipid and blood pressure levels are related similarly to levels of BMI (following adjustment for sex and age), PBF(Slaughter), and PBF(DXA).
Can Diuretics Decrease Your Potassium Level?
Answer from Sheldon G. Sheps, M.D. Yes, some diuretics (also called water pills) decrease potassium in the blood. Diuretics are commonly used ...
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
Eggleston, Jack
2009-01-01
Due to elevated levels of methylmercury in fish, three streams in the Shenandoah Valley of Virginia have been placed on the State's 303d list of contaminated waters. These streams, the South River, the South Fork Shenandoah River, and parts of the Shenandoah River, are downstream from the city of Waynesboro, where mercury waste was discharged from 1929-1950 at an industrial site. To evaluate mercury contamination in fish, this total maximum daily load (TMDL) study was performed in a cooperative effort between the U.S. Geological Survey, the Virginia Department of Environmental Quality, and the U.S. Environmental Protection Agency. The investigation focused on the South River watershed, a headwater of the South Fork Shenandoah River, and extrapolated findings to the other affected downstream rivers. A numerical model of the watershed, based on Hydrological Simulation Program-FORTRAN (HSPF) software, was developed to simulate flows of water, sediment, and total mercury. Results from the investigation and numerical model indicate that contaminated flood-plain soils along the riverbank are the largest source of mercury to the river. Mercury associated with sediment accounts for 96 percent of the annual downstream mercury load (181 of 189 kilograms per year) at the mouth of the South River. Atmospherically deposited mercury contributes a smaller load (less than 1 percent) as do point sources, including current discharge from the historic industrial source area. In order to determine how reductions of mercury loading to the stream could reduce methylmercury concentrations in fish tissue below the U.S. Environmental Protection Agency criterion of 0.3 milligrams per kilogram, multiple scenarios were simulated. Bioaccumulation of mercury was expressed with a site-specific exponential relation between aqueous total mercury and methylmercury in smallmouth bass, the indicator fish species. Simulations indicate that if mercury loading were to decrease by 98.9 percent from 189
Henry Finlay Godson
2016-01-01
Full Text Available The advent of modern technologies in radiotherapy poses an increased challenge in the determination of dosimetric parameters of small fields, which exhibit a high degree of uncertainty. Percent depth dose and beam profiles were acquired using different detectors in two different orientations. Parameters such as relative surface dose (DS), depth of dose maximum (Dmax), percentage dose at 10 cm (D10), penumbral width, flatness, and symmetry were evaluated with different detectors. The dosimetric data were acquired for fields defined by jaws alone, by multileaf collimator (MLC) alone, and by MLC while the jaws were positioned at 0, 0.25, 0.5, and 1.0 cm away from the MLC leaf-end, using a Varian linear accelerator with a 6 MV photon beam. The accuracy in the measurement of dosimetric parameters with various detectors for the three field definitions was evaluated. The relative DS (38.1%) with the photon field diode in parallel orientation was higher than the electron field diode (EFD) value (27.9%) for the 1 cm × 1 cm field. An overestimation of 5.7% and 8.6% in the D10 depth dose was observed for the 1 cm × 1 cm field with the RK ion chamber in parallel and perpendicular orientation, respectively, for the fields defined by MLC with the jaws positioned at the edge of the field, when compared to EFD values in parallel orientation. For this field definition, the in-plane penumbral widths obtained with the ion chamber in parallel and perpendicular orientation were 3.9 mm and 5.6 mm for the 1 cm × 1 cm field, respectively. Among all detectors used in the study, the unshielded diodes were found to be the appropriate choice of detector for the measurement of beam parameters in small fields.
Centi, Amanda J; Booth, Sarah L; Gundberg, Caren M; Saltzman, Edward; Nicklas, Barbara; Shea, M Kyla
2015-12-01
Osteocalcin (OC) is a vitamin K-dependent bone protein used as a marker of bone formation. Mouse models have demonstrated a role for the uncarboxylated form of OC (ucOC) in energy metabolism, including energy expenditure and adiposity, but human data are equivocal. The purpose of this study was to determine the associations between changes in measures of OC and changes in body weight and percent body fat in obese, but otherwise healthy post-menopausal women undergoing a 20-week weight loss program. All participants received supplemental vitamins K and D and calcium. Body weight and body fat percentage (%BF) were assessed before and after the intervention. Serum OC [(total (tOC), ucOC, percent uncarboxylated (%ucOC)], and procollagen type 1N-terminal propeptide (P1NP; a measure of bone formation) were measured. Women lost an average of 10.9 ± 3.9 kg and 4 %BF. Serum concentrations of tOC, ucOC, %ucOC, and P1NP did not significantly change over the twenty-week intervention, nor were these measures associated with changes in weight (all p > 0.27) or %BF (all p > 0.54). Our data do not support an association between any serum measure of OC and weight or %BF loss in post-menopausal women supplemented with nutrients implicated in bone health.
Sarah H Peterson
Full Text Available Persistent organic pollutants, including polychlorinated biphenyls (PCBs), are widely distributed and detectable far from anthropogenic sources. Northern elephant seals (Mirounga angustirostris) biannually travel thousands of kilometers to forage in coastal and open-ocean regions of the northeast Pacific Ocean and then return to land, where they fast while breeding and molting. Our study examined potential effects of age, adipose percent, and the difference between the breeding and molting fasts on PCB concentrations and congener profiles in blubber and serum of northern elephant seal females. Between 2005 and 2007, we sampled blubber and blood from 58 seals before and after a foraging trip, which were then analyzed for PCBs. Age did not significantly affect total PCB concentrations; however, the proportion of PCB congeners with different numbers of chlorine atoms was significantly affected by age, especially in the outer blubber. Younger adult females had a significantly greater proportion of low-chlorinated PCBs (tri-, tetra-, and penta-CBs) than older females, with the opposite trend observed for hepta-CBs, indicating that an age-associated process such as parity (birth) may significantly affect congener profiles. The percent of adipose tissue had a significant relationship with inner blubber PCB concentrations, with the highest mean concentrations observed at the end of the molting fast. These results highlight the importance of sampling across the entire blubber layer when assessing contaminant levels in phocid seals and taking into account the adipose stores and reproductive status of an animal when conducting contaminant research.
Seabury, Seth A; Helland, Eric; Jena, Anupam B
2014-11-01
The impact of medical malpractice reforms on the average size of malpractice payments in specific physician specialties is unknown and subject to debate. We analyzed a national sample of malpractice claims for the period 1985-2010, merged with information on state liability reforms, to estimate the impact of state noneconomic damages caps on average malpractice payment size for physicians overall and for ten different specialty categories. We then compared how the effects differed according to the restrictiveness of the cap ($250,000 versus $500,000). We found that, overall, noneconomic damages caps reduced average payments by $42,980 (15 percent), compared to having no cap at all. A more restrictive $250,000 cap reduced average payments by $59,331 (20 percent), and a less restrictive $500,000 cap had no significant effect, compared to no cap at all. The effect of the caps overall varied according to specialty, with the largest impact being on claims involving pediatricians and the smallest on claims involving surgical subspecialties and ophthalmologists.
Abidullah ABID
2017-02-01
Low savings by the bottom 40 percent of Bumiputera have led to low wealth accumulation and greater wealth inequality. Behind the low savings are increases in food prices, taxes, and interest rates for borrowers. In response to these problems, the Malaysian government provides the cash transfer BRIM as one of its redistributive measures. However, this is still not enough, as many of the bottom 40 percent are unable to avail themselves of the facility. In such circumstances, the role of community-based cash and credit transfer schemes such as cash waqf can be more fruitful. However, due to the inefficiency of awqaf institutions in financial management, the system cannot contribute effectively. Therefore, the main aim of this paper is to provide an efficient mechanism which can ensure effectiveness in terms of transparency, collection, and distribution of cash endowments and their benefits. By means of library research, we propose a conceptual framework. It is suggested that collection and distribution of cash waqf through a crowdfunding platform would address both the transparency and the collection and distribution issues. This study provides a ground for researchers and practitioners who are working on finding the right approach to implement waqf in a more efficient way.
Herda, Trent J; Walter, Ashley A; Costa, Pablo B; Ryan, Eric D; Hoge, Katherine M; Stout, Jeffrey R; Cramer, Joel T
2011-10-01
The purpose of this study was to examine the sensitivity and peak force prediction capability of the interpolated twitch technique (ITT) performed during submaximal and maximal voluntary contractions (MVCs) in subjects with the ability to maximally activate their plantar flexors. Twelve subjects performed two MVCs and nine submaximal contractions with the ITT method to calculate percent voluntary inactivation (%VI). Additionally, two MVCs were performed without the ITT. Polynomial models (linear, quadratic and cubic) were applied to the 10-90% VI and 40-90% VI versus force relationships to predict force. Peak force from the ITT MVC was 6.7% less than peak force from the MVC without the ITT. Fifty-eight percent of the 10-90% VI versus force relationships were best fit with nonlinear models; however, all 40-90% VI versus force relationships were best fit with linear models. Regardless of the polynomial model or the contraction intensities used to predict force, all models underestimated the actual force from 22% to 28%. There was low sensitivity of the ITT method at high contraction intensities and the predicted force from polynomial models significantly underestimated the actual force. Caution is warranted when interpreting the % VI at high contraction intensities and predicted peak force from submaximal contractions.
Schale, Stephen P; Le, Trang M; Pierce, Karisa M
2012-05-30
The two main goals of the analytical method described herein were to (1) use principal component analysis (PCA), hierarchical clustering (HCA) and K-nearest neighbors (KNN) to determine the feedstock source of blends of biodiesel and conventional diesel (feedstocks were two sources of soy, two strains of jatropha, and a local feedstock) and (2) use a partial least squares (PLS) model built specifically for each feedstock to determine the percent composition of the blend. The chemometric models were built using training sets composed of total ion current chromatograms from gas chromatography-quadrupole mass spectrometry (GC-qMS) using a polar column. The models were used to semi-automatically determine feedstock and blend percent composition of independent test set samples. The PLS predictions for jatropha blends had RMSEC=0.6, RMSECV=1.2, and RMSEP=1.4. The PLS predictions for soy blends had RMSEC=0.5, RMSECV=0.8, and RMSEP=1.2. The average relative error in predicted test set sample compositions was 5% for jatropha blends and 4% for soy blends.
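The reported error metrics are root-mean-square errors over the calibration set (RMSEC), the cross-validation folds (RMSECV), and the independent prediction set (RMSEP). A minimal sketch of the computation, with invented blend compositions and predictions:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between known and predicted blend percents."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical blend compositions (% biodiesel) and PLS predictions.
cal_true  = [0, 5, 10, 20, 50, 100]               # calibration (training) set
cal_pred  = [0.4, 5.5, 9.3, 20.6, 49.5, 100.2]
test_true = [15, 40, 75]                          # independent test set
test_pred = [16.2, 38.9, 76.8]

print("RMSEC =", round(rmse(cal_true, cal_pred), 2))    # calibration error
print("RMSEP =", round(rmse(test_true, test_pred), 2))  # prediction error
```

RMSECV is computed the same way, with the predictions taken from held-out cross-validation folds of the training set.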
Model of optical response of marine aerosols to Forbush decreases
T. Bondo
2009-10-01
In order to elucidate the effect of galactic cosmic rays on cloud formation, we investigate, by means of modeling, the optical response of marine aerosols to Forbush decreases – abrupt decreases in galactic cosmic rays. We vary the nucleation rate of new aerosols, in a sectional coagulation and condensation model, according to changes in ionization by the Forbush decrease. From the resulting size distribution we then calculate the aerosol optical thickness and Angstrom exponent for the wavelength pairs 350, 450 nm and 550, 900 nm. For the shorter wavelength pair we observe a change in Angstrom exponent, following the Forbush decrease, of −6 to +3% in the cases with atmospherically realistic output parameters. For some parameters we also observe a delay in the change of Angstrom exponent, compared to the maximum of the Forbush decrease, which is caused by different sensitivities of the probing wavelengths to changes in aerosol number concentration and size. For the long wavelengths these changes are generally smaller. The types and magnitude of change are investigated for a suite of nucleation rates, condensable gas production rates, and aerosol loss rates. Furthermore, we compare the model output with observations of 5 of the largest Forbush decreases after the year 2000. For the 350, 450 nm pair we use AERONET data and find a comparable change in signal, while the Angstrom exponent is lower in the model than in the data, due to AERONET being mainly sampled over land. For 550, 900 nm we compare with both AERONET and MODIS and find little to no response in both model and observations. In summary, our study shows that the optical properties of aerosols show a distinct response to Forbush decreases, assuming that the nucleation of fresh aerosols is driven by ions. Shorter wavelengths seem more favorable for observing these effects, and great care should be taken when analyzing observations in order to avoid the signal being drowned out by noise.
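The Angstrom exponent for a wavelength pair follows from the power-law assumption tau(lam) ∝ lam**(-alpha), giving alpha = -ln(tau1/tau2)/ln(lam1/lam2). A minimal sketch with invented aerosol optical thickness (AOT) values for the study's two pairs:

```python
import math

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """Angstrom exponent for an AOT pair, assuming tau(lam) ~ lam**(-alpha)."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# Hypothetical AOT values at the two wavelength pairs used in the study.
alpha_short = angstrom_exponent(0.12, 0.10, 350e-9, 450e-9)  # 350, 450 nm
alpha_long = angstrom_exponent(0.08, 0.05, 550e-9, 900e-9)   # 550, 900 nm
print(round(alpha_short, 2), round(alpha_long, 2))
```

Because the exponent depends on the ratio of optical thicknesses, a small Forbush-driven change in the aerosol size distribution shifts the two wavelength pairs by different amounts, which is the sensitivity difference the abstract describes.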
Model of optical response of marine aerosols to Forbush decreases
T. Bondo
2010-03-01
In order to elucidate the effect of galactic cosmic rays on cloud formation, we investigate, by means of modeling, the optical response of marine aerosols to Forbush decreases – abrupt decreases in galactic cosmic rays. We vary the nucleation rate of new aerosols, in a sectional coagulation and condensation model, according to changes in ionization by the Forbush decrease. From the resulting size distribution we then calculate the aerosol optical thickness and Angstrom exponent for the wavelength pairs 350, 450 nm and 550, 900 nm. In the cases where the output parameters from the model compare best with atmospheric observations we observe, for the shorter wavelength pair, a change in Angstrom exponent, following the Forbush decrease, of −6 to +3%. In some cases we also observe a delay in the change of Angstrom exponent, compared to the maximum of the Forbush decrease, which is caused by different sensitivities of the probing wavelengths to changes in aerosol number concentration and size. For the long wavelengths these changes are generally smaller. The types and magnitude of change are investigated for a suite of nucleation rates, condensable gas production rates, and aerosol loss rates. Furthermore, we compare the model output with observations of 5 of the largest Forbush decreases after the year 2000. For the 350, 450 nm pair we use AERONET data and find a comparable change in signal, while the Angstrom exponent is lower in the model than in the data, due to AERONET being mainly sampled over land. For 550, 900 nm we compare with both AERONET and MODIS and find little to no response in both model and observations. In summary, our study shows that the optical properties of aerosols show a distinct response to Forbush decreases, assuming that the nucleation of fresh aerosols is driven by ions. Shorter wavelengths seem more favorable for observing these effects, and great care should be taken when analyzing observations in order to avoid the signal being drowned out by noise.
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole of ℝ^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
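The base operation being extended, maximum intensity projection, reduces in the axis-aligned case to taking a maximum along the viewing axis of the volume; the synthetic data below are illustrative only, and the pyramid scheme in the abstract adds multiresolution morphological analysis on top of this:

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((32, 64, 64))   # synthetic 3-D scalar volume (z, y, x)
volume[10, 20, 30] = 5.0            # one bright voxel (e.g. a vessel)

mip = volume.max(axis=0)            # project: maximum along the z (view) axis
print(mip.shape)                    # (64, 64)
print(mip[20, 30])                  # 5.0: the bright voxel survives projection
```

MIP is attractive for vascular data precisely because the brightest structure along each ray dominates the projected image, regardless of depth.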
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. We describe the maximum entropy approach in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
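As a rough sketch of the screening logic (not the paper's GridLAB-D workflow), the following bisects for the largest PV penetration that keeps a toy feeder-voltage model inside ANSI Range A; the linear voltage-rise model and its coefficients are invented for illustration:

```python
# Bisection for the largest PV penetration (fraction of peak load) that keeps
# feeder voltage inside ANSI C84.1 Range A (0.95-1.05 pu). The voltage model
# is a hypothetical linear stand-in for a full time-series feeder simulation.

V_MIN, V_MAX = 0.95, 1.05

def end_of_feeder_voltage(pv_fraction):
    """Assumed per-unit voltage at the feeder end vs. PV penetration."""
    base_voltage = 0.98      # assumed voltage with no PV
    rise_per_unit_pv = 0.12  # assumed pu rise per 100% penetration
    return base_voltage + rise_per_unit_pv * pv_fraction

def max_penetration(voltage_model, lo=0.0, hi=2.0, tol=1e-4):
    """Largest penetration with no steady-state voltage violation
    (assumes violations are monotone in penetration)."""
    if not V_MIN <= voltage_model(lo) <= V_MAX:
        raise ValueError("feeder violates limits even with no PV")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if V_MIN <= voltage_model(mid) <= V_MAX:
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_penetration(end_of_feeder_voltage), 3))
```

The actual study repeats this kind of feasibility check against a year of hourly power-flow solutions, with current limits from protection devices checked alongside the voltage band.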
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation ... data sets that differ considerably in the magnitude.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
"Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses a photovoltaic array as the source of electrical power supply. Every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage. A maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a PV array of 121.6 V DC (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage by switch-mode power converters and inverters.
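One common MPPT algorithm, perturb and observe, can be sketched as follows; the P-V curve and step size below are hypothetical, and a real tracker would perturb a converter duty cycle rather than the array voltage directly:

```python
# Perturb-and-observe (P&O) tracking on a hypothetical concave P-V curve.

def pv_power(v):
    """Toy power-voltage curve with a single maximum at 17 V (100 W)."""
    return max(0.0, 100.0 - 0.35 * (v - 17.0) ** 2)

def perturb_and_observe(v0=10.0, step=0.1, iters=200):
    """Step the operating voltage; reverse direction whenever power drops."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power fell: perturb the other way
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(round(v_mpp, 1), round(p_mpp, 2))   # settles near the 17 V maximum
```

The tracker ends up oscillating within one step of the maximum power point; the step size trades convergence speed against the size of that steady-state oscillation.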
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction both in the time and short-time Fourier transform (STFT) domains with one single microphone and multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR ... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
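In the time domain, the maximum-SNR filter is the filter w maximizing the output SNR (w'Rx w)/(w'Rv w), i.e. the principal generalized eigenvector of the signal/noise correlation-matrix pencil. The signals below are toy stand-ins, not microphone data:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
L = 8                                     # filter length
s = np.sin(0.3 * np.arange(4000))         # toy "speech": a sinusoid
v = 0.5 * rng.standard_normal(4000)       # toy noise

def corr_matrix(x, length):
    """Sample correlation matrix of length-`length` frames of x."""
    frames = np.lib.stride_tricks.sliding_window_view(x, length)
    return frames.T @ frames / len(frames)

Rx, Rv = corr_matrix(s, L), corr_matrix(v, L)
eigvals, eigvecs = eigh(Rx, Rv)           # generalized symmetric eigenproblem
w = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

snr_in = s.var() / v.var()
snr_out = (w @ Rx @ w) / (w @ Rv @ w)     # equals the largest eigenvalue
print(snr_in, snr_out)                    # the filter raises the SNR
```

The SNR gain comes with no constraint on the desired signal, which is why the output can be heavily distorted; that distortion-versus-SNR trade-off is what the abstract quantifies with PESQ.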
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity via a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is applied to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
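A minimal sketch of the estimation step, assuming a standard EM implementation of maximum likelihood for a two-component normal mixture and using synthetic data rather than the actual price series:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic data: two latent classes centered at -2 and 3.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])

def normal_pdf(x, mu, sd):
    """Density of N(mu, sd^2) evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def em_two_normals(x, iters=100):
    """Maximum likelihood fit of a two-component normal mixture via EM."""
    w = np.array([0.5, 0.5])              # mixing weights
    mu = np.array([x.min(), x.max()])     # widely separated initial means
    sd = np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        dens = np.stack([w[k] * normal_pdf(x, mu[k], sd[k]) for k in range(2)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

w, mu, sd = em_two_normals(x)
print(np.sort(mu))   # component means, roughly [-2, 3]
```

Each EM iteration increases the likelihood, so with well-separated starting means the fit recovers the two latent classes and their mixing weights.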
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: the estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and from South America peaks in May in both regions and is directly responsible for the O3 maximum over the western ESIO, while the lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximizing solution of the model is exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches the maximal possible value, and that the frequency distribution of maximum entropy is equivalent to the distribution given by the Hardy-Weinberg equilibrium law at one locus. They further assumed that the frequency distribution of maximum entropy is equivalent to all genetic equilibrium distributions. This is incorrect, however. The frequency distribution of maximum entropy is only equivalent to the distribution of Hardy-Weinberg equilibrium with respect to one locus or several limited loci. The case with regard to limited loci is proved in this paper. Finally, we also discuss an example where the maximum entropy principle is not equivalent to other genetic equilibria.
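The single-locus claim can be checked numerically. Assuming the entropy is taken over ordered allele pairs (so the heterozygote carries a two-fold multiplicity), a grid search over all genotype distributions with a fixed allele frequency p recovers the Hardy-Weinberg value f_AA = p^2:

```python
import numpy as np

def genotype_entropy(t, p):
    """Entropy of genotype frequencies (f_AA, f_Aa, f_aa) = (t, 2(p-t), 1-2p+t),
    counting the heterozygote's two-fold multiplicity (equivalently, the
    entropy of the underlying ordered allele-pair distribution)."""
    f = np.array([t, 2.0 * (p - t), 1.0 - 2.0 * p + t])
    g = np.array([1.0, 2.0, 1.0])            # genotype multiplicities
    mask = f > 0
    return float(-(f[mask] * np.log(f[mask] / g[mask])).sum())

p = 0.3                                       # allele frequency of A
lo = max(0.0, 2.0 * p - 1.0)                  # feasibility bound on f_AA
ts = np.linspace(lo + 1e-6, p - 1e-6, 20001)  # all feasible f_AA values
t_best = ts[np.argmax([genotype_entropy(t, p) for t in ts])]
print(round(t_best, 3), round(p ** 2, 3))     # entropy argmax vs. p^2
```

Setting the derivative of this entropy to zero gives (p - t)^2 = t(1 - 2p + t), which is satisfied exactly at t = p^2, i.e. at the Hardy-Weinberg frequencies (p^2, 2pq, q^2).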
CO2 maximum in the oxygen minimum zone (OMZ)
Paulmier, A.; Ruiz-Pino, D.; Garçon, V.
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the budget of oceanic sources and sinks of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs (O2 < ... μmol kg−1) in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002) and a monthly monitoring (2000–2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the oxycline by a "carbon excess" induced by a specific remineralization. Indeed, a possible co-existence of bacterial heterotrophic and autotrophic processes usually occurring at different depths could ...
CO2 maximum in the oxygen minimum zone (OMZ
V. Garçon
2011-02-01
Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the budget of oceanic sources and sinks of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the main most intense OMZs (O2 < ... μmol kg−1) in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002) and a monthly monitoring (2000–2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates.
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm
Megchelenbrink, Wout; Rossell, Sergio; Huynen, Martijn A.
2015-01-01
Motivation: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determine the space of allowable reaction rates. This space is often large, and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed to select flux distributions that are in agreement with experimental “omics” data, or by incorporating experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more “flexible” metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Results: Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that (i) indeed, most of the measured fluxes agree with a high adaptability of the network, (ii) this result can be used to further reduce the space of feasible solutions, and (iii) this reduced space improves the quantitative predictions ...
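The constraint-based setup that FBA and MMF both build on can be sketched as a linear program; the 3-metabolite, 4-reaction network below is invented for illustration and stands in for a genome-scale model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA problem. Reactions (all irreversible):
#   R1: (uptake) -> A    R2: A -> B    R3: A -> C    R4: B + C -> (biomass)
# Rows of S are metabolites A, B, C; columns are reactions R1-R4.
S = np.array([
    [1, -1, -1,  0],   # A: produced by R1, consumed by R2 and R3
    [0,  1,  0, -1],   # B: produced by R2, consumed by R4
    [0,  0,  1, -1],   # C: produced by R3, consumed by R4
])
bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10

# FBA: maximize the biomass flux v4 subject to steady state S v = 0.
res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print(res.x, -res.fun)   # optimal fluxes and maximal biomass flux
```

Here steady state forces v1 = v2 + v3 and v2 = v3 = v4, so the uptake cap of 10 yields a maximal biomass flux of 5 with fluxes (10, 5, 5, 5); in realistic models the optimum is typically degenerate, which is precisely the solution-space ambiguity that flexibility-based methods such as MMF exploit.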
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm.
Wout Megchelenbrink
Full Text Available Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determines the space of allowable reaction rates. This space is often large, and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed that select flux distributions in agreement with experimental "omics" data, or that incorporate experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more "flexible" metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources. A small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that i) indeed, most of the measured fluxes agree with a high adaptability of the network, ii) this result can be used to further reduce the space of feasible solutions, and iii) this reduced space improves the quantitative predictions made by FBA and
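As background to the two abstracts above: flux balance analysis reduces to a linear program that maximizes an objective flux subject to steady-state stoichiometry (S·v = 0) and flux bounds. A minimal sketch with SciPy, using an invented three-reaction toy network (not the authors' MMF method or their genome-scale models):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network, for illustration only:
#   R1: uptake -> A        (capped by nutrient availability)
#   R2: A -> biomass       (the "growth" objective)
#   R3: A -> byproduct
# Stoichiometric matrix S: one row per metabolite (here only A),
# one column per reaction.
S = np.array([[1.0, -1.0, -1.0]])

bounds = [(0, 10), (0, None), (0, None)]  # flux capacity constraints
c = [0.0, -1.0, 0.0]                      # linprog minimizes, so negate growth

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
v = res.x  # optimal flux distribution: all uptake routed to biomass
print(v)   # [10., 10., 0.]
```

The steady-state constraint S·v = 0 forces v1 = v2 + v3, so maximizing growth drives the byproduct flux to zero in this toy case.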
Volumetric Concentration Maximum of Cohesive Sediment in Waters: A Numerical Study
Jisun Byun
2014-12-01
Full Text Available Cohesive sediment has different characteristics compared to non-cohesive sediment. The density and size of a cohesive sediment aggregate (a so-called floc) continuously change through the flocculation process. The variation of floc size and density can cause a change of volumetric concentration under the condition of constant mass concentration. This study investigates how the volumetric concentration is affected by different conditions such as flow velocity, water depth, and sediment suspension. A previously verified, one-dimensional vertical numerical model is utilized here. The flocculation process is also considered, using a floc growth-type flocculation model. Idealized conditions are assumed in this study for the numerical experiments. The simulation results show that the volumetric concentration profile of cohesive sediment differs from the Rouse profile. The volumetric concentration decreases near the bed, showing an elevated maximum, in the cases of both current and oscillatory flow. The density and size of the flocs show minimum and maximum values, respectively, near the elevation of the volumetric concentration maximum. This study also shows that the flow velocity and the critical shear stress have significant effects on the elevated maximum of volumetric concentration. As mechanisms of the elevated maximum, strong turbulence intensity and increased mass concentration are considered, because they enhance the flocculation process. This study uses numerical experiments; to the best of our knowledge, no laboratory or field experiments on the elevated maximum have been carried out until now. It is of great necessity to conduct well-controlled laboratory experiments in the near future.
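For reference, the Rouse profile that the abstract above contrasts against is the classical equilibrium concentration profile for non-cohesive suspended sediment, C(z)/C_a = [((h−z)/z)·(a/(h−a))]^P with Rouse number P = w_s/(κ·u*). A minimal sketch with assumed, illustrative parameter values (not taken from the paper):

```python
import numpy as np

def rouse_profile(z, h, a, C_a, w_s, u_star, kappa=0.41):
    """Classical Rouse profile for suspended sediment concentration.

    z      : heights above the bed (m), a < z < h
    h      : water depth (m)
    a      : reference height with known concentration C_a (m)
    w_s    : settling velocity (m/s)
    u_star : shear velocity (m/s)
    kappa  : von Karman constant
    """
    P = w_s / (kappa * u_star)  # Rouse number
    return C_a * ((h - z) / z * a / (h - a)) ** P

# Illustrative values: 1 m depth, reference concentration at z = 0.05 m
z = np.linspace(0.05, 0.95, 10)
conc = rouse_profile(z, h=1.0, a=0.05, C_a=1.0, w_s=0.001, u_star=0.05)
```

Unlike the elevated maximum reported for cohesive sediment, this profile decreases monotonically from the bed upward, which is the contrast the study draws.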
Imam Santoso
2010-06-01
Full Text Available Extraction of silver(I) from black-and-white printing photographic waste has been studied by the emulsion liquid membrane technique. The emulsion composition in the membrane phase was kerosene as solvent, sorbitan monooleate (Span 80) as surfactant, and dimethyldioctadecylammonium bromide as carrier, with HNO3 as the internal phase. The optimum conditions obtained were: internal-phase to membrane-phase volume ratio 1:1; surfactant concentration 2% (v/v); emulsification time 20 s; emulsion stirring rate 1100 rpm; emulsion rest time 3 s; emulsion to external-phase volume ratio 1:5; emulsion contact rate 500 rpm; emulsion contact time 40 s; silver thiosulfate concentration in the external phase 100 ppm; external-phase pH 3; and internal-phase pH 1. These optimum conditions were applied to silver(I) extraction from black-and-white printing photographic waste, giving an average extraction of 77.33%, with an average of 56.06% of the silver(I) in the internal phase and 22.66% in the external phase. The effect of matrix ions decreased the average silver(I) percent extraction from 96.37% to 77.33%. Keywords: photographic waste, silver extraction