49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Scientific substantiation of maximum allowable concentration of fluopicolide in water
Pelo I.M.
2014-03-01
Research was carried out to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The effects of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes were determined, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the threshold concentration of fluopicolide in water by the organoleptic hazard index (limiting criterion: smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.
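The derivation above amounts to taking the most restrictive of the reported thresholds as the maximum allowable concentration. A minimal sketch (values from the abstract; labels are paraphrased):

```python
# Hedged sketch: the MAC is driven by the most restrictive hazard index.
# Threshold concentrations (mg/dm^3) as reported in the abstract above.
thresholds = {
    "organoleptic (smell)": 0.15,
    "general sanitary": 0.015,
    "sanitary-toxicological (max non-effective)": 0.14,
}

# The limiting criterion is whichever index has the lowest threshold.
limiting_index, mac = min(thresholds.items(), key=lambda kv: kv[1])
print(limiting_index, mac)  # -> general sanitary 0.015
```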
Maximum Allowable Dynamic Load of Mobile Manipulators with Stability Consideration
Heidary H. R.
2015-09-01
High payload-to-mass ratio is one of the advantages of mobile robot manipulators. In this paper, a general formula for finding the maximum allowable dynamic load (MADL) of a wheeled mobile robot is presented. Mobile manipulators operating in field environments will be required to manipulate large loads and to perform such tasks on uneven terrain, which may cause the system to reach dangerous tip-over instability. Therefore, the method is extended to find the MADL of mobile manipulators with stability consideration. The Moment-Height Stability (MHS) criterion is used as an index of system stability. The full dynamic model of the wheeled mobile base and mounted manipulator is considered, with respect to the dynamics of the non-holonomic constraint. Then, a method for determination of the maximum allowable loads is described, subject to actuator constraints and with the stability limitation imposed as a new constraint. The actuator torque constraint is applied by using the speed-torque characteristic curve of a typical DC motor. In order to verify the effectiveness of the presented algorithm, several simulation studies of a two-link planar manipulator mounted on a mobile base are presented and the results are discussed.
Evaluation of maximum allowable temperature inside basket of dry storage module for CANDU spent fuel
Lee, Kyung Ho; Yoon, Jeong Hyoun; Chae, Kyoung Myoung; Choi, Byung Il; Lee, Heung Young; Song, Myung Jae [Nuclear Environment Technology Institute, Taejon (Korea, Republic of)]; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)]
2002-10-01
This study provides a maximum allowable fuel temperature through a preliminary evaluation of the UO2 weight gain that may occur on a failed (breached sheathing) element of a fuel bundle. Intact bundles would not be affected, as the UO2 would not be in contact with the air in the fuel storage basket. The analysis is made for the MACSTOR/KN-400 to be operated in Wolsong ambient air temperature conditions. The design basis fuel is a 6-year cooled fuel bundle that, on average, has reached a burnup of 7,800 MWd/MTU. The fuel bundle considered for analysis is assumed to have a high burnup of 12,000 MWd/MTU and to be located in a hot basket. The MACSTOR/KN-400 has the same air circuit as the MACSTOR, and the air circuit will require a slightly higher temperature difference to reject the increased heat load. The maximum temperature of a high burnup bundle stored in the new MACSTOR/KN-400 is expected to be about 9 °C higher than the fuel temperature of the MACSTOR at an equivalent constant ambient temperature. This temperature increase will in turn increase the UO2 weight gain from 0.06% (MACSTOR for Wolsong conditions) to an estimated 0.13% for the MACSTOR/KN-400. Compared to an acceptable UO2 weight gain of 0.6%, we thus expect to maintain a very acceptable safety factor of 4 to 5 for the new module against unacceptable stresses in the fuel sheathing. Based on the UO2 weight gain, the maximum allowable fuel temperature was shown to be 164 °C.
47 CFR 65.700 - Determining the maximum allowable rate of return.
2010-10-01
... CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Maximum Allowable Rates of Return § 65.700 Determining the maximum allowable rate of return. (a) The maximum allowable rate of return for any exchange carrier's earnings on any access service category shall...
46 CFR 52.01-55 - Increase in maximum allowable working pressure.
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Increase in maximum allowable working pressure. 52.01-55... POWER BOILERS General Requirements § 52.01-55 Increase in maximum allowable working pressure. (a) When the maximum allowable working pressure of a boiler has been established, an increase in the pressure...
Approximating the maximum weight clique using replicator dynamics.
Bomze, I R; Pelillo, M; Stix, V
2000-01-01
Given an undirected graph with weights on the vertices, the maximum weight clique problem (MWCP) is to find a subset of mutually adjacent vertices (i.e., a clique) having the largest total weight. This is a generalization of the classical problem of finding the maximum cardinality clique of an unweighted graph, which arises as a special case of the MWCP when all the weights associated to the vertices are equal. The problem is known to be NP-hard for arbitrary graphs and, according to recent theoretical results, so is the problem of approximating it within a constant factor. Although there has recently been much interest around neural-network algorithms for the unweighted maximum clique problem, no effort has been directed so far toward its weighted counterpart. In this paper, we present a parallel, distributed heuristic for approximating the MWCP based on dynamics principles developed and studied in various branches of mathematical biology. The proposed framework centers around a recently introduced continuous characterization of the MWCP which generalizes an earlier remarkable result by Motzkin and Straus. This allows us to formulate the MWCP (a purely combinatorial problem) in terms of a continuous quadratic programming problem. One drawback associated with this formulation, however, is the presence of "spurious" solutions, and we present characterizations of these solutions. To avoid them we introduce a new regularized continuous formulation of the MWCP inspired by previous works on the unweighted problem, and show how this approach completely solves the problem. The continuous formulation of the MWCP naturally maps onto a parallel, distributed computational network whose dynamical behavior is governed by the so-called replicator equations. These are dynamical systems introduced in evolutionary game theory and population genetics to model evolutionary processes on a macroscopic scale. We present theoretical results which guarantee that the solutions provided by
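The replicator iteration described above can be sketched for the plain unweighted Motzkin-Straus case (the paper's weighted and regularized variants modify the payoff matrix). The toy graph below is an arbitrary illustration, not taken from the paper:

```python
import numpy as np

# Hedged sketch: discrete replicator dynamics on the Motzkin-Straus program
# max x^T A x over the simplex, for the unweighted maximum clique problem.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)  # adjacency: triangle {0, 1, 2} plus a disjoint edge {3, 4}

x = np.full(5, 0.2)           # start at the simplex barycenter
for _ in range(200):          # replicator update: x_i <- x_i (Ax)_i / x^T A x
    payoff = A @ x
    x = x * payoff / (x @ payoff)

support = np.where(x > 1e-6)[0]      # vertices of the recovered clique
omega = round(1 / (1 - x @ A @ x))   # Motzkin-Straus: max x^T A x = 1 - 1/omega
print(support, omega)                # -> [0 1 2] 3
```

At a fixed point supported on a maximal clique, the payoffs of all surviving vertices equalize, which is why the clique number can be read off the quadratic form's value.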
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
Exploring the Constrained Maximum Edge-weight Connected Graph Problem
Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen
2009-01-01
Given an edge-weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, i.e. the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), which is an MECG whose candidate subgraphs must include a given set of k edges, then also called the k-CMECG. We formulate the k-CMECG into an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm respectively. We also propose a heuristic algorithm for the k-CMECG problem. Some simulations have been done to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to the solution of the general MECG problem.
COMPASS' new magnet is placed inside the experiment, which will allow for maximum acceptance
Maximilien Brice
2005-01-01
A new magnet at CERN is going to allow COMPASS (Common Muon Proton Apparatus for Structure and Spectroscopy) maximum acceptance. Thanks to the 5-tonne, 2.5 m long magnet, which arrived last December, many more events are expected compared to the previous data-taking.
2012-09-13
... Paperwork Reduction Act (44 U.S.C. 3501 et seq.); Is certified as not having a significant economic impact... into the new Missouri rule include: --10 CSR 10-2.040, Maximum Allowable Emission of Particulate Matter from Fuel Burning Equipment Used for Indirect Heating, for the Kansas City Metropolitan Area; --10 CSR...
49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.
2010-10-01
... operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating... design pressure of the weakest element in the segment, determined in accordance with subparts C and D of... K of this part, if any variable necessary to determine the design pressure under the design...
2010-10-01
... distribution systems. (a) No person may operate a low-pressure distribution system at a pressure high enough to...) No person may operate a low pressure distribution system at a pressure lower than the minimum... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum and minimum allowable operating...
49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.
2010-10-01
... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage,...
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Iammarino, Marco; Di Taranto, Aurelia; Muscarella, Marilena
2012-02-01
Sulphiting agents are commonly used food additives. They are not allowed in fresh meat preparations. In this work, 2250 fresh meat samples were analysed to establish the maximum concentration of sulphites that can be considered as "natural" and therefore be admitted in fresh meat preparations. The analyses were carried out by an optimised Monier-Williams method and the positive samples confirmed by ion chromatography. Sulphite concentrations higher than the screening method LOQ (10.0 mg·kg⁻¹) were found in 100 samples. Concentrations higher than 76.6 mg·kg⁻¹, attributable to sulphiting agent addition, were registered in 40 samples. Concentrations lower than 41.3 mg·kg⁻¹ were registered in 60 samples. Taking into account the distribution of sulphite concentrations obtained, it is plausible to estimate a maximum allowable limit of 40.0 mg·kg⁻¹ (expressed as SO₂). Below this value the samples can be considered as "compliant".
Lihui Guo
2015-01-01
With the increasing penetration of wind power, the randomness and volatility of wind power output have a growing impact on the safety and steady operation of the power system. To address the uncertainty of wind speed and load demand, this paper applies box-set robust optimization theory to determine the maximum allowable installed capacity of a wind farm, while constraints on node voltage and line capacity are considered. Optimization duality theory is used to simplify the model and convert the uncertain quantities in the constraints into certain quantities. For the case of multiple wind farms, a bilevel optimization model to calculate penetration capacity is proposed. Results on the IEEE 30-bus system show that the robust optimization model proposed in the paper is correct and effective, and indicate that the fluctuation ranges of wind speed and load, and the importance degree of the grid connection points of the wind farms and the load points, affect the allowable capacity of the wind farm.
Ingram, D D; Mussolino, M E
2010-06-01
The aim of this longitudinal study is to examine the relationship between weight loss from maximum body weight, body mass index (BMI), and mortality in a nationally representative sample of men and women. Longitudinal cohort study. In all, 6117 whites, blacks, and Mexican-Americans 50 years and over at baseline who survived at least 3 years of follow-up, from the Third National Health and Nutrition Examination Survey Linked Mortality Files (1988-1994, with passive mortality follow-up through 2000), were included. Measured body weight and self-reported maximum body weight were obtained at baseline. Weight loss (maximum body weight minus baseline weight) was categorized as <5%, 5-<15%, or ≥15%. Maximum BMI (reported maximum weight (kg)/measured baseline height (m)²) was categorized as healthy weight (18.5-24.9), overweight (25.0-29.9), and obese (≥30.0). In all, 1602 deaths were identified. After adjusting for age, race, smoking, health status, and preexisting illness, overweight men with weight loss of 15% or more, overweight women with weight loss of 5-<15%, and those with weight loss of 15% or more were at increased risk of death from all causes compared with those in the same BMI category who lost <5%. Weight loss of 15% or more from maximum body weight is associated with increased risk of death from all causes among overweight men and among women regardless of maximum BMI.
The Maximum Free Magnetic Energy Allowed in a Solar Active Region
Moore, Ronald L.; Falconer, David A.
2009-01-01
Two whole-active-region magnetic quantities that can be measured from a line-of-sight magnetogram are ^LWL_SG, a gauge of the total free energy in an active region's magnetic field, and ^LΦ, a measure of the active region's total magnetic flux. From these two quantities measured from 1865 SOHO/MDI magnetograms that tracked 44 sunspot active regions across the 0.5 R_Sun central disk, together with each active region's observed production of CMEs, X flares, and M flares, Falconer et al. (2009, ApJ, submitted) found that (1) active regions have a maximum attainable free magnetic energy that increases with the magnetic size ^LΦ of the active region, (2) in (Log ^LWL_SG, Log ^LΦ) space, CME/flare-productive active regions are concentrated in a straight-line main sequence along which the free magnetic energy is near its upper limit, and (3) X and M flares are restricted to large active regions. Here, from (a) these results, (b) the observation that even the greatest X flares produce at most only subtle changes in active region magnetograms, and (c) measurements from MSFC vector magnetograms and from MDI line-of-sight magnetograms showing that practically all sunspot active regions have nearly the same area-averaged magnetic field strength, ⟨B⟩ = Φ/A ≈ 300 G, where Φ is the active region's total photospheric flux of field stronger than 100 G and A is the area of that flux, we infer that (1) the maximum allowed ratio of an active region's free magnetic energy to its potential-field energy is 1, and (2) any one CME/flare eruption releases no more than a small fraction (less than 10%) of the active region's free magnetic energy. This work was funded by NASA's Heliophysics Division and NSF's Division of Atmospheric Sciences.
Impact of Maximum Allowable Cost on CO2 Storage Capacity in Saline Formations.
Mathias, Simon A; Gluyas, Jon G; Goldthorpe, Ward H; Mackay, Eric J
2015-11-17
Injecting CO2 into deep saline formations represents an important component of many greenhouse-gas-reduction strategies for the future. A number of authors have posed concern over the thousands of injection wells likely to be needed. However, a more important criterion than the number of wells is whether the total cost of storing the CO2 is market-bearable. Previous studies have sought to determine the number of injection wells required to achieve a specified storage target. Here an alternative methodology is presented whereby we specify a maximum allowable cost (MAC) per ton of CO2 stored, a priori, and determine the corresponding potential operational storage capacity. The methodology takes advantage of an analytical solution for pressure build-up during CO2 injection into a cylindrical saline formation, accounting for two-phase flow, brine evaporation, and salt precipitation around the injection well. The methodology is applied to 375 saline formations from the U.K. Continental Shelf. Parameter uncertainty is propagated using Monte Carlo simulation with 10 000 realizations for each formation. The results show that MAC affects both the magnitude and spatial distribution of potential operational storage capacity on a national scale. Different storage prospects can appear more or less attractive depending on the MAC scenario considered. It is also shown that, under high well-injection rate scenarios with relatively low cost, there is adequate operational storage capacity for the equivalent of 40 years of U.K. CO2 emissions.
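The Monte Carlo propagation step described above can be sketched generically: draw formation parameters from assumed distributions, price each realization with a cost model, and count how many fall under the MAC. The cost model and every parameter below are hypothetical placeholders, not the paper's values:

```python
import random

# Illustrative sketch (not the paper's model): propagate parameter uncertainty
# with Monte Carlo and count realizations whose storage cost per ton stays
# below a maximum allowable cost (MAC).
random.seed(1)

def cost_per_ton(permeability_md, thickness_m, well_cost_musd=30.0):
    """Toy model: higher injectivity spreads a fixed well cost over more CO2.
    MUSD per well divided by Mt stored per well gives USD per ton."""
    injectivity_mt = permeability_md * thickness_m / 1e4  # Mt CO2/well (toy)
    return well_cost_musd / max(injectivity_mt, 1e-6)     # USD per ton

mac = 10.0                                 # assumed MAC, USD per ton
n = 10_000                                 # realizations, as in the paper
within_mac = 0
for _ in range(n):
    k = random.lognormvariate(4.6, 0.8)    # permeability in mD, median ~100
    h = random.uniform(20.0, 200.0)        # formation thickness in m
    if cost_per_ton(k, h) <= mac:
        within_mac += 1
print(f"fraction of realizations within MAC: {within_mac / n:.2f}")
```

Raising `mac` monotonically grows the fraction of realizations counted as operational capacity, which is the qualitative effect the abstract reports at national scale.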
Immediate weight-bearing after osteosynthesis of proximal tibial fractures may be allowed
Haak, Karl Tobias; Palm, Henrik; Holck, Kim;
2012-01-01
Immediate weight-bearing following osteosynthesis of proximal tibial fractures is traditionally not allowed due to fear of articular fracture collapse. Anatomically shaped locking plates with sub-articular screws could improve stability and allow greater loading forces. The purpose of this study was to investigate if immediate weight-bearing can be allowed following locking plate osteosynthesis of proximal tibial fractures.
Estimation of Maximum Allowable PV Connection to LV Residential Power Networks
Demirok, Erhan; Sera, Dezso; Teodorescu, Remus
2011-01-01
Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either thermal limits of network components or grid voltage quality resulting from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar ... transformer or using solar inverters with new grid support features. This study presents a methodology for the estimation of maximum PV hosting capacity including an IEC 60076-7 based thermal model of the distribution transformer. A certain part of a real distribution network of the Braedstrup suburban area in Denmark is used in simulation as a case study model. Furthermore, varying solutions (utilizing thermally upgraded insulation paper in transformers, reactive power services from solar inverters, etc.) are implemented on the network under investigation to examine the PV penetration level, and key results learnt ...
Reymbaut, A.; Gagnon, A.-M.; Bergeron, D.; Tremblay, A.-M. S.
2017-03-01
The computation of transport coefficients, even in linear response, is a major challenge for theoretical methods that rely on analytic continuation of correlation functions obtained numerically in Matsubara space. While maximum entropy methods can be used for certain correlation functions, this is not possible in general, important examples being the Seebeck, Hall, Nernst, and Righi-Leduc coefficients. Indeed, positivity of the spectral weight on the positive real-frequency axis is not guaranteed in these cases. The spectral weight can even be complex in the presence of broken time-reversal symmetry. Various workarounds, such as the neglect of vertex corrections or the study of the infinite frequency or Kelvin limits, have been proposed. Here, we show that one can define auxiliary response functions that allow one to extract the desired real-frequency susceptibilities from maximum entropy methods in the most general multiorbital cases with no particular symmetry. As a benchmark case, we study the longitudinal thermoelectric response and corresponding Onsager coefficient in the single-band two-dimensional Hubbard model treated with dynamical mean-field theory and continuous-time quantum Monte Carlo. We thereby extend the maximum entropy analytic continuation with auxiliary functions (MaxEntAux method), developed for the study of the superconducting pairing dynamics of correlated materials, to transport coefficients.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Eun-Chan Kim
2016-06-01
The International Convention for the Control and Management of Ships' Ballast Water and Sediments was adopted by the IMO (International Maritime Organization) on 13 February 2004. Fifty-seven ballast water management systems have been granted basic approval of their active substances by the IMO, of which thirty-seven have been granted final approval. This paper studies the maximum allowable dosage of active substances produced by ballast water management systems using electrolysis, an IMO-approved treatment method. The allowable dosage of active substances for electrolysis systems is expressed as TRO (Total Residual Oxidant). The maximum allowable TRO dosage is a very important factor for electrolysis-based ballast water management systems, because the system is controlled by the TRO value, and IMO approvals are given on the basis of the maximum allowable TRO dosage for the treatment and discharge of ballast water. However, the approved maximum allowable TRO dosages differ widely among management systems, ranging from 1 to 15 ppm. These discrepancies may depend on whether a filter is used, differences in the specifications of the electrolysis module, the kind of the tested organisms, the number of individual organisms, differences in water quality, etc. Ship owners are responsible for satisfying the performance standard of the IMO convention in the ports of each country and therefore need to carefully review whether a ballast water management system can satisfy the performance standard of the IMO convention.
Effects of loading on maximum vertical jumps: Selective effects of weight and inertia.
Leontijevic, Bojan; Pazin, Nemanja; Bozic, Predrag R; Kukolj, Milos; Ugarkovic, Dusan; Jaric, Slobodan
2012-04-01
A novel loading method was applied to explore selective effects of externally added weight (W), weight and inertia (W+I), and inertia (I) on maximum counter-movement jumps (CMJ) performed with arm swing. Externally applied extended rubber bands and/or loaded vest added W, W+I, and I corresponding to 10-40% of subjects' body mass. As expected, an increase in magnitude of all types of load was associated with an increase in ground reaction forces (GRF), as well as with a decrease in both the jumping performance and power output. However, of more importance could be that discernible differences among the effects of W, W+I, and I were recorded despite a relatively narrow loading range. In particular, an increase in W was associated with the minimal changes in movement kinematic pattern and smallest reduction of jumping performance, while also allowing for the highest power output. Conversely, W+I was associated with the highest ground reaction forces. Finally, the lowest maxima of GRF and power were associated with I. Although further research is apparently needed, the obtained finding could be of potential importance not only for understanding fundamental properties of the neuromuscular system, but also for optimization of loading in standard athletic training and rehabilitation procedures.
Approximating maximum weight cycle covers in directed graphs with weights zero and one
Bläser, Markus; Manthey, Bodo
2005-01-01
A cycle cover of a graph is a spanning subgraph each node of which is part of exactly one simple cycle. A $k$-cycle cover is a cycle cover where each cycle has length at least $k$. Given a complete directed graph with edge weights zero and one, Max-$k$-DCC(0, 1) is the problem of finding a $k$-cycle cover of maximum weight.
Dobranich, D.
1987-08-01
Calculations were performed to determine the mass of a space-based platform as a function of the maximum-allowed operating temperature of the electrical equipment within the platform payload. Two computer programs were used in conjunction to perform these calculations. The first program was used to determine the mass of the platform reactor, shield, and power conversion system. The second program was used to determine the mass of the main and secondary radiators of the platform. The main radiator removes the waste heat associated with the power conversion system and the secondary radiator removes the waste heat associated with the platform payload. These calculations were performed for both Brayton and Rankine cycle platforms with two different types of payload cooling systems: a pumped-loop system (a heat exchanger with a liquid coolant) and a refrigerator system. The results indicate that increases in the maximum-allowed payload temperature offer significant platform mass savings for both the Brayton and Rankine cycle platforms with either the pumped-loop or refrigerator payload cooling systems. Therefore, with respect to platform mass, the development of high temperature electrical equipment would be advantageous. 3 refs., 24 figs., 7 tabs.
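The mass savings from a higher allowed payload temperature follow from radiator physics: radiated power scales as T⁴ (Stefan-Boltzmann), so the secondary radiator area for a fixed waste heat load falls steeply with temperature. A sketch with assumed emissivity and heat load (not the report's values), neglecting the environmental sink temperature:

```python
# Hedged sketch of the radiator-sizing argument. EPSILON and Q are
# illustrative assumptions, not parameters from the report.
SIGMA = 5.670e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
EPSILON = 0.85        # assumed radiator emissivity
Q = 25_000.0          # assumed payload waste heat, W

def radiator_area(T_kelvin):
    """Ideal radiator area: A = Q / (epsilon * sigma * T^4)."""
    return Q / (EPSILON * SIGMA * T_kelvin ** 4)

for T in (300, 350, 400, 450):
    print(f"{T} K -> {radiator_area(T):6.1f} m^2")
```

Because area scales as T⁻⁴, raising the allowed payload temperature from 300 K to 450 K cuts the ideal radiator area by roughly a factor of five, consistent with the report's conclusion that high temperature electrical equipment yields significant platform mass savings.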
Computational complexity of some maximum average weight problems with precedence constraints
Faigle, Ulrich; Kern, Walter
1994-01-01
Maximum average weight ideal problems in ordered sets arise from modeling variants of the investment problem and, in particular, learning problems in the context of concepts with tree-structured attributes in artificial intelligence. Similarly, trying to construct tests with high reliability leads to ...
Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong
2012-01-01
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Uniform estimate for maximum of randomly weighted sums with applications to insurance risk theory
WANG Dingcheng; SU Chun; ZENG Yong
2005-01-01
This paper obtains the uniform estimate for maximum of sums of independent and heavy-tailed random variables with nonnegative random weights, which can be arbitrarily dependent of each other. Then the applications to ruin probabilities in a discrete time risk model with dependent stochastic returns are considered.
Effects of loading on maximum vertical jumps: selective effects of weight and inertia
Leontijevic, Bojan; Pazin, Nemanja; Bozic, Predrag R.; Kukolj, Milos; Ugarkovic, Dusan; Jaric, Slobodan
2011-01-01
A novel loading method was applied to explore the selective effects of externally added weight (W), weight and inertia (W+I), and inertia (I) on maximum counter-movement jumps (CMJ) performed with arm swing. Externally applied extended rubber bands and/or a loaded vest added W, W+I, and I corresponding to 10–40% of subjects' body mass. As expected, an increase in the magnitude of all types of load was associated with an increase in ground reaction forces (GRF), as well as with a decrease in both the jump…
Marco Bee
2012-01-01
This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal-Generalized Pareto case, we work out…
谢松光; 杨红生; 周毅; 张福绥
2004-01-01
The maximum rate of food consumption (Cmax) was determined for juvenile Sebastodes fuscescens (Houttuyn) at water temperatures of 10, 15, 20 and 25°C. The relationship of Cmax to body weight (W) at each temperature was described by a power equation: ln Cmax = a + b ln W. Covariance analysis revealed a significant interaction of temperature and body weight. The relationship of adjusted Cmax to water temperature (T) was described by a quadratic equation: Cmax = -0.369 + 0.456T - 0.0117T². The optimal feeding temperature calculated from this equation was 19.5°C. The coefficients of the multiple regression relating Cmax to body weight (W) and water temperature (T) are given in Table 2.
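The optimal feeding temperature quoted in the abstract is simply the vertex of the fitted quadratic, which can be checked directly:

```python
# Coefficients of the fitted quadratic from the abstract:
# Cmax = -0.369 + 0.456*T - 0.0117*T**2
a0, a1, a2 = -0.369, 0.456, -0.0117

# Setting dCmax/dT = a1 + 2*a2*T = 0 gives the stationary point.
T_opt = -a1 / (2 * a2)
Cmax_opt = a0 + a1 * T_opt + a2 * T_opt ** 2

print(round(T_opt, 1))  # ~19.5 deg C, matching the reported optimum
```

Since the T² coefficient is negative, the parabola opens downward and the stationary point is indeed a maximum.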
WMAXC: a weighted maximum clique method for identifying condition-specific sub-network.
Amgalan, Bayarbaatar; Lee, Hyunju
2014-01-01
Sub-networks can expose complex patterns in an entire bio-molecular network by extracting interactions that depend on temporal or condition-specific contexts. When genes interact with each other during cellular processes, they may form differential co-expression patterns with other genes across different cell states. The identification of condition-specific sub-networks is of great importance in investigating how a living cell adapts to environmental changes. In this work, we propose the weighted MAXimum clique (WMAXC) method to identify a condition-specific sub-network. WMAXC first proposes scoring functions that jointly measure condition-specific changes to both individual genes and gene-gene co-expressions. It then employs a weaker formula of a general maximum clique problem and relates the maximum scored clique of a weighted graph to the optimization of a quadratic objective function under sparsity constraints. We combine a continuous genetic algorithm and a projection procedure to obtain a single optimal sub-network that maximizes the objective function (scoring function) over the standard simplex (sparsity constraints). We applied the WMAXC method to both simulated data and real data sets of ovarian and prostate cancer. Compared with previous methods, WMAXC selected a large fraction of cancer-related genes, which were enriched in cancer-related pathways. The results demonstrated that our method efficiently captured a subset of genes relevant under the investigated condition.
Solving the Maximum Weighted Clique Problem Based on Parallel Biological Computing Model
Zhaocai Wang
2015-01-01
The maximum weighted clique (MWC) problem, a typical NP-complete problem, is difficult to solve with conventional electronic-computer algorithms. The aim of the problem is to seek a vertex clique with maximal weight sum in a given undirected graph. It is an extremely important problem in the field of optimal engineering scheme and control, with numerous practical applications. From a practical point of view, we give a parallel biological algorithm to solve the MWC problem. For a maximum weighted clique problem with m edges and n vertices, we use fixed-length DNA strands to represent the different vertices and edges, fully conduct the biochemical reactions, and find the solution to the MWC problem within a certain length range with O(n²) time complexity, compared with the exponential time required by previous computer algorithms. We expand the applied scope of parallel biological computation and reduce the computational complexity of practical engineering problems. Meanwhile, we provide a meaningful reference for solving other complex problems.
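For contrast with the DNA-based approach, the MWC objective itself is easy to state as a brute-force search (exponential time, practical only for tiny graphs). The sketch below is a generic illustration, not the paper's algorithm; the toy graph in the test is hypothetical.

```python
from itertools import combinations

def max_weight_clique(weights, edges):
    """Exhaustive maximum-weight clique search (exponential time).

    weights: dict vertex -> weight; edges: set of frozensets {u, v}.
    Returns the heaviest clique and its total weight.
    """
    vertices = list(weights)
    best, best_w = frozenset(), 0.0
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            # A subset is a clique iff every vertex pair is an edge.
            if all(frozenset((u, v)) in edges
                   for u, v in combinations(subset, 2)):
                w = sum(weights[v] for v in subset)
                if w > best_w:
                    best, best_w = frozenset(subset), w
    return best, best_w
```

The exponential blow-up of this enumeration is exactly what motivates massively parallel approaches such as the DNA computation above.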
Postnatal weight loss in term infants: what is "normal" and do growth charts allow for it?
Wright, C.; Parkinson, K.
2004-01-01
Background: Although it is a well known phenomenon, limited normative data on neonatal weight loss and subsequent gain are available, making it hard to assess individual children with prolonged weight loss.
Feedback models allowing estimation of thresholds for self-promoting body weight gain
Christiansen, Edmund; Swann, Andrew; Sørensen, Thorkild I. A.
2008-01-01
…The difference between the two situations is typically an energy imbalance of about 1% over a long period of time. THEORY: Weight gain increases basal metabolic rate. Weight gain is often associated with a decrease in physical activity, although not to such an extent that it prevents an increase in total energy expenditure and energy intake. Depending on the precise balance between these effects of weight gain, they may make the body weight unstable and tend to further promote weight gain. With the aim of identifying the thresholds beyond which such self-promoting weight gain may take place, we develop a simple… cases do they take values that make weight gain self-promoting. RESULTS: We determine the quantitative conditions under which body weight gain becomes self-promoting. We find that these conditions can easily be met, and that they are so small that they are not observable with currently available…
Effect of background music on maximum acceptable weight of manual lifting tasks.
Yu, Ruifeng
2014-01-01
This study used the psychophysical approach to investigate the impact of tempo and volume of background music on the maximum acceptable weight of lift (MAWL), heart rate (HR) and rating of perceived exertion (RPE) of participants engaged in lifting. Ten male college students participated in this study. They lifted a box from the floor, walked 1-2 steps as required, placed the box on a table and walked back twice per minute. The results showed that the tempo of music had a significant effect on both MAWL and HR. Fast tempo background music resulted in higher MAWL and HR values than those resulting from slow tempo music. The effects of both the tempo and volume on the RPE were insignificant. The results of this study suggest fast tempo background music may be used in manual materials handling tasks to increase performance without increasing perceived exertion because of its ergogenic effect on human psychology and physiology.
Psychophysically determining the maximum acceptable weight of lift for polypropylene laminated bags.
Chen, Yi-Lang; Ho, Ting-Kuang
2016-12-07
The objective of this study was to psychophysically determine the maximum acceptable weight of lift (MAWL) for polypropylene (PP) laminated bags. Twelve men were asked to decide their MAWLs under various task combinations involving 3 lifting ranges, 3 lifting frequencies, and 2 hand conditions. The results revealed that the MAWL was significantly affected by the frequency and range variables (all p…) lifts. The results of multiple stepwise regression revealed that certain anthropometric data (e.g., chest circumference, wrist circumference, and acromial height) accounted for 56.2% to 83.4% of the variance in the determined MAWLs. These data can be obtained simply and quickly, and are considered superior predictors for MAWL determination when handling PP laminated bags.
Maximum Entropy Relief Feature Weighting
张翔; 邓赵红; 王士同; 蔡及时
2011-01-01
A recent advance in Relief feature weighting techniques is that Relief can be approximately expressed as a margin-maximization problem, so its distinctive properties can be investigated with the help of optimization theory. Although Relief feature weighting has been widely used, it lacks a mechanism for dealing with outlier data, and how to enhance its robustness and adjustability in noisy environments remains unclear. To enhance Relief's adjustability and robustness, more robust and adaptive Relief feature weighting algorithms are investigated by integrating the maximum entropy technique into Relief feature weighting. First, a new margin-based objective function integrating maximum entropy is proposed within the optimization framework, where two maximum entropy terms control the feature weights and the sample force coefficients, respectively. By applying optimization theory, several useful theoretical results are derived from the proposed objective function, and on this basis a set of robust Relief feature weighting algorithms is developed for two-class data, multi-class data and online data. Extensive experiments on UCI benchmark data sets and gene data sets show that the proposed new Relief feature weighting algorithms exhibit better adaptability and robustness to noise and outliers.
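For reference, the plain two-class Relief weighting scheme that the maximum-entropy variant builds on can be sketched as follows. This is a minimal textbook-style version, not the paper's algorithm; the function name and the toy data in the test are illustrative.

```python
def relief_weights(X, y):
    """Basic two-class Relief feature weighting.

    For every instance, find its nearest hit (same class) and nearest
    miss (other class), and reward features that separate the miss
    more than the hit.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    for i in range(n):
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: dist(X[i], X[j]))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: dist(X[i], X[j]))
        for f in range(d):
            w[f] += abs(X[i][f] - X[miss][f]) - abs(X[i][f] - X[hit][f])
    return w
```

The margin interpretation discussed above corresponds to the hit/miss difference accumulated here; the paper replaces this hard accumulation with a maximum-entropy-regularized objective.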
Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.
Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya
2005-02-01
"Protein Side-chain Packing" has ever-increasing applications in the field of bioinformatics, from the early methods of homology modeling to protein design and protein docking. However, this problem is known to be computationally NP-hard. In this regard, we have developed a novel approach to solving this problem using the notion of a maximum edge-weight clique. Our approach is based on an efficient reduction of the protein side-chain packing problem to a graph, and then solving the reduced graph to find the maximum clique by applying an efficient clique-finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms, in contrast to the various existing algorithms based on heuristic approaches, our algorithm guarantees finding an optimal solution. We have tested this approach to predict the side-chain conformations of a set of proteins and have compared the results with other existing methods. We have found that our results are comparable to or better than the results produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in terms of the size of the proteins and in terms of the efficiency and accuracy of prediction.
Monaco, James Peter; Madabhushi, Anant
2011-07-01
The ability of classification systems to adjust their performance (sensitivity/specificity) is essential for tasks in which certain errors are more significant than others. For example, mislabeling cancerous lesions as benign is typically more detrimental than mislabeling benign lesions as cancerous. Unfortunately, methods for modifying the performance of Markov random field (MRF) based classifiers are noticeably absent from the literature, and thus most such systems restrict their performance to a single, static operating point (a paired sensitivity/specificity). To address this deficiency we present weighted maximum posterior marginals (WMPM) estimation, an extension of maximum posterior marginals (MPM) estimation. Whereas the MPM cost function penalizes each error equally, the WMPM cost function allows misclassifications associated with certain classes to be weighted more heavily than others. This creates a preference for specific classes, and consequently a means for adjusting classifier performance. Realizing WMPM estimation (like MPM estimation) requires estimates of the posterior marginal distributions. The most prevalent means for estimating these, proposed by Marroquin, utilizes a Markov chain Monte Carlo (MCMC) method. Though Marroquin's method (M-MCMC) yields estimates that are sufficiently accurate for MPM estimation, they are inadequate for WMPM. To more accurately estimate the posterior marginals we present an equally simple, but more effective extension of the MCMC method (E-MCMC). Assuming an identical number of iterations, E-MCMC as compared to M-MCMC yields estimates with higher fidelity, thereby 1) allowing a far greater number and diversity of operating points and 2) improving overall classifier performance. To illustrate the utility of WMPM and compare the efficacies of M-MCMC and E-MCMC, we integrate them into our MRF-based classification system for detecting cancerous glands in (whole-mount or quarter) histological sections of the prostate.
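At decision time, the operating-point adjustment that WMPM introduces amounts to scaling each estimated posterior marginal by a class weight before taking the argmax. A minimal sketch, with a hypothetical function name and example values:

```python
def weighted_mpm_label(marginals, class_weights):
    """Pick the class maximizing weight * posterior marginal.

    marginals: dict class -> estimated posterior marginal P(class|data)
    class_weights: dict class -> misclassification weight. Raising a
    class's weight biases decisions toward it (e.g., trading
    specificity for sensitivity). Uniform weights recover plain MPM.
    """
    return max(marginals, key=lambda c: class_weights[c] * marginals[c])
```

Sweeping a class weight across a range of values traces out the family of operating points that a single static classifier cannot provide.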
Esfandiar, Habib; KoraYem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)
2015-09-15
In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To avoid shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton method. The system equations of motion are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a specified path, considering constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, considering two-link flexible and fixed-base manipulators on linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.
Daniel, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Rudisill, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-07-17
As part of the Spent Nuclear Fuel (SNF) processing campaign, H-Canyon is planning to begin dissolving High Flux Isotope Reactor (HFIR) fuel in late FY17 or early FY18. Each HFIR fuel core contains inner and outer fuel elements which were fabricated from uranium oxide (U3O8) dispersed in a continuous Al phase using traditional powder metallurgy techniques. Fuels fabricated in this manner, like other SNFs processed in H-Canyon, dissolve by the same general mechanisms, with similar gas generation rates and the production of H2. The HFIR fuel cores will be dissolved using a flowsheet developed by the Savannah River National Laboratory (SRNL) in either the 6.4D or 6.1D dissolver using a unique insert. Multiple cores will be charged to the same dissolver solution, maximizing the concentration of dissolved Al. The recovered U will be down-blended into low-enriched U for subsequent use as commercial reactor fuel. During the development of the HFIR fuel dissolution flowsheet, the cycle time for the initial core was estimated at 28 to 40 h. Once the cycle is complete, H-Canyon personnel will open the dissolver and probe the HFIR insert wells to determine the height of any fuel fragments which did not dissolve. Before the next core can be charged to the dissolver, an analysis of the potential for H2 gas generation must show that the combined surface area of the fuel fragments and the subsequent core will not generate H2 concentrations in the dissolver offgas which exceed 60% of the lower flammability limit (LFL) of H2 at 200 °C. The objective of this study is to identify the maximum fuel fragment height as a function of the Al concentration in the dissolving solution, which will provide criteria for charging successive HFIR cores to an H-Canyon dissolver.
AlPOs Synthetic Factor Analysis Based on Maximum Weight and Minimum Redundancy Feature Selection
Yinghua Lv
2013-11-01
The relationship between synthetic factors and the resulting structures is critical for the rational synthesis of zeolites and related microporous materials. In this paper, we develop a new feature selection method for synthetic factor analysis of (6,12)-ring-containing microporous aluminophosphates (AlPOs). The proposed method is based on a maximum-weight and minimum-redundancy criterion: we select the feature subset in which the features are most relevant to the synthetic structure while the redundancy among the selected features is minimal. Based on the database of AlPO synthesis, we use (6,12)-ring-containing AlPOs as the target class and incorporate 21 synthetic factors, including gel composition, solvent and organic template, to predict the formation of (6,12)-ring-containing AlPOs. From these 21 features, 12 are selected as the optimized features for distinguishing (6,12)-ring-containing AlPOs from AlPOs without such rings. The prediction model achieves a classification accuracy of 91.12% using the optimal feature subset. Comprehensive experiments demonstrate the effectiveness of the proposed algorithm, and the synthetic factors selected by the proposed method are analyzed in depth.
Zhu, Ke; 10.1214/11-AOS895
2012-01-01
This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimators (QMELE) for ARMA-GARCH models. Under only a fractional moment condition, the strong consistency and asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.
Arkharov, A. M.; Dontsova, E. S.; Lavrov, N. A.; Romanovskii, V. R.
2014-04-01
Maximum allowable (ultimate) currents stably passing through an YBa2Cu3O7 superconducting current-carrying element are determined as a function of a silver (or copper) coating thickness, external magnetic field induction, and cooling conditions. It is found that if a magnetic system based on yttrium ceramics is cooled by a cryogenic coolant, currents causing instabilities (instability onset currents) are almost independent of the coating thickness. If, however, liquid helium is used as a cooling agent, the ultimate current monotonically grows with the thickness of the stabilizing copper coating. It is shown that depending on cooling conditions, the stable values of the current and electric field strength preceding the occurrence of instability may be both higher and lower than the a priori chosen critical parameters of the superconductor. These features should be taken into account in selecting the stable value of the operating current of YBa2Cu3O7 superconducting windings.
Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N
2014-06-01
Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipox(max)) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg.m(-2)) were separated into two groups with either a low or a high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipox(max) and maximum lipid oxidation rate (MLOR) were determined during a submaximum incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipox(max) and MLOR were significantly lower in H-A/LB vs L-A/LB women (43 ± 3% VO(2max) vs 54 ± 4% VO(2max), and 4.8 ± 0.6 mg min(-1) kg FFM(-1) vs 8.4 ± 0.9 mg min(-1) kg FFM(-1), respectively; p…). In normal-weight women, a predominantly abdominal fat mass distribution, compared with a predominantly peripheral one, is associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by the lower Lipox(max) and MLOR. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Ali, Md. Ayub; Ohtsuki, Fumio
2000-05-01
An attempt was made to estimate the maximum increment age (MIA) in height and weight of Japanese boys and girls during the birth years 1893-1990 through the published data of the Ministry of Education, Science, Sports and Culture in Japan. In cases where the same maximum annual increment occurred in two or three successive age classes in a birth year cohort, a new formula (see Eq. 2) was developed to estimate the MIA. The existing formula for estimating MIA was modified to remove the mathematical deficiency (Eq. 1). Estimated MIA shows an overall declining trend, except in birth year cohorts 1934-1951. The effect of World War II on MIA was investigated by a dummy variable regression model. On average, during the birth years 1934-1951, MIA in height decelerated by 1.35 years in boys and 0.54 year in girls, while MIA in weight decelerated by 0.95 year in boys and 0.78 year in girls. Am. J. Hum. Biol. 12:363-370, 2000. Copyright 2000 Wiley-Liss, Inc.
Wu, Swei-Pi; Ho, Cheng-Pin; Yen, Chin-Li
2011-01-01
A wok with a straight handle is one of the most common cooking utensils in the Asian kitchen. This common cooking instrument has seldom been examined by ergonomists. This research used a two-factor randomized complete block design to investigate the effects of wok size (with three diameters - 36 cm, 39 cm and 42 cm) and handle angle (25°, 10°, -5°, -20°, and -35°) on the task of flipping. The measurement criteria included the maximum acceptable weight of wok flipping (MAWF), the subjective rating and the subjective ranking. Twelve experienced males volunteered to take part in this study. The results showed that both the wok size and handle angle had a significant effect on the MAWF, the subjective rating and the subjective ranking. Additionally, there is a size-weight illusion associated with flipping tasks. In general, a small wok (36 cm diameter) with an ergonomically bent handle (-20° ± 15°) is the optimal design, for male cooks, for the purposes of flipping.
Kinosada, Yasutomi; Okuda, Yasuyuki (Mie Univ., Tsu (Japan). School of Medicine); Ono, Mototsugu (and others)
1993-02-01
We developed a new noninvasive technique to visualize the anatomical structure of the nerve fiber system in vivo, and named this technique magnetic resonance (MR) tractography and the acquired image an MR tractogram. MR tractography has two steps. One is to obtain diffusion-weighted images sensitized along axes appropriate for depicting the intended nerve fibers with anisotropic water diffusion MR imaging. The other is to extract the anatomical structure of the nerve fiber system from a series of diffusion-weighted images by the maximum intensity projection method. To examine the clinical usefulness of the proposed technique, many contiguous, thin (3 mm) coronal two-dimensional sections of the brain were acquired sequentially in normal volunteers and selected patients with paralyses, on a 1.5 Tesla MR system (Signa, GE) with an ECG-gated Stejskal-Tanner pulse sequence. The structure of the nerve fiber system of normal volunteers was almost the same as the anatomy. The tractograms of patients with paralyses clearly showed the degeneration of nerve fibers and were correlated with clinical symptoms. MR tractography showed great promise for the study of neuroanatomy and neuroradiology. (author).
Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser
2014-06-07
Human haplotypes include essential information about SNPs, which in turn provide valuable information for studies such as finding relationships between diseases and their potential genetic causes, e.g., Genome Wide Association Studies. Due to the expense of directly determining haplotypes and recent progress in high-throughput sequencing, there has been increasing motivation for haplotype assembly, the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from the existing maximum-likelihood models. Then, we show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the information of MAF is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the information of MAF is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results.
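The MEC objective mentioned above can be stated compactly: assign each fragment to the closer haplotype and count the total corrections needed. A minimal sketch (function name and toy fragments are illustrative; the paper's weighted extension additionally scales sites by MAF):

```python
def mec_score(fragments, h1, h2):
    """Minimum Error Correction cost of a haplotype pair.

    fragments: strings over '0', '1', '-' ('-' = SNP not covered).
    Each fragment is charged the cost of whichever haplotype it
    matches more closely; MEC is the total over all fragments.
    """
    def mismatches(frag, hap):
        return sum(1 for f, h in zip(frag, hap) if f != '-' and f != h)

    return sum(min(mismatches(f, h1), mismatches(f, h2))
               for f in fragments)
```

Haplotype assembly then searches for the complementary pair (h1, h2) minimizing this score over all fragments, which is what makes the problem hard.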
Werner, Stefanie [Umweltbundesamt, Dessau-Rosslau (Germany). Fachgebiet II 2.3
2011-05-15
When offshore wind farms are constructed, every single pile is hammered into the sediment by a hydraulic hammer. Noise levels at Horns Reef wind farm were in the range of 235 dB. The noise may cause damage to the auditory system of marine mammals. The Federal Environmental Office therefore recommends the definition of maximum permissible noise levels. Further, care should be taken that no marine mammals are found in the immediate vicinity of the construction site. (AKB)
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
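For the smallest possible concrete case, the moment-matching dynamics described above can be run exactly, without Gibbs sampling, on a two-spin Ising model. This sketch uses plain steepest ascent (the unrectified dynamics) and hypothetical names; it is an illustration of the learning rule, not the paper's rectified algorithm.

```python
import itertools, math

def fit_two_spin_ising(target_moments, lr=0.1, steps=3000):
    """Fit h1, h2, J of a two-spin Ising model by gradient ascent.

    The log-likelihood gradient is the gap between the data moments
    <s1>, <s2>, <s1*s2> and the model moments. Spins are +/-1 and the
    partition sum is enumerated exactly at this toy size.
    """
    m1, m2, m12 = target_moments
    h1 = h2 = J = 0.0
    states = list(itertools.product((-1, 1), repeat=2))
    for _ in range(steps):
        ws = [math.exp(h1 * s1 + h2 * s2 + J * s1 * s2)
              for s1, s2 in states]
        Z = sum(ws)
        e1 = sum(w * s1 for w, (s1, s2) in zip(ws, states)) / Z
        e2 = sum(w * s2 for w, (s1, s2) in zip(ws, states)) / Z
        e12 = sum(w * s1 * s2 for w, (s1, s2) in zip(ws, states)) / Z
        h1 += lr * (m1 - e1)   # moment-matching gradient steps
        h2 += lr * (m2 - e2)
        J += lr * (m12 - e12)
    return h1, h2, J
```

With zero fields, the fitted coupling should satisfy tanh(J) = ⟨s1 s2⟩, i.e. J = atanh of the target correlation; the slow, curvature-limited convergence of this steepest ascent is precisely what the paper's rectification is designed to fix at scale.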
Tran Duy, A.; Schrama, J.W.; Dam, van A.A.; Verreth, J.A.J.
2008-01-01
Feed intake and satiation in fish are regulated by a number of factors, of which dissolved oxygen concentration (DO) is important. Since fish take up oxygen through the limited gill surface area, all processes that need energy, including food processing, depend on their maximum oxygen uptake capacity…
Konstandinos G. Raptis
2012-01-01
The purpose of this study is to consider the loading and contact problems encountered in rotating machine elements, especially toothed gears. The latter are among the most commonly used mechanical components for rotary motion and power transmission, a fact that motivates the need for improved reliability and enhanced service life, which in turn require precise and clear knowledge of the stress field at the gear tooth. This study investigates the maximum allowable stresses occurring during spur gear tooth meshing, computed using Niemann's formulas at the Highest Point of Single Tooth Contact (HPSTC). Gear material, module, power rating and number of teeth are considered as variable parameters. Furthermore, the maximum allowable stresses for maximum power transmission conditions are considered, keeping the other parameters constant. After applying Niemann's formulas to both loading cases, the derived results are compared with the respective estimates of the Finite Element Method (FEM) using ANSYS software. Comparison of the results derived from Niemann's formulas and FEM shows that deviations between the two methods remain low for both loading cases, independently of the applied power (either random or maximum) and the respective tangential load.
Sreekar Kumar Reddy.R
2014-09-01
Background: Patellofemoral pain syndrome is a very common disorder; 90% of the general population has some degree of pathologic change of the patellofemoral joint. Knowledge of the causes and prevention of patellofemoral pain syndrome is essential. The purpose of this study was therefore to determine whether different foot positions alter Vastus Medialis Oblique (VMO) and Vastus Lateralis (VL) activity in ways that lead to dysfunction of the knee joint. Method: Thirty subjects were included, and maximum voluntary isometric contractions were recorded with electromyography in neutral, pronated and supinated foot positions. Results: EMG amplitudes (microvolts) of VL and VMO at the three weight-bearing foot positions during maximum voluntary contraction were analyzed using one-way analysis of variance. Mean amplitudes in the pronated position showed a significant difference compared with neutral and supination. Conclusion: VMO and VL activity differed significantly in the pronated weight-bearing foot position compared with the neutral and supinated positions. Performing maximum voluntary isometric contractions of the VMO and VL with a pronated foot elicited significantly higher EMG activity than neutral or supinated weight-bearing positions. The results also suggest that patellofemoral pain caused by a pronated foot can be treated using soft foot orthoses.
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
LIN; Kuang-Jang; LIN; Chii-Ruey
2010-01-01
The photovoltaic array has an optimal operating point at which it delivers maximum power. However, this optimal operating point shifts with the strength and angle of solar radiation and with changes in environment and load. Because these conditions change constantly, it is difficult to locate the optimal operating point with a fixed mathematical model. This study therefore focuses on applying Fuzzy Logic Control theory and the Three-point Weight Comparison Method to locate the optimal operating point of the solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. Fuzzy Logic Control, on the other hand, can address problems that cannot be handled effectively by fixed calculation rules, such as concepts, contemplation, deductive reasoning and identification. This paper uses both methods in successive simulations. The simulation results show that the Three-point Comparison Method is more effective in environments with frequent changes of solar radiation, whereas Fuzzy Logic Control has better tracking efficiency in environments with violent changes of solar radiation.
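The three-point comparison step can be sketched in a few lines of Python. This is a minimal illustration, not the paper's simulation: the P-V curve and the step size below are hypothetical stand-ins for a real array characteristic.

```python
def three_point_mppt(power, v, dv=0.5):
    """One step of the three-point weight comparison method (sketch).

    Measures power at v - dv, v and v + dv on the P-V curve and moves
    the operating voltage toward the side with higher power; at the
    maximum power point both neighbours are lower, so v is held.
    """
    p_left, p_mid, p_right = power(v - dv), power(v), power(v + dv)
    if p_right > p_mid:   # curve still rising: step up
        return v + dv
    if p_left > p_mid:    # curve falling: step down
        return v - dv
    return v              # both neighbours lower: hold at the MPP

# Hypothetical P-V curve with its maximum near 17.5 V (toy stand-in
# for a real photovoltaic array characteristic).
curve = lambda v: -(v - 17.5) ** 2 + 100.0

v = 12.0
for _ in range(30):
    v = three_point_mppt(curve, v)
print(v)  # settles at 17.5
```

Repeated application walks the operating voltage up the power curve and then holds it once both neighbouring test points yield less power.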
Weighted Centroid Localization Algorithm Based on Maximum Likelihood Estimation
卢先领; 夏文瑞
2016-01-01
In solving the problem of localizing nodes in a wireless sensor network,we propose a weighted centroid localization algorithm based on maximum likelihood estimation,with the specific goal of solving the problems of big received signal strength indication (RSSI)ranging error and low accuracy of the centroid localization algorithm.Firstly,the maximum likelihood estimation between the estimated distance and the actual distance is calculated as weights.Then,a parameter k is introduced to optimize the weights between the anchor nodes and the unknown nodes in the weight model.Finally,the locations of the unknown nodes are calculated and modified by using the proposed algorithm.The simulation results show that the weighted centroid algorithm based on the maximum likelihood estimation has the features of high localization accuracy and low cost,and has better performance compared with the inverse distance-based algorithm and the inverse RSSI-based algo-rithm.Hence,the proposed algorithm is more suitable for the indoor localization of large areas.%为解决无线传感器网络中节点自身定位问题，针对接收信号强度指示（received signal strength indication，RSSI）测距误差大和质心定位算法精度低的问题，提出一种基于最大似然估计的加权质心定位算法。首先通过计算将估计距离与实际距离之间的最大似然估计值作为权值，然后在权值模型中，引进一个参数k优化未知节点周围锚节点分布，最后计算出未知节点的位置并加以修正。仿真结果表明，基于最大似然估计的加权质心算法具有定位精度高和成本低的特点，优于基于距离倒数的质心加权和基于RSSI倒数的质心加权算法，适用于大面积的室内定位。
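The weighted-centroid step itself is compact. The sketch below is a minimal illustration: the paper derives its weights from a maximum likelihood estimate of the distance error, a derivation not reproduced in the abstract, so this code substitutes the common 1/d**k surrogate weight, with k echoing the paper's tuning parameter.

```python
import numpy as np

def weighted_centroid(anchors, dists, k=1.0):
    """Weighted centroid position estimate (sketch).

    anchors : (n, 2) known anchor coordinates
    dists   : (n,) distances estimated from RSSI
    k       : weight exponent (a tuning parameter, echoing the paper's k)

    Assumption: a 1/d**k weight stands in for the paper's
    maximum-likelihood-derived weight.
    """
    w = 1.0 / np.asarray(dists, dtype=float) ** k
    w /= w.sum()                       # normalize weights to sum to 1
    return w @ np.asarray(anchors, dtype=float)

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
# Ideal (noise-free) distances from each anchor to the unknown node.
dists = [np.hypot(*(np.asarray(a) - true_pos)) for a in anchors]
est = weighted_centroid(anchors, dists, k=2.0)
print(est)  # biased toward the grid centre, but near (3, 4)
```

Even with exact distances the plain weighted centroid is pulled toward the anchor-array centre, which is why the paper adds a correction step after this estimate.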
Flohr, J R; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D; Dritz, S S
2016-10-01
A total of 1,092 finishing pigs (initially 36.3 kg) were used in a 117-d study to evaluate the impact of initial floor space allowance and removal strategy on the growth of pigs up to 140 kg BW. There were 4 experimental treatments with 14 pens per treatment. The first treatment provided 0.91 m per pig (15 pigs/pen). The other 3 treatments initially provided 0.65 m per pig (21 pigs/pen) with 3 different removal strategies. The second treatment (2:2:2) removed the 2 heaviest pigs from pens on d 64, 76, and 95 when floor space allowance was predicted to be limiting. Treatment 3 (2:4) removed the 2 heaviest pigs on d 76 and the 4 heaviest pigs on d 105. Treatment 4 (6) removed the heaviest 6 pigs on d 105. All pigs remaining in pens after removals were fed to d 117. Overall (d 0 to 117), pigs initially provided 0.91 m of floor space had increased ( strategy, but ADG was not different compared with pigs on the 2:2:2 removal strategy. Total BW gain per pen was greater ( strategies; however, feed usage per pig was greater ( strategies. Feed usage, on a pig or pen basis, was less ( strategy compared to pigs on the 2:4 or the 6 removal strategy. Income over feed and facility cost (IOFFC) was less ( strategies. Also, IOFFC was less ( strategies. In conclusion, increasing the floor space allowance or the time points at which pigs are removed from the pen improved the growth of pigs remaining in the pen; however, IOFFC may be reduced because fewer pigs are marketed from each pen (pigs stocked at 0.91 m throughout the study) or from reducing total weight produced (2:2:2 removal strategy).
Mccleary, Barry V
2014-01-01
AOAC Official Methods 2009.01 and 2011.25 have been modified to allow removal of resistant maltodextrins produced on hydrolysis of various starches by the combination of pancreatic alpha-amylase and amyloglucosidase (AMG) used in these assay procedures. The major resistant maltodextrin, 6(3),6(5)-di-alpha-D-glucosyl maltopentaose, is highly resistant to hydrolysis by microbial alpha-glucosidases, isoamylase, pullulanase, pancreatic, bacterial and fungal alpha-amylase, and AMG. However, this oligosaccharide is hydrolyzed by the mucosal alpha-glucosidase complex of the pig small intestine (which is similar to that of the human small intestine), and thus must be removed in the analytical procedure. Hydrolysis of these oligosaccharides has been achieved by incubation with a high concentration of a purified AMG at 60 degrees C. This incubation results in no hydrolysis or loss of other resistant oligosaccharides such as FOS, GOS, XOS, resistant maltodextrins (e.g., Fibersol 2) or polydextrose. The effect of this additional incubation with AMG on the measured level of low molecular weight soluble dietary fiber (SDFS) and of total dietary fiber in a broad range of samples is reported. Results from this study demonstrate that the proposed modification can be used with confidence in the measurement of dietary fiber.
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
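Rarity-weighted richness is simple enough to compute in a few lines, which is the abstract's point about R or a spreadsheet. A minimal Python sketch, with invented species/site occurrence data for illustration:

```python
from collections import Counter

def rarity_weighted_richness(site_species):
    """Rank sites by rarity-weighted richness (RWR).

    site_species maps a site id to the set of species present there.
    Each species gets weight 1 / (number of sites it occupies); a
    site's RWR is the sum of the weights of its species, so sites
    holding range-restricted species rank first.
    """
    freq = Counter(sp for spp in site_species.values() for sp in spp)
    scores = {site: sum(1.0 / freq[sp] for sp in spp)
              for site, spp in site_species.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented occurrence data, for illustration only.
sites = {
    "A": {"lynx", "sparrow"},          # lynx occurs nowhere else
    "B": {"sparrow", "deer"},
    "C": {"sparrow", "deer", "vole"},  # vole occurs nowhere else
}
order = rarity_weighted_richness(sites)
print(order)  # → ['C', 'A', 'B']
```

Selecting sites in this order greedily accumulates rare species first, which is why RWR approximates minimum-set and maximum-coverage solutions in the comparison above.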
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Kuijer, P P F M; van Oostrom, S H; Duijzer, K; van Dieën, J H
2012-01-01
It is unclear whether the maximum acceptable weight of lift (MAWL), a common psychophysical method, reflects joint kinetics when different lifting techniques are employed. In a within-participants study (n = 12), participants performed three lifting techniques--free style, stoop and squat lifting from knee to waist level--using the same dynamic functional capacity evaluation lifting test to assess MAWL and to calculate low back and knee kinetics. We assessed which knee and back kinetic parameters increased with the load mass lifted, and whether the magnitudes of the kinetic parameters were consistent across techniques when lifting MAWL. MAWL was significantly different between techniques (p = 0.03). The peak lumbosacral extension moment met both criteria: it had the highest association with the load masses lifted (r > 0.9) and was most consistent between the three techniques when lifting MAWL (ICC = 0.87). In conclusion, MAWL reflects the lumbosacral extension moment across free style, stoop and squat lifting in healthy young males, but the relation between the load mass lifted and lumbosacral extension moment is different between techniques. Tests of maximum acceptable weight of lift (MAWL) from knee to waist height are used to assess work capacity of individuals with low-back disorders. This article shows that the MAWL reflects the lumbosacral extension moment across free style, stoop and squat lifting in healthy young males, but the relation between the load mass lifted and lumbosacral extension moment is different between techniques. This suggests that standardisation of lifting technique used in tests of the MAWL would be indicated if the aim is to assess the capacity of the low back.
Human Resources Division
2001-01-01
HR Division wishes to clarify to members of the personnel that the allowance for a dependent child continues to be paid during all training courses ('stages'), apprenticeships, 'contrats de qualification', sandwich courses or other courses of similar nature. Any payment received for these training courses, including apprenticeships, is however deducted from the amount reimbursable as school fees. HR Division would also like to draw the attention of members of the personnel to the fact that any contract of employment will lead to the suppression of the child allowance and of the right to reimbursement of school fees.
Yaghini, N.; Iedema, P.
2014-01-01
A population balance model for the prediction of molecular weight distribution (MWD) in a continuous stirred tank reactor (CSTR) has been developed accounting for multiradicals and gel formation in the framework of Galerkin-FEM. In the absence of recombination, gel does not form, but accounting for
Nakajo, Masatoyo [Nanpuh Hospital, Department of Radiology, Kagoshima (Japan); Kagoshima University, Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima (Japan); Kajiya, Yoriko; Tani, Atsushi; Ueno, Masako [Nanpuh Hospital, Department of Radiology, Kagoshima (Japan); Kaneko, Tomoyo; Kaneko, Youichi [Kaneko Clinic, Department of Breast Surgery, Kagoshima (Japan); Takasaki, Takashi [Department of Pathology, Clinical Pathology Laboratory, Kagoshima (Japan); Koriyama, Chihaya [Kagoshima University, Department of Epidemiology and Preventive Medicine, Graduate School of Medical and Dental Sciences, Kagoshima (Japan); Nakajo, Masayuki [Kagoshima University, Department of Radiology, Graduate School of Medical and Dental Sciences, Kagoshima (Japan)
2010-11-15
To correlate both primary lesion (18)F-fluorodeoxyglucose (FDG) maximum standardized uptake value (SUVmax) and diffusion-weighted imaging (DWI) apparent diffusion coefficient (ADC) with clinicopathological prognostic factors and compare the prognostic value of these indexes in breast cancer. The study population consisted of 44 patients with 44 breast cancers visible on both preoperative FDG PET/CT and DWI images. The breast cancers included 9 ductal carcinoma in situ (DCIS) and 35 invasive ductal carcinomas (IDC). The relationships between both SUVmax and ADC and clinicopathological prognostic factors were evaluated by univariate and multivariate regression analysis and the degree of correlation was determined by Spearman's rank test. The patients were divided into a better prognosis group (n = 24) and a worse prognosis group (n = 20) based upon invasiveness (DCIS or IDC) and upon their prognostic group (good, moderate or poor) determined from the modified Nottingham prognostic index. Their prognostic values were examined by receiver operating characteristic analysis. Both SUVmax and ADC were significantly associated (p < 0.05) with histological grade (independently), nodal status and vascular invasion. Significant associations were also noted between SUVmax and tumour size (independently), oestrogen receptor status and human epidermal growth factor receptor-2 status, and between ADC and invasiveness. SUVmax and ADC were negatively correlated (ρ = -0.486, p = 0.001) and were positively and negatively associated with increasing histological grade, respectively. The threshold values for predicting a worse prognosis were ≥4.2 for SUVmax (with a sensitivity, specificity and accuracy of 80%, 75% and 77%, respectively) and ≤0.98 for ADC (with a sensitivity, specificity and accuracy of 90%, 67% and 77%, respectively). SUVmax and ADC correlated with several pathological prognostic factors and both indexes may have the same potential for predicting the
Yaghini, N.; Iedema, P.D.
2015-01-01
Modeling of the mol. wt. distribution (MWD) of low-d. Polyethylene (ldPE) has been carried out for a tubular reactor under realistic non-isothermal conditions and for a series of CSTR's. The model allows for the existence of multiradicals and the occurrence of gelation. The deterministic model is
Yaghini, N.; Iedema, P.D.
2015-01-01
Modeling of the mol. wt. distribution (MWD) under circumstances of low-d. polyethylene (ldPE) has been carried out for a tubular reactor under realistic non-isothermal conditions and for a series of CSTR's (Yaghini and Iedema, in press). The model allows for the existence of multiradicals and the
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and on the heart rates of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. We used SPSS version 18 software with descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. The results of the ANOVA showed a significant difference between the means of MAWL across the frequencies of lifts (p = 0.02). Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.01). There was a significant difference between the mean heart rates across the frequencies of lifts (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.004). However, there was no significant difference in either the mean of MAWL or the mean heart rate across lifting heights (p > 0.05). The results of the t-test showed a significant difference in both the mean of MAWL and the mean heart rate between the two box sizes (p < 0.001). Based on the results of
Garrick Dorian J
2010-06-01
Full Text Available Abstract Background The distribution of residual effects in linear mixed models in animal breeding applications is typically assumed normal, which makes inferences vulnerable to outlier observations. In order to mute the impact of outliers, one option is to fit models with residuals having a heavy-tailed distribution. Here, a Student's-t model was considered for the distribution of the residuals, with the degrees of freedom treated as unknown. Bayesian inference was used to investigate a bivariate Student's-t (BSt) model using Markov chain Monte Carlo methods; a simulation study and an analysis of field data on gestation length and birth weight permitted study of the practical implications of fitting heavy-tailed distributions for residuals in linear mixed models. Methods In the simulation study, bivariate residuals were generated using a Student's-t distribution with 4 or 12 degrees of freedom, or a normal distribution. Sire models with bivariate Student's-t or normal residuals were fitted to each simulated dataset using a hierarchical Bayesian approach. For the field data, consisting of gestation length and birth weight records on 7,883 Italian Piemontese cattle, a sire-maternal grandsire model including fixed effects of sex-age of dam and uncorrelated random herd-year-season effects was fitted using a hierarchical Bayesian approach. Residuals were defined to follow bivariate normal or Student's-t distributions with unknown degrees of freedom. Results Posterior mean estimates of degrees of freedom parameters seemed to be accurate and unbiased in the simulation study. Estimates of sire and herd variances were similar, if not identical, across fitted models. In the field data, there was strong support based on predictive log-likelihood values for the Student's-t error model. Most of the posterior density for degrees of freedom was below 4. Posterior means of direct and maternal heritabilities for birth weight were smaller in the Student's-t model
Kozuki, Naoko; Katz, Joanne; Clermont, Adrienne; Walker, Neff
2017-09-13
Background: The Lives Saved Tool (LiST) is a software model that estimates the health impact of scaling up interventions on maternal and child health. One of the outputs of the model is an estimation of births by fetal size [appropriate-for-gestational-age (AGA) or small-for-gestational-age (SGA)] and by length of gestation (term or preterm), both of which influence birth weight. LiST uses prevalence estimates of births in these categories rather than of birth weight categories, because the causes and health consequences differ between SGA and preterm birth. The World Health Assembly nutrition plan, however, has set the prevalence of low birth weight (LBW) as a key indicator, with a specific goal of a 30% reduction in LBW prevalence by 2025. Objective: The objective of the study is to develop an algorithm that will allow LiST users to estimate changes in prevalence of LBW on the basis of changes in coverage of interventions and the resulting impact on prevalence estimates of SGA and preterm births. Methods: The study used 13 prospective cohort data sets from low- and middle-income countries (LMICs; 4 from sub-Saharan Africa, 5 from Asia, and 4 from Latin America), with reliable measures of gestational age and birth weight. By calculating the proportion of LBW births among SGA and preterm births in each data set and meta-analyzing those estimates, we calculated region-specific pooled rates of LBW among SGA and preterm births. Results: In Africa, 0.4% of term-AGA, 36.7% of term-SGA, 49.3% of preterm-AGA, and 100.0% of preterm-SGA births were LBW. In Asia, 1.0% of term-AGA, 47.0% of term-SGA, 36.7% of preterm-AGA, and 100.0% of preterm-SGA births were LBW. In Latin America, 0.4% of term-AGA, 34.4% of term-SGA, 32.3% of preterm-AGA, and 100.0% of preterm-SGA births were LBW. Conclusions: The simple conversion factor proposed here allows for the estimation of LBW within LiST for most LMICs. This will allow LiST users to approximate the impact of their health programs on LBW
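The proposed conversion is just a category-weighted sum. The sketch below uses the Africa conversion factors quoted in the abstract; the birth-category mix is a hypothetical input supplied for illustration, not data from the study.

```python
# Proportions of LBW births within each birth category, taken from
# the abstract's Africa estimates.
LBW_FRACTION_AFRICA = {
    "term-AGA": 0.004,
    "term-SGA": 0.367,
    "preterm-AGA": 0.493,
    "preterm-SGA": 1.000,
}

def lbw_prevalence(category_prevalence, lbw_fraction):
    """Estimate LBW prevalence as the category-weighted sum of the
    region-specific conversion factors (category prevalences sum to 1)."""
    return sum(category_prevalence[c] * lbw_fraction[c] for c in lbw_fraction)

# Hypothetical birth-category mix, for illustration only.
mix = {"term-AGA": 0.70, "term-SGA": 0.18,
       "preterm-AGA": 0.09, "preterm-SGA": 0.03}
print(lbw_prevalence(mix, LBW_FRACTION_AFRICA))  # about 0.143
```

As intervention coverage in LiST shifts the SGA/preterm category prevalences, re-applying this sum gives the corresponding change in estimated LBW prevalence.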
Method to Determine Maximum Allowable Sinterable Silver Interconnect Size
Wereszczak, A. A.; Modugno, M. C.; Waters, S. B.; DeVoto, D. J.; Paret, P. P.
2016-05-01
The use of sintered-silver for large-area interconnection is attractive for some large-area bonding applications in power electronics, such as the bonding of metal-clad, electrically-insulating substrates to heat sinks. Arrays of different pad sizes and pad shapes have been considered for such large-area bonding; however, rather than arbitrarily choosing their size, it is desirable to use the largest size at which the onset of interconnect delamination does not occur. If that is achieved, then sintered-silver's high thermal and electrical conductivities can be taken full advantage of. Toward achieving this, a simple and inexpensive proof test is described to identify the largest achievable interconnect size with sinterable silver. The method's objective is to purposely initiate failure or delamination. Copper and invar (a ferrous-nickel alloy whose coefficient of thermal expansion (CTE) is similar to that of silicon or silicon carbide) disks were used in this study, and sinterable silver was used to bond them. As a consequence of the method's execution, delamination occurred in some samples during cooling from the 250 degrees C sintering and bonding temperature to room temperature, and in others from thermal cycling. These occurrences and their interpretations highlight the method's utility, and the results described herein are used to speculate how sintered-silver bonding will work with other material combinations.
ASSESSMENT OF MAXIMUM ALLOWABLE FISCAL BURDEN ON UKRAINE NATIONAL ECONOMY
M. Aleksandrova
2014-03-01
Full Text Available The article examines the reproductive aspect of the relationship between the fiscal burden and the propensity for economic development, under certain assumptions about the relationship of these variables. The analysis is based on a fairly simple dynamic model in which the share of income that goes to the development of production is assumed constant. The computed optimal fiscal burden for the economic development of the country is 19.29% of GDP. The tax burden was estimated and its dynamics tracked, with a comparative assessment against the figures of developed countries. The prospects of the proposed approach for predicting the development of the national economy are analyzed.
Rubin, Stephen P.; Reisenbichler, Reginald R.; Slatton, Stacey L.; Rubin, Stephen P.; Reisenbichler, Reginald R.; Wetzel, Lisa A.; Hayes, Michael C.
2012-01-01
The accuracy of a model that predicts time between fertilization and maximum alevin wet weight (MAWW) from incubation temperature was tested for steelhead Oncorhynchus mykiss from Dworshak National Fish Hatchery on the Clearwater River, Idaho. MAWW corresponds to the button-up fry stage of development. Embryos were incubated at warm (mean=11.6°C) or cold (mean=7.3°C) temperatures and time between fertilization and MAWW was measured for each temperature. Model predictions of time to MAWW were within 1% of measured time to MAWW. Mean egg weight ranged from 0.101-0.136 g among females (mean = 0.116). Time to MAWW was positively related to egg size for each temperature, but the increase in time to MAWW with increasing egg size was greater for embryos reared at the warm than at the cold temperature. We developed equations accounting for the effect of egg size on time to MAWW for each temperature, and also for the mean of those temperatures (9.3°C).
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis of the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...
梅灿华; 张玉红; 胡学钢; 李培培
2011-01-01
Traditional machine learning and data mining algorithms mainly assume that the training and test data are drawn from the same feature space and follow the same distribution. In real applications, however, data distributions change frequently, so these two assumptions are difficult to satisfy. In such cases most traditional algorithms are no longer applicable, because they usually require re-collecting and re-labeling large amounts of data, which is expensive and time consuming. As a new learning framework, transfer learning can effectively solve this problem by transferring the knowledge learned from one or more source domains to a target domain. This paper focuses on one of the important branches in this field, namely inductive transfer learning, and proposes a weighted inductive transfer learning algorithm based on the maximum entropy model (WTLME). The algorithm transfers the parameters of the model learned from the source domain to the target domain, and meanwhile adjusts the weights of instances in the target domain to obtain a model with higher accuracy. It can thereby speed up the learning process and achieve domain adaptation. The experimental results show the effectiveness of the algorithm.
Karan, Belgin; Pourbagher, Aysin; Torun, Nese
2016-06-01
To evaluate the correlations of the apparent diffusion coefficient (ADC) value and the standardized uptake value (SUV) with prognostic factors in breast cancer. Seventy women with invasive breast cancer (56 cases of invasive ductal carcinoma, four of mixed ductal and lobular invasive carcinoma, three of lobular invasive carcinoma, two of micropapillary carcinoma, and one each of mixed ductal and mucinous carcinoma, mucinous carcinoma, medullary carcinoma, metaplastic carcinoma, and tubular carcinoma) were included in this study. All patients underwent presurgical breast magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) at 1.5T and whole-body (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET)/computed tomography (CT). For all invasive breast cancers and invasive ductal carcinomas, we assessed the relationships among ADC, SUV, and pathological prognostic factors. Both the median ADC value and maximum SUV (SUVmax) were significantly associated with vascular invasion (P = 0.008 and P = 0.026, respectively). SUVmax was also significantly correlated with tumor size (P = 0.001), histological grade (P = 0.001), lymph node status (P = 0.0015), estrogen receptor status (P = 0.010), and human epidermal growth factor receptor 2 status (P = 0.020), whereas ADC values were not. The correlation between the ADC and SUVmax was not significant (P = 0.356; R = -0.112). Mucinous carcinoma showed high ADC and relatively low SUVmax. Medullary carcinoma showed low ADC and high SUVmax. When we evaluated the relationships among ADC, SUVmax, and prognostic factors in the 56 invasive ductal carcinomas, our statistical results were not significantly changed, except that SUVmax was also significantly associated with progesterone receptor status (P = 0.034), but not lymph node status. SUVmax may be valuable for predicting the prognosis of breast cancer. Both ADC and SUVmax are useful to predict vascular invasion. J. Magn. Reson. Imaging 2016
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
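The Mean Energy Model mentioned above has a well-known closed form: the maximum-entropy distribution under a mean-energy constraint is the Gibbs distribution p_i ∝ exp(-β E_i), with the multiplier β fixed by the constraint. A minimal numerical sketch (the three-state energies and target mean are invented for illustration):

```python
import math

def maxent_mean_energy(energies, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over finite states subject to a
    mean-energy constraint.

    The solution has the Gibbs form p_i ∝ exp(-beta * E_i); beta is
    found by bisection so that the mean energy matches the target
    (the mean is monotonically decreasing in beta).
    """
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z

    for _ in range(200):                # bisection on beta
        mid = (lo + hi) / 2.0
        if mean_at(mid) > target_mean:  # mean too high: increase beta
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2.0
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Invented example: states with energies 0, 1, 2 and mean energy 0.5.
p = maxent_mean_energy([0.0, 1.0, 2.0], target_mean=0.5)
print([round(x, 3) for x in p])
```

The resulting distribution satisfies the moment constraint exactly (up to bisection tolerance) while having maximum entropy among all distributions that do.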
42 CFR 50.504 - Allowable cost of drugs.
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Allowable cost of drugs. 50.504 Section 50.504... APPLICABILITY Maximum Allowable Cost for Drugs § 50.504 Allowable cost of drugs. (a) The maximum amount which may be expended from program funds for the acquisition of any drug shall be the lowest of (1)...
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, either experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
陈嘉; 李拥军; 杨文萍
2009-01-01
Objective To study the effects of carbon disulfide exposure within the national maximum allowable concentration (MAC) on blood pressure and electrocardiogram, and associations with selected factors. Methods Workers in a chemical fiber factory were divided into two groups based on the type of work: a high exposure group (HEG) of 821 individuals and a low exposure group (LEG) of 259. The CS_2 concentration at the workplace was controlled under the national MAC. A set of 250 randomly selected people taking routine physical check-ups in the same period and hospital constituted the control group. The systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured on the arm, and the pulse pressure (PP) and mean arterial blood pressure (MABP) were calculated based on SBP and DBP. The blood pressure data, along with the results of the routine 12-lead electrocardiography taken at rest and records on gender, age, years of work, type of work, and concentrations of triglycerol, cholesterol, and glucose in blood, were compiled for analyses. Risk factors upon CS_2 exposure for the increase of blood pressure and occurrence of electrocardiogram abnormalities were identified and rationalized. Results Significant difference (P<0.01) in the average values of SBP, DBP, MABP, and the corresponding abnormality incident rates was found between HEG and LEG, and between HEG and the control group. For both HEG and LEG, the incident rate of DBP abnormality (high DBP) is nearly two times as high as that of SBP. Type of work is the largest risk factor in both the high SBP and high DBP subgroups, with odds ratios (OR) of 2.086 and 2.331 respectively, and high CS_2 exposure presents more than double the risk of low exposure. On the incident rate of ECG abnormalities, both exposure groups are significantly different (P<0.01) from the control group. High SBP in LEG and high DBP in HEG were found to be significant risk factors (OR = 3.531 and 1.638 respectively), while blood glucose
An efficient approximation algorithm for finding a maximum clique using Hopfield network learning.
Wang, Rong Long; Tang, Zheng; Cao, Qi Ping
2003-07-01
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
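The Hopfield-network learning scheme itself is specialized, but the underlying task is easy to state. As an illustration only (a randomized greedy heuristic, not the authors' gradient-ascent algorithm), a maximum-clique search can be sketched as:

```python
import random

def greedy_clique(adj, order):
    """Grow a clique by scanning vertices in the given order."""
    clique = []
    for v in order:
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def max_clique_heuristic(adj, restarts=200, seed=0):
    """Randomized restarts of the greedy growth; returns the best clique found."""
    rng = random.Random(seed)
    vertices = list(adj)
    best = []
    for _ in range(restarts):
        rng.shuffle(vertices)
        c = greedy_clique(adj, vertices)
        if len(c) > len(best):
            best = c
    return best

# Toy graph: a 4-clique {0, 1, 2, 3} plus a pendant vertex 4.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(sorted(max_clique_heuristic(adj)))  # finds the 4-clique [0, 1, 2, 3]
```

Like the paper's method, the heuristic escapes poor local solutions only by repeated randomized attempts; exact solvers are exponential in the worst case since maximum clique is NP-hard.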
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample
$\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs
van de Geer, Sara
2012-01-01
We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.
MaxOcc: a web portal for maximum occurrence analysis.
Bertini, Ivano; Ferella, Lucio; Luchinat, Claudio; Parigi, Giacomo; Petoukhov, Maxim V; Ravera, Enrico; Rosato, Antonio; Svergun, Dmitri I
2012-08-01
The MaxOcc web portal is presented for the characterization of the conformational heterogeneity of two-domain proteins, through the calculation of the Maximum Occurrence that each protein conformation can have in agreement with experimental data. Whatever the real ensemble of conformations sampled by a protein, the weight of any conformation cannot exceed the calculated corresponding Maximum Occurrence value. The present portal allows users to compute these values using any combination of restraints like pseudocontact shifts, paramagnetism-based residual dipolar couplings, paramagnetic relaxation enhancements and small angle X-ray scattering profiles, given the 3D structure of the two domains as input. MaxOcc is embedded within the NMR grid services of the WeNMR project and is available via the WeNMR gateway at http://py-enmr.cerm.unifi.it/access/index/maxocc . It can be used freely upon registration to the grid with a digital certificate.
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form $w^n P_n$. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some $L_p$ extremal problems on the real line with exponential weights, which, for the case $p=2$, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some $L_p$ extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
İbrahim Ümran Akdağcık
2014-02-01
Full Text Available The objective of this research is to estimate the weight of the true 1 Repetition Maximum (1RM) using the 3, 6 and 10 Repetition Maximum methods. For this purpose, a group of 45 actively training male subjects was formed, with ages (mean = 18±0.6396), heights (mean = 174.37±4.44), weights (mean = 62.91±6.77) and somatotype values (1.43±0.14 / 4.7±1.33 / 3.5±0.90). Regression formulas for predicting the 1 Repetition Maximum in the bench press were developed using the 1, 3, 6 and 10 Repetition Maximum methods. No statistically significant difference (p>0.01) was found between the 1RM predicted by the regression formulas and the 1RM measured directly. In addition, the ratios of the 3, 6 and 10 Repetition Maximum weights to the 1 Repetition Maximum were calculated (3RM 91.95%, 6RM 83.37%, 10RM 71.46%). As a result, the regression formula with the smallest standard error (se = ±1.120) was the one obtained with the 3 Repetition Maximum (3RM) method: Y = 1.619 + (1.062 × 3RM).
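The study's reported regression and mean ratios can be turned into a small helper. This is only an illustration of the published numbers, not code from the original study; the function names are my own:

```python
def estimate_1rm_from_3rm(w3: float) -> float:
    """Study's best regression: predicted 1RM = 1.619 + 1.062 * (3RM weight)."""
    return 1.619 + 1.062 * w3

# Reported mean ratios of the nRM weight to the true 1RM (3RM 91.95%, etc.).
RATIO = {3: 0.9195, 6: 0.8337, 10: 0.7146}

def rough_1rm(weight: float, reps: int) -> float:
    """Back out a 1RM estimate from the reported average ratios (illustrative)."""
    return weight / RATIO[reps]

# A lifter who benches 80 kg for a 3-rep maximum:
print(round(estimate_1rm_from_3rm(80.0), 2))  # 1.619 + 1.062*80 = 86.58
```

Both estimators agree only approximately; the regression is the one the study found to have the smallest standard error.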
Simone Bittencourt
2006-06-01
Full Text Available To implement and operationalize Brazilian water resources policy, it is indispensable to use planning tools that consider the effect of all activities or processes that cause or contribute to the degradation of the quality of a water body. To this end, the TMDL (total maximum daily load) process, developed by the United States Environmental Protection Agency (EPA), was applied for P in the drainage area contributing to the future Piraquara II reservoir, Piraquara river basin, Paraná. The TMDL process determines the maximum pollutant load that a water body can receive without violating established water quality standards, and allocates this load among point and nonpoint pollution sources. In the present study, the TMDL method was used with the objective of demonstrating that it is a useful tool in the water resources management process. Land use scenarios were simulated by mathematical modeling until a total P concentration in the reservoir was obtained below the threshold range for the occurrence of eutrophication, 0.025 to 0.10 mg L-1, established in the study. A simulation of current land use was carried out to predict the initial water quality condition of the water body, in which the resulting total P concentration in the reservoir did not meet the established standard. A second simulation was then performed adopting control measures, riparian forest restoration and no-till farming, to reduce the export of the total P load from the basin. An improvement in reservoir water quality was obtained, indicating that the adopted measures were sufficient to meet the established standard, which demonstrates the applicability of the method.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions. MENT can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
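As a sketch of the underlying idea (not Belashev's implementation), a maximum-entropy distribution under a single mean constraint has the exponential form p_i ∝ exp(−λ v_i), with the Lagrange multiplier λ fixed by the constraint. With only the standard library:

```python
import math

def maxent_dist(values, target_mean, lo=-10.0, hi=10.0, iters=200):
    """Maximum-entropy distribution over `values` subject to a mean constraint:
    p_i ∝ exp(-lam * v_i), with lam found by bisection so that E[v] = target_mean."""
    def mean_for(lam):
        w = [math.exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        # mean_for is decreasing in lam, so a too-high mean calls for a larger lam.
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' dice example: faces 1..6 constrained to average 4.5 instead of 3.5.
p = maxent_dist([1, 2, 3, 4, 5, 6], 4.5)
print(p[5] > p[0])  # higher faces get more weight when the mean is pushed up
```

The same machinery extends to several constraints, where each one contributes its own multiplier and the root-finding becomes multidimensional.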
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
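The correntropy objective itself is simple to state. The following sketch (illustrative only, not the paper's learning machine) shows why maximizing it is robust to outlying labels: the Gaussian kernel bounds the contribution of any single error, unlike a squared loss:

```python
import math

def correntropy(y_true, y_pred, sigma=1.0):
    """Sample correntropy: the average Gaussian kernel of the errors.
    Each term lies in (0, 1], so a wild error is smoothly capped."""
    k = lambda e: math.exp(-e * e / (2 * sigma * sigma))
    return sum(k(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

clean = correntropy([1, 1, -1, -1], [0.9, 1.1, -1.2, -0.8])
noisy = correntropy([1, 1, -1, -1], [0.9, 1.1, -1.2, 100.0])  # one wild label
print(clean > noisy)  # the outlier lowers correntropy only by a bounded amount
```

With a squared loss the single outlier would dominate the objective; here its kernel term simply saturates near zero, which is the robustness property the MCC framework exploits.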
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
Paulo Henrique Siqueira
2004-08-01
Full Text Available The purpose of this paper is to show the application of the maximum weight Matching Algorithm to the construction of workdays for bus drivers and fare collectors. The problem must be solved making the best possible use of the timetables, with the objective of minimizing the number of employees, overtime hours and idle hours; in this way, the costs of public transportation companies are minimized. In the first phase, assuming that the timetables are already divided into short and long duration shifts, the short shifts are combined to form an employee's daily workday. This combination is made with the maximum weight Matching Algorithm, in which the shifts are represented by vertices of a graph, and maximum weight is assigned to combinations of shifts that produce neither overtime nor idle hours. In the second phase, a weekend workday is assigned to each weekday workweek. Through these two phases, the weekly workdays of bus drivers and fare collectors can be constructed at minimum cost. The third and final phase consists of assigning the weekly workdays to each driver and fare collector, taking their preferences into account; the maximum weight Matching Algorithm is used for this phase as well. This work was applied in three public transportation companies in the city of Curitiba - PR, in which the algorithms previously used were heuristics based only on the experience of the person in charge of this task.
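For small instances, the shift-pairing step can even be solved by brute force. The sketch below is illustrative only (the shift names and weights are hypothetical; a production system would use a proper maximum-weight matching algorithm such as Edmonds' blossom method):

```python
def max_weight_matching(weights):
    """Brute-force maximum-weight matching on a small graph given as
    {(u, v): weight}; each vertex is used at most once."""
    edges = sorted(weights)
    def solve(i, used):
        if i == len(edges):
            return 0.0, []
        u, v = edges[i]
        best_w, best_m = solve(i + 1, used)        # option 1: skip this edge
        if u not in used and v not in used:        # option 2: take this edge
            w, m = solve(i + 1, used | {u, v})
            if w + weights[(u, v)] > best_w:
                best_w, best_m = w + weights[(u, v)], [(u, v)] + m
        return best_w, best_m
    return solve(0, frozenset())

# Hypothetical short shifts; weight = how well two shifts combine into one
# workday (high weight = no overtime and no idle hours between them).
w = {("am1", "pm1"): 8, ("am1", "pm2"): 3, ("am2", "pm1"): 5, ("am2", "pm2"): 7}
total, pairs = max_weight_matching(w)
print(total, pairs)  # picks am1-pm1 and am2-pm2, total weight 15
```

The exponential brute force is fine for a handful of shifts; the paper's setting, with full city timetables, is exactly where the polynomial matching algorithm pays off.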
40 CFR 35.2025 - Allowance and advance of allowance.
2010-07-01
... facilities planning and design of the project and Step 7 agreements will include an allowance for facility planning in accordance with appendix B of this subpart. (b) Advance of allowance to potential grant... grant applicants for facilities planning and project design. (2) The State may request that the right to...
2010-10-01
...) of this section, additional design features, such as mechanical or composite crack arrestors and/or... surface of the plate/coil or pipe to identify imperfections that impair serviceability such as laminations... must be a hardness test, using Vickers (Hv10) hardness test method or equivalent test method, to...
5 CFR 591.104 - Higher initial maximum uniform allowance rate.
2010-01-01
... specific items required for the basic uniform and the average total uniform cost for the affected employees... requirement. (e) So that OPM can evaluate agencies' use of this authority and provide the Congress and others... initial year a new style or type of minimum basic uniform is required for a category of employees, an...
49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.
2010-10-01
... integrity of the coating using direct current voltage gradient (DCVG) or alternating current voltage... pipeline segment. (ii) To address interference currents, perform the following: (A) Conduct an interference survey to detect the presence and level of any electrical current that could impact external...
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, called the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
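As an illustration of the maximum-likelihood side of such a detector (a brute-force sketch under an assumed known channel, not the paper's equalized variant), ML block detection of BPSK symbols over a two-tap intersymbol-interference channel can be written as:

```python
from itertools import product

def ml_detect(received, taps, block):
    """Brute-force maximum likelihood detection of a BPSK block sent through
    an ISI channel y[n] = sum_k taps[k] * s[n-k] (plus noise)."""
    def channel(sym):
        padded = [0] * (len(taps) - 1) + list(sym)
        return [sum(t * padded[n + len(taps) - 1 - k] for k, t in enumerate(taps))
                for n in range(len(sym))]
    best, best_err = None, float("inf")
    for cand in product([-1, 1], repeat=block):  # try all 2^block sequences
        err = sum((r - y) ** 2 for r, y in zip(received, channel(cand)))
        if err < best_err:
            best, best_err = cand, err
    return list(best)

taps = [1.0, 0.5]            # channel with one interfering tap
sent = [1, -1, -1, 1]
rx = [1.0, -0.5, -1.5, 0.5]  # noiseless channel output for `sent`
print(ml_detect(rx, taps, 4))  # recovers [1, -1, -1, 1]
```

The exhaustive search is exponential in the block length, which is precisely why practical systems use near-ML detectors (or the Viterbi algorithm) rather than the full search sketched here.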
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
Xue, Bingtian; Larsen, Kim Guldstrand; Mardare, Radu Iulian
2015-01-01
We introduce Concurrent Weighted Logic (CWL), a multimodal logic for concurrent labeled weighted transition systems (LWSs). The synchronization of LWSs is described using dedicated functions that, in various concurrency paradigms, allow us to encode the compositionality of LWSs. To reflect these......-completeness results for this logic. To complete these proofs we involve advanced topological techniques from Model Theory....
Ackerman, Margareta; Branzei, Simina; Loker, David
2011-01-01
In this paper we investigate clustering in the weighted setting, in which every data point is assigned a real valued weight. We conduct a theoretical analysis on the influence of weighted data on standard clustering algorithms in each of the partitional and hierarchical settings, characterising the precise conditions under which such algorithms react to weights, and classifying clustering methods into three broad categories: weight-responsive, weight-considering, and weight-robust. Our analysis raises several interesting questions and can be directly mapped to the classical unweighted setting.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L that all trees in T, when restricted to X, are consistent with.
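The notion of a tree displaying a rooted triplet, central to these algorithms, is easy to check directly. A minimal sketch (the nested-tuple tree encoding is an assumption made for illustration, not the paper's data structure):

```python
def leaves(t):
    """Leaf set of a nested-tuple tree; a non-tuple node is a leaf label."""
    return {t} if not isinstance(t, tuple) else set().union(*(leaves(c) for c in t))

def consistent(tree, a, b, c):
    """Is the rooted triplet ab|c displayed by `tree`?
    True iff some clade contains both a and b but excludes c."""
    if not isinstance(tree, tuple):
        return False
    for child in tree:
        s = leaves(child)
        if a in s and b in s:
            # a and b stay together below this child; succeed once c is excluded.
            return c not in s or consistent(child, a, b, c)
    return False

tree = ((("x", "y"), "z"), "w")
print(consistent(tree, "x", "y", "z"))  # True: {x, y} is a clade excluding z
print(consistent(tree, "x", "z", "y"))  # False: no clade separates x,z from y
```

A maximum consistent supertree maximizes the number of input triplets for which such a check succeeds, which is what makes the problem combinatorially hard for large label sets.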
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Peer Effects, Fast Food Consumption and Adolescent Weight Gain
Fortin, Bernard; Yazbeck, Myra
2015-01-01
This paper aims at opening the black box of peer effects in adolescent weight gain. Using Add Health data on secondary schools in the U.S., we investigate whether these effects partly flow through the eating habits channel. Adolescents are assumed to interact through a friendship social network. We propose a two-equation model. The first equation provides a social interaction model of fast food consumption. To estimate this equation we use a quasi maximum likelihood approach that allows us to...
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Control system for maximum use of adhesive forces of a railway vehicle in a tractive mode
Spiryagin, Maksym; Lee, Kwan Soo; Yoo, Hong Hee
2008-04-01
The realization of maximum adhesive forces for a railway vehicle is a very difficult process, because it involves using tractive efforts and depends on friction characteristics in the contact zone between wheels and rails. Tractive efforts are realized by means of tractive torques of motors, and their maximum values can produce negative effects such as slip and skid. These situations usually happen when information about friction conditions is lacking. The negative processes have a major influence on wear of the contact bodies and tractive units. Therefore, many existing control systems for vehicles rely on prediction of the friction coefficient between wheels and rails, because measuring the friction coefficient while the vehicle is running is very difficult. One way to solve this task is to use noise spectrum analysis for friction coefficient detection. This noise phenomenon has not been clearly studied and analyzed. In this paper, we propose an adhesion control system for railway vehicles based on an observer, which allows one to determine the maximum tractive torque based on the optimal adhesive force between the wheels (wheel pair) of a railway vehicle and rails (rail track), depending on the weight load from a wheel to a rail, friction conditions in the contact zone, lateral displacement of the wheelset, and wheel slip. As a result, it allows a railway vehicle to be driven in a tractive mode at the maximum adhesion force for real friction conditions.
Vietnam recommended dietary allowances 2007.
Khan, Nguyen Cong; Hoan, Pham Van
2008-01-01
It has been well acknowledged that Vietnam is undergoing a nutrition transition. With a rapid change in the country's reform and economic growth, food supply at the macronutrient level has improved. Changes of the Vietnamese diet include significantly more foods of animal origin, and an increase of fat/oils, and ripe fruits. Consequently, nutritional problems in Vietnam now include not only malnutrition but also overweight/obesity, metabolic syndrome and other chronic diseases related to nutrition and lifestyles. The recognition of these shifts, which is also associated with morbidity and mortality, was a major factor in the need to review and update the Recommended Dietary Allowances (RDA) for the Vietnamese population. This revised RDA established an important science-based tool for evaluation of nutrition adequacy, for teaching, and for scientific communications within Vietnam. It is expected that the 2007 Vietnam RDA and its conversion to food-based dietary guidelines will facilitate education to the public, as well as the policy implementation of programs for prevention of non-communicable chronic diseases and addressing the double burden of both under and over nutrition.
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
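As context for the record above, the core MIP operator itself is a one-line reduction: project the volume by keeping the brightest voxel along each viewing ray. The sketch below shows only this base operator on an invented synthetic volume; the morphological adjunction pyramid that provides the multiresolution analysis and perfect reconstruction is not reproduced here.

```python
import numpy as np

# Synthetic 3-D volume: a bright blob inside a dark field (made-up data).
rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 0.2, size=(32, 32, 32))
vol[12:20, 12:20, 12:20] = 1.0  # high-intensity region

# Maximum intensity projection along the viewing (z) axis:
# each output pixel keeps the brightest voxel along its ray.
mip = vol.max(axis=0)

print(mip.shape)        # (32, 32)
print(mip[16, 16])      # 1.0 -- this ray passes through the bright blob
print(mip[0, 0] < 0.2)  # True -- background ray
```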
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
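For contrast with the maximum entropy test described in the record (whose details the abstract does not give), a standard maximum likelihood baseline for a power-law tail is the Hill estimator. The sketch below applies it to synthetic Pareto data; the sample size, seed, and tail fraction k are arbitrary choices for illustration.

```python
import random, math

random.seed(1)

# Draw a Pareto(alpha) sample via inverse-CDF: x = xm / U^(1/alpha).
alpha_true, xm, n = 2.5, 1.0, 20000
sample = [xm / random.random() ** (1.0 / alpha_true) for _ in range(n)]

def hill(data, k):
    """Hill estimator over the k largest observations: the maximum
    likelihood estimate of the tail index of a power-law tail."""
    tail = sorted(data)[-k:]
    x_k = tail[0]  # threshold: k-th largest value
    return k / sum(math.log(x / x_k) for x in tail)

alpha_hat = hill(sample, 2000)
print(round(alpha_hat, 2))  # close to alpha_true = 2.5
```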
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
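The piecewise-linear trick in the record can be sketched as follows: because each entropy term f(p) = -p ln p is concave, it is the lower envelope of its tangent lines, so bounding an auxiliary variable t_i by a family of tangents turns entropy maximization into a linear program. This is an illustrative reconstruction under assumed details (the anchor points, outcome values, and mean constraint are invented), and it uses scipy's `linprog` rather than Dantzig's bounded-variable method.

```python
import numpy as np
from scipy.optimize import linprog

# Discrete distribution p over outcomes x, constrained to a given mean.
x = np.array([1.0, 2.0, 3.0])
mean = 2.2

# f(p) = -p*ln(p) is concave, so f(p) = min over tangents; the tangent
# at anchor a is  a - p*(ln(a) + 1).
anchors = np.linspace(0.02, 0.98, 25)
slopes = -(np.log(anchors) + 1.0)   # f'(a) = -ln(a) - 1
intercepts = anchors                # tangent value at p = 0

n = len(x)
# Variables: [p_1..p_n, t_1..t_n]; minimize -(sum of t) = -entropy.
c = np.concatenate([np.zeros(n), -np.ones(n)])

A_ub, b_ub = [], []
for i in range(n):
    for s, b in zip(slopes, intercepts):
        row = np.zeros(2 * n)
        row[i] = -s        # t_i - s*p_i <= b  (tangent upper bound on t_i)
        row[n + i] = 1.0
        A_ub.append(row)
        b_ub.append(b)

A_eq = np.array([np.concatenate([np.ones(n), np.zeros(n)]),
                 np.concatenate([x, np.zeros(n)])])
b_eq = np.array([1.0, mean])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (2 * n))
p = res.x[:n]
print(np.round(p, 3))  # skewed toward x = 3, matching the mean constraint
```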
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
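The record describes a multi-channel estimator; as a minimal single-channel illustration of the underlying approximate maximum likelihood (nonlinear least squares) idea, one can grid-search for the fundamental frequency whose harmonics capture the most signal energy. The signal parameters below are invented for the example.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, int(0.05 * fs)) / fs   # 50 ms frame

# Synthetic voiced-like signal: 3 harmonics of 220 Hz plus noise.
rng = np.random.default_rng(3)
f0_true = 220.0
x = sum(np.cos(2 * np.pi * k * f0_true * t + 0.3 * k) for k in (1, 2, 3))
x += 0.1 * rng.standard_normal(t.size)

def nls_pitch(x, t, f0_grid, n_harm=3):
    """Approximate ML (nonlinear least squares) pitch estimate:
    pick the f0 whose harmonic components capture the most energy."""
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        score = 0.0
        for k in range(1, n_harm + 1):
            c = np.exp(-2j * np.pi * k * f0 * t)
            score += abs(np.dot(c, x)) ** 2   # energy at k-th harmonic
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

f0_hat = nls_pitch(x, t, np.arange(100.0, 400.0, 1.0))
print(f0_hat)  # 220.0
```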
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
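CORA's exact fixed-point equation is not given in the record; the sketch below only illustrates the underlying idea of fitting a line flux to low-count data by minimizing the Poisson negative log-likelihood. All spectrum parameters (background, line position, width) are invented, and the background is held fixed for simplicity.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

# Binned spectrum: flat background (counts/bin) plus a Gaussian line.
bins = np.arange(100)
b_true, a_true, mu, sigma = 2.0, 40.0, 50.0, 3.0
profile = np.exp(-0.5 * ((bins - mu) / sigma) ** 2)
profile /= profile.sum()                      # unit-area line profile
counts = rng.poisson(b_true + a_true * profile)

def neg_log_like(a):
    """Poisson negative log-likelihood for line flux a (background fixed)."""
    lam = b_true + a * profile
    return np.sum(lam - counts * np.log(lam))

res = minimize_scalar(neg_log_like, bounds=(0.0, 500.0), method="bounded")
print(res.x)  # maximum likelihood line flux, recovering a_true up to noise
```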
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems, and moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with Standard BackPropagation (SBP) on the XOR problem.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Dąbrowski
2015-09-01
Full Text Available We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
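The quoted limit is easy to evaluate numerically; the short check below simply plugs CODATA constants into F_max = c^4/4G and is not part of either record.

```python
# Numerical value of the conjectured maximum force F_max = c^4 / (4G).
c = 2.99792458e8        # speed of light, m/s (exact by definition)
G = 6.67430e-11         # Newtonian constant of gravitation, m^3 kg^-1 s^-2

F_max = c ** 4 / (4 * G)
print(f"{F_max:.3e} N")  # ~3.026e+43 N
```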
Optimum Weight Design of Functionally Graded Material Gears
JING Shikai; ZHANG He; ZHOU Jingtao; SONG Guohua
2015-01-01
Traditional gear weight optimization methods treat gear tooth number, module, face width, or other dimensional parameters of the gear as design variables. However, owing to the complicated form and geometric features peculiar to gears, gear design involves a large number of parameters, and the influence of changing gear parameters on the gear train, the transmission system, and the whole equipment has to be taken into account, which increases the complexity of the optimization problem. This paper proposes applying functionally graded materials (FGMs) to gears and then conducting the optimization. According to the loading of the gears, the material distribution form of FGM gears is determined. Then, based on analysis of the performance parameters of FGMs and the practical working demands for gears, a multi-objective optimization model is formed. Finally, by using the goal driven optimization (GDO) method, the optimal material distribution is achieved, which minimizes gear weight and maximum deformation while keeping the maximum bending stress below the allowable stress. As an example, FGM is applied to an automotive transmission gear to illustrate the optimization design process, and the result shows that, while maintaining the normal working performance of the gear, the method greatly reduces gear weight. This research proposes an FGM gear design method that can largely reduce the weight of gears by optimizing the microscopic material parameters instead of changing the macroscopic dimensional parameters, which reduces the complexity of the gear weight optimization problem.
Mechanical Sun-Tracking Technique Implemented for Maximum ...
The solar panel is allowed to move from east to west and back with a maximum allowable angle of 180°. Its movement is in only one axis. The prototype built carries the panel from eastward to westward tracking the sun movement from ...
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Weight limits. 25.25 Section 25.25 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Flight General § 25.25 Weight limits. (a) Maximum weights....
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Weight limits. 27.25 Section 27.25 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Flight General § 27.25 Weight limits. (a) Maximum weight. The...
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Weight limits. 29.25 Section 29.25 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY ROTORCRAFT Flight General § 29.25 Weight limits. (a) Maximum weight....
On minimizing the maximum broadcast decoding delay for instantly decodable network coding
Douik, Ahmed S.
2014-09-01
In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policy that minimizes the sum decoding delay and our policy that reduces the max decoding delay. Simulation results show that our policy achieves a good balance among all the delay aspects in all situations and outperforms the sum decoding delay policy when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
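The greedy packet-selection step can be illustrated with a generic maximum-weight-clique heuristic: repeatedly take the heaviest vertex compatible with the clique built so far. This is not the authors' algorithm, and the toy graph and weights are invented.

```python
# Toy weighted graph (not an IDNC graph): vertices with weights and edges.
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("b", "d")}
weights = {"a": 2.0, "b": 5.0, "c": 4.0, "d": 3.0}

adj = {v: set() for v in weights}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def greedy_weight_clique(adj, weights):
    """Greedy heuristic for maximum-weight clique: pick the heaviest
    remaining vertex, then restrict candidates to its neighbours."""
    clique, candidates = [], set(weights)
    while candidates:
        v = max(candidates, key=lambda u: weights[u])
        clique.append(v)
        candidates &= adj[v]   # keep only vertices adjacent to all chosen
    return clique

print(sorted(greedy_weight_clique(adj, weights)))  # ['b', 'c', 'd']
```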
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Van Houtven, George; Johnson, F Reed; Kilambi, Vikram; Hauber, A Brett
2011-01-01
This study applies conjoint analysis to estimate health-related benefit-risk tradeoffs in a non-expected-utility framework. We demonstrate how this method can be used to test for and estimate nonlinear weighting of adverse-event probabilities, and we explore the implications of nonlinear weighting for maximum acceptable risk (MAR) measures of risk tolerance. We obtained preference data from 570 Crohn's disease patients using a web-enabled conjoint survey. Respondents were presented with choice tasks involving treatment options with different efficacy benefits and different mortality risks for 3 possible side effects. Using conditional logit maximum likelihood estimation, we estimate preference parameters using 3 models that allow for nonlinear preference weighting of risks: a categorical model, a simple-weighting model, and a rank dependent utility (RDU) model. For the latter two models we specify and jointly estimate 1- and 2-parameter probability weighting functions. Although the 2-parameter functions are more flexible, estimation of the 1-parameter functions generally performed better. Despite well-known conceptual limitations, the simple-weighting model allows us to estimate weighting function parameters that vary across the 3 risk types, and we find some evidence of statistically significant differences across risks. The parameter estimates from the RDU model with the single-parameter weighting function provide the most robust estimates of MAR. For an improvement in Crohn's symptom severity from moderate to mild, we estimate maximum 10-year mortality risk tolerances ranging from 2.6% to 7.1%. Our results provide further evidence that quantitative benefit-risk analysis used to evaluate medical interventions should account explicitly for nonlinear probability weighting of preferences.
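Single-parameter probability weighting functions of the kind the record alludes to are commonly taken to be the Tversky-Kahneman form; whether these authors used exactly that form is an assumption. A minimal sketch, with an illustrative gamma value:

```python
def tk_weight(p, gamma):
    """One-parameter Tversky-Kahneman probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    With gamma < 1 it overweights small probabilities and underweights
    mid-range probabilities; w(0) = 0 and w(1) = 1."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# With gamma < 1, a 1% mortality risk is weighted well above 1%,
# while a 50% probability is weighted below 50%:
print(round(tk_weight(0.01, 0.61), 3))
print(round(tk_weight(0.5, 0.61), 3))
```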
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. We suggest that, in the future, PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
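The extreme-value construction can be sketched under standard Gutenberg-Richter assumptions: if events above a minimum magnitude arrive at a Poisson rate and magnitudes follow an exponential (Gutenberg-Richter) law, the chance that the maximum in T years stays below m is a Poisson thinning probability. The rate, b-value, and magnitudes below are illustrative, not the authors' model.

```python
import math

def p_max_below(m, T, lam=10.0, b=1.0, m_min=4.0):
    """Probability that the maximum observed magnitude over T years
    stays below m, for a Gutenberg-Richter source with lam events
    of magnitude >= m_min per year and b-value b."""
    tail = 10.0 ** (-b * (m - m_min))   # P(single event has magnitude >= m)
    return math.exp(-lam * T * tail)    # no event exceeds m in T years

for m in (6.0, 7.0, 8.0):
    print(m, round(p_max_below(m, T=50.0), 3))  # 0.007, 0.607, 0.951
```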
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views, and then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
McConnel, Craig S; McNeil, Ashleigh A; Hadrich, Joleen C; Lombard, Jason E; Garry, Franklyn B; Heller, Jane
2017-08-01
Over the past 175 years, data related to human disease and death have progressed to a summary measure of population health, the Disability-Adjusted Life Year (DALY). As dairies have intensified there has been no equivalent measure of the impact of disease on the productive life and well-being of animals. The development of a disease-adjusted metric requires a consistent set of disability weights that reflect the relative severity of important diseases. The objective of this study was to use an international survey of dairy authorities to derive disability weights for primary disease categories recorded on dairies. National and international dairy health and management authorities were contacted through professional organizations, dairy industry publications and conferences, and industry contacts. Estimates of minimum, most likely, and maximum disability weights were derived for 12 common dairy cow diseases. Survey participants were asked to estimate the impact of each disease on overall health and milk production. Diseases were classified from 1 (minimal adverse effects) to 10 (death). The data was modelled using BetaPERT distributions to demonstrate the variation in these dynamic disease processes, and to identify the most likely aggregated disability weights for each disease classification. A single disability weight was assigned to each disease using the average of the combined medians for the minimum, most likely, and maximum severity scores. A total of 96 respondents provided estimates of disability weights. The final disability weight values resulted in the following order from least to most severe: retained placenta, diarrhea, ketosis, metritis, mastitis, milk fever, lame (hoof only), calving trauma, left displaced abomasum, pneumonia, musculoskeletal injury (leg, hip, back), and right displaced abomasum. The peaks of the probability density functions indicated that for certain disease states such as retained placenta there was a relatively narrow range of
Assessing allowable take of migratory birds
Runge, M.C.; Sauer, J.R.; Avery, M.L.; Blackwell, B.F.; Koneff, M.D.
2009-01-01
Legal removal of migratory birds from the wild occurs for several reasons, including subsistence, sport harvest, damage control, and the pet trade. We argue that harvest theory provides the basis for assessing the impact of authorized take, advance a simplified rendering of harvest theory known as potential biological removal as a useful starting point for assessing take, and demonstrate this approach with a case study of depredation control of black vultures (Coragyps atratus) in Virginia, USA. Based on data from the North American Breeding Bird Survey and other sources, we estimated that the black vulture population in Virginia was 91,190 (95% credible interval = 44,520–212,100) in 2006. Using a simple population model and available estimates of life-history parameters, we estimated the intrinsic rate of growth (rmax) to be in the range 7–14%, with 10.6% a plausible point estimate. For a take program to seek an equilibrium population size on the conservative side of the yield curve, the rate of take needs to be less than that which achieves a maximum sustained yield (0.5 × rmax). Based on the point estimate for rmax and using the lower 60% credible interval for population size to account for uncertainty, these conditions would be met if the take of black vultures in Virginia in 2006 was <3,533 birds. Based on regular monitoring data, allowable harvest should be adjusted annually to reflect changes in population size. To initiate discussion about how this assessment framework could be related to the laws and regulations that govern authorization of such take, we suggest that the Migratory Bird Treaty Act requires only that take of native migratory birds be sustainable in the long-term, that is, sustained harvest rate should be
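The potential biological removal calculation described above can be sketched as follows. The function name is ours, and the conservative population bound `n_min` is back-calculated from the reported threshold rather than quoted in the abstract:

```python
def allowable_take(r_max, n_min, recovery_factor=1.0):
    """Potential biological removal: keeping take below half of r_max
    times a conservative population estimate holds the population on
    the conservative side of the yield curve."""
    return recovery_factor * 0.5 * r_max * n_min

# r_max = 10.6% is the abstract's point estimate; n_min (the lower 60%
# credible bound on population size) is an assumed value chosen to
# reproduce the reported threshold.
take = allowable_take(r_max=0.106, n_min=66660)
print(round(take))  # 3533
```

With regular monitoring, re-running this calculation against an updated population estimate gives the annually adjusted allowable harvest the abstract recommends.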
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
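The Kirchhoff index of a single cycle (the simplest cactus block) can be computed directly, since the effective resistance between two vertices k hops apart on $C_n$ is the classical parallel combination $k(n-k)/n$. This is an illustrative sketch, not the paper's characterization of extremal cacti:

```python
def kirchhoff_index_cycle(n):
    """Kirchhoff index of the cycle C_n: sum the resistance distances
    r(i, j) = k(n - k)/n over all vertex pairs, where k is the hop
    distance between i and j along the cycle."""
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            k = min(j - i, n - (j - i))
            total += k * (n - k) / n
    return total

# Agrees with the known closed form (n^3 - n)/12 for cycles.
assert abs(kirchhoff_index_cycle(5) - (5**3 - 5) / 12) < 1e-9
```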
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders...
46 CFR 154.421 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.421 Section 154.421 Shipping COAST... § 154.421 Allowable stress. The allowable stress for the integral tank structure must meet the American Bureau of Shipping's allowable stress for the vessel's hull published in “Rules for Building and Classing...
46 CFR 154.440 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.440 Section 154.440 Shipping COAST... Tank Type A § 154.440 Allowable stress. (a) The allowable stresses for an independent tank type A must... Commandant (CG-522). (b) A greater allowable stress than required in paragraph (a)(1) of this section may be...
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
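The regularizer's central quantity, the mutual information between classification responses and true labels, can be estimated empirically from co-occurrence counts. This is a plug-in estimator sketch (function name ours), not the paper's entropy-estimation objective:

```python
from collections import Counter
from math import log2

def mutual_information(responses, labels):
    """Empirical mutual information I(response; label) in bits, from
    the joint and marginal counts of the two sequences."""
    n = len(labels)
    joint = Counter(zip(responses, labels))
    p_resp = Counter(responses)
    p_lab = Counter(labels)
    return sum((c / n) * log2(c * n / (p_resp[r] * p_lab[y]))
               for (r, y), c in joint.items())

# A perfect classifier on a balanced binary task attains the full label
# entropy of 1 bit; an uninformative one attains 0 bits.
assert abs(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]) - 1.0) < 1e-9
assert abs(mutual_information([0, 1, 0, 1], [0, 0, 1, 1])) < 1e-9
```

Maximizing this quantity over training responses is what drives the responses to be maximally informative about the true labels.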
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
2011-03-14
... weight rating (GAWR). Instead, buses were loaded to the maximum weight rating and a notation was made in... scientific data. FTA's earlier selection of the 150 pound passenger weight assumption was based on the number... modern scientific data, and provides flexibility and freedom of choice for the affected entities. The bus...
Clean Air Markets - Allowances Query Wizard
U.S. Environmental Protection Agency — The Allowances Query Wizard is part of a suite of Clean Air Markets-related tools that are accessible at http://camddataandmaps.epa.gov/gdm/index.cfm. The Allowances...
Allowance Holdings and Transfers Data Inventory
U.S. Environmental Protection Agency — The Allowance Holdings and Transfers Data Inventory contains measured data on holdings and transactions of allowances under the NOx Budget Trading Program (NBP), a...
46 CFR 154.428 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.428 Section 154.428 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS FOR... § 154.428 Allowable stress. The membrane tank and the supporting insulation must have allowable stresses...
46 CFR 154.447 - Allowable stress.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Allowable stress. 154.447 Section 154.447 Shipping COAST... Tank Type B § 154.447 Allowable stress. (a) An independent tank type B designed from bodies of revolution must have allowable stresses 3 determined by the following formulae: 3 See Appendix B for stress...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani [MV80]. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp [HK73] also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) [GKK10]. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over [MV80]. We use a Markov chain similar to the hard-core model for Glauber dynamics with fugacity parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs distribution [V99], to design a faster algori...
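The underlying chain can be sketched as heat-bath (Glauber) dynamics on the monomer-dimer model: repeatedly pick a random edge and toggle its membership in the current matching according to the fugacity. This is an illustrative sampler only; the fugacity, step count, and update rule here are generic choices, not the paper's algorithm or its analysis:

```python
import random

def glauber_matching(edges, fugacity=2.0, steps=20000, seed=1):
    """Heat-bath dynamics on matchings: pick a uniformly random edge;
    if it is in the matching, drop it with probability 1/(1 + fugacity);
    if both endpoints are free, add it with probability
    fugacity/(1 + fugacity). Higher fugacity favors larger matchings."""
    rng = random.Random(seed)
    matching, covered = set(), set()
    for _ in range(steps):
        u, v = e = rng.choice(edges)
        if e in matching:
            if rng.random() < 1 / (1 + fugacity):
                matching.discard(e)
                covered -= {u, v}
        elif u not in covered and v not in covered:
            if rng.random() < fugacity / (1 + fugacity):
                matching.add(e)
                covered |= {u, v}
    return matching

# Sample a matching of the path 0-1-2-3.
sample = glauber_matching([(0, 1), (1, 2), (2, 3)])
```

Every state visited by the chain is a valid matching, since edges are only added when both endpoints are uncovered.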
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determining the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) that are stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity and able to retrieve the desired pattern without distortions. PMID:28119595
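A minimal Hopfield sketch with Hebbian weights illustrates the objects under study; the `zero_diagonal` switch mirrors the generalization discussed above (keeping the diagonal of the connection matrix nonzero). All sizes and patterns here are illustrative:

```python
def hebbian_weights(patterns, zero_diagonal=True):
    """Hebbian connection matrix w[i][j] = (1/n) * sum_p p[i]*p[j].
    The generalized model above corresponds to zero_diagonal=False."""
    n = len(patterns[0])
    w = [[sum(p[i] * p[j] for p in patterns) / n for j in range(n)]
         for i in range(n)]
    if zero_diagonal:
        for i in range(n):
            w[i][i] = 0.0
    return w

def recall(w, state, steps=5):
    """Synchronous sign-dynamics updates of a +/-1 state vector."""
    n = len(state)
    for _ in range(n and steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

# A single stored pattern is recovered from a one-bit corruption.
pattern = [1, -1, 1, -1, 1, -1]
noisy = [1, 1, 1, -1, 1, -1]
assert recall(hebbian_weights([pattern]), noisy) == pattern
```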
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
The Role of Weight Shrinking in Large Margin Perceptron Learning
Panagiotakopoulos, Constantinos
2012-01-01
We introduce into the classical perceptron algorithm with margin a mechanism that shrinks the current weight vector as a first step of the update. If the shrinking factor is constant the resulting algorithm may be regarded as a margin-error-driven version of NORMA with constant learning rate. In this case we show that the allowed strength of shrinking depends on the value of the maximum margin. We also consider variable shrinking factors for which there is no such dependence. In both cases we obtain new generalizations of the perceptron with margin able to provably attain in a finite number of steps any desirable approximation of the maximal margin hyperplane. The new approximate maximum margin classifiers appear experimentally to be very competitive in 2-norm soft margin tasks involving linear kernels.
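The shrink-then-update mechanism can be sketched as follows. The shrink factor, learning rate, margin target, and stopping rule below are illustrative choices, not the paper's exact algorithm or its convergence guarantees:

```python
def shrinking_margin_perceptron(data, margin, shrink=0.99, eta=0.1, epochs=1000):
    """Margin perceptron whose update first shrinks the current weight
    vector, then adds the scaled example (a NORMA-like rule). An epoch
    with no margin errors terminates training."""
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= margin:
                w = [shrink * wi + eta * y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            return w
    return w

# A small separable toy set; the returned weights satisfy the margin.
data = [((1.0, 2.0), 1), ((2.0, 1.0), 1), ((-1.0, -1.5), -1), ((-2.0, -0.5), -1)]
w = shrinking_margin_perceptron(data, margin=0.5)
assert all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0.5 for x, y in data)
```

As the abstract notes, with a constant shrink factor the admissible strength of shrinking depends on the maximum margin of the data; too aggressive a shrink can prevent the weights from ever clearing the margin.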
Finding All Allowed Edges in a Bipartite Graph
Tassa, Tamir
2011-01-01
We consider the problem of finding all allowed edges in a bipartite graph $G=(V,E)$, i.e., all edges that are included in some maximum matching. We show that given any maximum matching in the graph, it is possible to perform this computation in linear time $O(n+m)$ (where $n=|V|$ and $m=|E|$). Hence, the time complexity of finding all allowed edges reduces to that of finding a single maximum matching, which is $O(n^{1/2}m)$ [Hopcroft and Karp 1973], or $O((n/\log n)^{1/2}m)$ for dense graphs with $m=\Theta(n^2)$ [Alt et al. 1991]. This time complexity improves upon that of the best known algorithms for the problem, which is $O(nm)$ ([Costa 1994] for bipartite graphs, and [Carvalho and Cheriyan 2005] for general graphs). Other algorithms for solving that problem are randomized algorithms due to [Rabin and Vazirani 1989] and [Cheriyan 1997], the runtime of which is $\tilde{O}(n^{2.376})$. Our algorithm, apart from being deterministic, improves upon that time complexity for bipartite graphs when $m=O(n^r)$ and $...
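The notion of an allowed edge can be checked by brute force: (u, v) lies in some maximum matching of size M iff forcing it (deleting u and v) leaves a graph whose maximum matching has size M − 1. This sketch uses simple augmenting paths and quadratically many matching computations; it illustrates the definition, not the linear-time algorithm of the paper:

```python
def max_matching(adj, left):
    """Kuhn's augmenting-path algorithm; adj maps each left vertex to
    its list of right neighbors. Returns the matching size."""
    match = {}
    def augment(u, seen):
        for v in adj.get(u, ()):
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    return sum(augment(u, set()) for u in left)

def allowed_edges(adj, left):
    """Brute-force test: (u, v) is allowed iff the graph without u and v
    still admits a matching of size M - 1."""
    m = max_matching(adj, left)
    allowed = []
    for u in adj:
        for v in adj[u]:
            rest = {x: [y for y in adj[x] if y != v] for x in adj if x != u}
            if max_matching(rest, [x for x in left if x != u]) == m - 1:
                allowed.append((u, v))
    return allowed

# On the path a-1, b-1, b-2, edge (b, 1) lies in no maximum matching.
print(allowed_edges({'a': [1], 'b': [1, 2]}, ['a', 'b']))  # [('a', 1), ('b', 2)]
```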
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions.
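The constraint-rule setup can be illustrated with the classic dice problem: maximize entropy over distributions on the faces 1..6 subject to a constraint on the mean, which yields an exponential-family solution whose Lagrange multiplier we find by bisection. The solver details below are our own illustrative choices:

```python
from math import exp

def maxent_dice(target_mean, beta_lo=-10.0, beta_hi=10.0, iters=100):
    """Maximum entropy distribution on die faces 1..6 subject to a mean
    constraint: p_k proportional to exp(-beta*k), with the Lagrange
    multiplier beta found by bisection."""
    faces = range(1, 7)
    def mean_for(beta):
        z = sum(exp(-beta * k) for k in faces)
        return sum(k * exp(-beta * k) for k in faces) / z
    for _ in range(iters):
        beta = (beta_lo + beta_hi) / 2
        if mean_for(beta) > target_mean:
            beta_lo = beta          # the mean decreases as beta grows
        else:
            beta_hi = beta
    z = sum(exp(-beta * k) for k in faces)
    return [exp(-beta * k) / z for k in faces]

# Constraining the mean to 4.5 tilts probability toward the high faces;
# the unconstrained mean 3.5 recovers the uniform distribution.
p = maxent_dice(4.5)
```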
20 CFR 617.47 - Moving allowance.
2010-04-01
... Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TRADE ADJUSTMENT ASSISTANCE... goods and personal effects of an individual and family, if any, shall not exceed the maximum number of... include the cost of insuring such goods and effects for their actual value or $10,000, whichever is...
45 CFR 74.27 - Allowable costs.
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Allowable costs. 74.27 Section 74.27 Public..., AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 74.27 Allowable costs. (a) For each kind of recipient, there is a particular set of Federal principles...
28 CFR 100.11 - Allowable costs.
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Allowable costs. 100.11 Section 100.11 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) COST RECOVERY REGULATIONS, COMMUNICATIONS ASSISTANCE FOR LAW ENFORCEMENT ACT OF 1994 § 100.11 Allowable costs. (a) Costs that are eligible...
20 CFR 633.303 - Allowable costs.
2010-04-01
... occupation trained for and at not less than the wage specified in the agreement. (g) Travel costs. (1) The... to the overall administrative cost ceiling. (i) Allowances and reimbursements for board and advisory... grantee per quarter. (2) Allowances and loss of wages. Any individual or family member who is a member of...
75 FR 4098 - Utility Allowance Adjustments
2010-01-26
... URBAN DEVELOPMENT Utility Allowance Adjustments AGENCY: Office of the Chief Information Officer, HUD... are required to advise the Secretary of the need for and request of a new utility allowance for... whether the information will have practical utility; (2) Evaluate the accuracy of the agency's estimate...
44 CFR 13.22 - Allowable costs.
2010-10-01
... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 13.22 Allowable costs. (a... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each...
32 CFR 33.22 - Allowable costs.
2010-07-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... Post-Award Requirements Financial Administration § 33.22 Allowable costs. (a) Limitation on use of... allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each kind of...
36 CFR 1207.22 - Allowable costs.
2010-07-01
... uniform cost accounting standards that comply with cost principles acceptable to the Federal agency. ... GOVERNMENTS Post-Award Requirements Financial Administration § 1207.22 Allowable costs. (a) Limitation on use... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each...
34 CFR 74.27 - Allowable costs.
2010-07-01
... Procedures or uniform cost accounting standards that comply with cost principles acceptable to ED. (b) The... OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial... principles for determining allowable costs. Allowability of costs are determined in accordance with the...
5 CFR 180.104 - Allowable claims.
2010-01-01
... mobile homes may be allowed only in cases of collision, theft, or vandalism. (5) Money. Claims for money... claimant's supervisor. (4) Mobile homes. Claims may be allowed for damage to or loss of mobile homes and their contents under the provisions of § 180.104(c)(2). Claims for structural damage to mobile...
38 CFR 49.27 - Allowable costs.
2010-07-01
... ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 49.27 Allowable...-Profit Organizations.” The allowability of costs incurred by institutions of higher education...
20 CFR 435.27 - Allowable costs.
2010-04-01
... AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, OTHER NON-PROFIT ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 435.27 Allowable costs. For each kind... Organizations.” (c) Allowability of costs incurred by institutions of higher education is determined...
28 CFR 70.27 - Allowable costs.
2010-07-01
... AND AGREEMENTS (INCLUDING SUBAWARDS) WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 70.27 Allowable costs. (a... Organizations.” The allowability of costs incurred by institutions of higher education is determined...
15 CFR 14.27 - Allowable costs.
2010-01-01
... GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, OTHER NON-PROFIT, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 14.27 Allowable costs. For each kind of... Organizations.” The allowability of costs incurred by institutions of higher education is determined...
24 CFR 17.43 - Allowable claims.
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Allowable claims. 17.43 Section 17.43 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development..., superior authority. (6) Clothing and accessories. Claims may be allowed for damage to, or loss of, clothing...
29 CFR 1470.22 - Allowable costs.
2010-07-01
... to that circular 48 CFR part 31. Contract Cost Principles and Procedures, or uniform cost accounting... grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set of Federal principles for determining allowable costs. Allowable costs will be determined in accordance...
45 CFR 2541.220 - Allowable costs.
2010-10-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... the grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set of Federal principles for determining allowable costs. Allowable costs will be determined...
Bachoc, Christine; Cohen, Gerard; Sole, Patrick; Tchamkerten, Aslan
2010-01-01
The maximum size of a binary code is studied as a function of its length N, minimum distance D, and minimum codeword weight W. This function B(N,D,W) is first characterized in terms of its exponential growth rate in the limit as N tends to infinity for fixed d=D/N and w=W/N. The exponential growth rate of B(N,D,W) is shown to be equal to the exponential growth rate of A(N,D) for w <= 1/2, and equal to the exponential growth rate of A(N,D,W) for 1/2< w <= 1. Second, analytic and numerical upper bounds on B(N,D,W) are derived using the semidefinite programming (SDP) method. These bounds yield a non-asymptotic improvement of the second Johnson bound and are tight for certain values of the parameters.
Keynes, family allowances and Keynesian economic policy
Pressman, Steven
2014-01-01
This paper provides a short history of family allowances and documents the fact that Keynes supported family allowances as early as the 1920s, continuing through the 1930s and early 1940s. Keynes saw this policy as a way to help households raise their children and also as a way to increase consumption without reducing business investment. The paper goes on to argue that a policy of family allowances is consistent with Keynesian economics. Finally, the paper uses the Luxembourg Income Study to...
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r² = 0.88). Despite exceeding unity in those directions, the normalized moments were sufficiently consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
An electronic voting system supporting vote weights
Eliasson, C.; Zúquete, A.
2006-01-01
Typically each voter contributes one vote in an election, but in some elections voters can have different weights associated with their votes. In this paper we provide a solution for allowing an arbitrary number of weights and weight values to be used in an electronic voting system. We chose REVS, Robust Electronic Voting System, a voting system designed to support Internet voting processes, as the starting point for studying the introduction of vote weights. To the best of our...
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that allows an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
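The core idea, maximizing a Poisson likelihood and iterating a fixed-point equation for the line flux, can be sketched as follows. This is a generic illustration of the approach, not CORA's actual code: the multiplicative update and the toy Gaussian line are assumptions. Setting the derivative of the Poisson log-likelihood to zero for a model m_i = A*p_i + b_i gives the stationarity condition sum(n_i p_i / m_i) = sum(p_i), which the update below iterates.

```python
import numpy as np

def ml_line_flux(counts, profile, background, iters=500):
    """Estimate a line amplitude A by maximizing the Poisson likelihood of
    the model m_i = A*profile_i + background_i via the fixed-point update
    A <- A * sum(n_i p_i / (A p_i + b_i)) / sum(p_i)."""
    A = max(float(counts.sum() - background.sum()), 1.0)  # rough initial guess
    norm = float(profile.sum())
    for _ in range(iters):
        A = A * float(np.sum(counts * profile / (A * profile + background))) / norm
    return A

# toy example: Gaussian line (true amplitude 8) on a flat background of 2 counts
x = np.arange(50)
profile = np.exp(-0.5 * ((x - 25) / 2.0) ** 2)
background = np.full(50, 2.0)
rng = np.random.default_rng(0)
counts = rng.poisson(8.0 * profile + background)
A_hat = ml_line_flux(counts, profile, background)
```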
Sign Patterns That Allow the Given Matrix
邵燕灵; 孙良
2003-01-01
Let P be a property referring to a real matrix. For a sign pattern A, if there exists a real matrix B in the qualitative class of A such that B has property P, then we say that A allows P. Three cases, in which A allows an M-matrix, an inverse M-matrix and a P0-matrix respectively, are considered. The complete characterizations are obtained.
On the 2m-variable symmetric Boolean functions with maximum algebraic immunity
QU LongJiang; LI Chao
2008-01-01
The properties of the 2m-variable symmetric Boolean functions with maximum algebraic immunity are studied in this paper. Their value vectors, algebraic normal forms, algebraic degrees and weights are all obtained. Finally, some necessary conditions for a symmetric Boolean function on an even number of variables to have maximum algebraic immunity are introduced.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
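The Toeplitz/Levinson step mentioned above can be sketched with the classic Levinson-Durbin recursion for the prediction-error filter. This is a generic textbook version, not the authors' implementation; note the reflection coefficient has magnitude less than 1 for a valid autocorrelation, which is the stability property the abstract refers to.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations r[j] + sum_i a[i]*r[j-i] = 0
    (j = 1..order) for the prediction-error filter a, returning the
    coefficients and the final prediction-error power."""
    a = np.zeros(order)
    err = float(r[0])
    for m in range(order):
        acc = r[m + 1] + sum(a[i] * r[m - i] for i in range(m))
        k = -acc / err                  # reflection coefficient, |k| < 1
        a_prev = a[:m].copy()
        for i in range(m):              # order-update of the filter taps
            a[i] = a_prev[i] + k * a_prev[m - 1 - i]
        a[m] = k
        err *= (1.0 - k * k)            # error power shrinks at each order
    return a, err

# AR(1) autocorrelation r[j] = 0.5**j has the exact one-tap solution a = [-0.5]
coeffs, err = levinson_durbin(np.array([1.0, 0.5, 0.25]), 2)
```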
Regulatory treatment of allowances and compliance costs
Rose, K. [National Regulatory Research Institute, Columbus, OH (United States)
1993-07-01
The Clean Air Act Amendments of 1990 (CAAA) established a national emission allowance trading system, a market-based form of environmental regulation designed to reduce and limit sulfur dioxide emissions. However, the allowance trading system is being applied primarily to an economically regulated electric utility industry. The combining of this new form of environmental regulation with economic regulation of electric utilities has raised a number of questions, including what the role of the federal and state utility regulating commissions should be and how their actions will affect the decision-making process of the utilities and the allowance market. There are several dimensions to the regulatory problems that commissions face. Allowances and utility compliance expenditures have implications for least-cost/IRP (integrated resource planning), prudence review procedures, holding company and multistate utility regulation, and ratemaking treatment. The focus of this paper is on the ratemaking treatment. The following topics are covered: ratemaking treatment of allowances and compliance costs; traditional cost-recovery mechanisms; limitations of the traditional approach; the traditional approach and the allowance trading market; market-based cost-recovery mechanisms; methods of determining the benchmark; determining the split between ratepayers and the utility; other regulatory approaches; limitations of incentive mechanisms.
Generalized Relativistic Wave Equations with Intrinsic Maximum Momentum
Ching, Chee Leong
2013-01-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wavefunctions of the one-dimensional Klein-Gordon and Dirac equations with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of the scalar potential is stronger than that of the vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of the position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Generalized relativistic wave equations with intrinsic maximum momentum
Ching, Chee Leong; Ng, Wei Khim
2014-05-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wave functions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential is stronger than vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
A maximum in the strength of nanocrystalline copper
Schiøtz, Jakob; Jacobsen, Karsten Wedel
2003-01-01
We used molecular dynamics simulations with system sizes up to 100 million atoms to simulate plastic deformation of nanocrystalline copper. By varying the grain size between 5 and 50 nanometers, we show that the flow stress, and thus the strength, exhibits a maximum at a grain size of 10 to 15 nanometers. This maximum is due to a shift in the microscopic deformation mechanism from dislocation-mediated plasticity in the coarse-grained material to grain-boundary sliding in the nanocrystalline region. The simulations allow us to observe the mechanisms behind the grain-size dependence...
GROWTH ANALYSIS AND ASSESSMENT OF PIG’S BIOLOGICAL MAXIMUM
Dragutin Vincek
2010-06-01
The aim of this study was to determine a mathematical model which can be used to describe the growth of domestic animals, to predict the optimal time of slaughter/weight or the development of body parts or tissues, and to estimate the biological maximum. The study was conducted on 60 pigs (30 barrows and 30 gilts) in the interval between the age of 49 and 215 days. By applying the generalized logistic function, the growth of live weight and tissues was described. The observed gilts reached the inflection point in approximately 121 days (I = 70.7 kg). The point at which the interval of intensive growth starts was at the age of approximately 42 days (TB = 17.35 kg), and the saturation point the pigs reached at the age of 200.5 days (TC = 126.74 kg). The estimated biological maximum weight of gilts was 179.79 kg. The barrows reached the inflection point in approximately 149 days (I = 92.2 kg). The point at which the interval of intensive growth starts was estimated at the age of approximately 52 days (TB = 22.93 kg), and the saturation point the barrows reached at the age of 245 days (TC = 164.8 kg). The estimated biological maximum weight of barrows was 233.25 kg. Muscle tissue of gilts reached the inflection point (I = 28.46 kg) in approximately 110 days. The point at which the interval of intensive growth of muscle tissue starts (TB = 6.06 kg) was estimated at approximately 53 days, and the saturation point of growth (TC = 52.25 kg) the muscle tissue of gilts reached at the age of 162 days. The estimated maximum biological growth of muscle tissue in gilts was 75.79 kg. The muscle tissue of barrows reached the inflection point (I = 28.78 kg) in approximately 118 days, and the point at which the interval of intensive growth starts (TB = 6.36 kg) at the age of approximately 35 days. The saturation point of muscle tissue growth in barrows (TC = 52.51 kg) was reached at the age of 202 days. The estimated maximum biological growth of muscle tissue in barrows was 75.74 kg. The
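A generalized logistic (Richards) curve of the kind used above can be written down compactly. The parameterization below is one common form; the rate and shape values are illustrative assumptions, not the authors' fitted parameters, though the asymptote corresponds to the reported biological maximum for gilts.

```python
import numpy as np

def richards(t, A, k, t_i, v):
    """Generalized logistic (Richards) growth curve: A is the upper asymptote
    (the biological maximum), k the growth rate, t_i the inflection time and
    v a shape parameter controlling where the inflection falls."""
    return A * (1.0 + v * np.exp(-k * (t - t_i))) ** (-1.0 / v)

def inflection_weight(A, v):
    """Weight reached at the inflection point; reduces to A/2 when v = 1."""
    return A * (1.0 + v) ** (-1.0 / v)

# with asymptote A = 179.79 kg (gilts' estimated biological maximum), a shape
# parameter v < 1 puts the inflection below A/2; v = 0.2 lands near the
# reported 70.7 kg (v chosen here for illustration, not fitted)
w_i = inflection_weight(179.79, 0.2)
```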
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
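The "differentiate power and set dP/dV = 0" step can be illustrated with a simple single-diode panel model. The model and its parameters below are generic assumptions, not the project's measured data; the maximum is located by a dense scan of P = V*I, the numerical equivalent of solving dP/dV = 0.

```python
import numpy as np

def max_power_point(i_sc=5.0, i_0=1e-9, v_t=0.026, n_cells=36):
    """Locate the maximum power point of an idealized single-diode panel,
    I(V) = I_sc - I_0*(exp(V/(n*Vt)) - 1), by scanning P = V*I over the
    interval from short circuit (V = 0) to roughly open circuit."""
    v_oc = n_cells * v_t * np.log(i_sc / i_0)        # approximate open-circuit voltage
    v = np.linspace(0.0, v_oc, 10000)
    i = i_sc - i_0 * (np.exp(v / (n_cells * v_t)) - 1.0)
    p = v * i
    j = int(np.argmax(p))
    return v[j], i[j], p[j]

v_mp, i_mp, p_mp = max_power_point()
```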
McNamara, John M; Swalm, Ricky L; Stearne, David J; Covassin, Tracey M
2008-07-01
The purpose of this study was to determine how a traditional weight training class compared to nontraditional classes that were heavily laden with technology. Could students learn resistance exercises by watching video demonstrations over the Internet? Three university weight training classes, each lasting 16 weeks, were compared. Each class had the same curriculum and workout requirements but different attendance requirements. The online group made extensive use of the Internet and was allowed to complete the workouts on their own at any gym that was convenient for them. Seventy-nine college-aged students were randomized into 3 groups: traditional (n = 27), hybrid (n = 25), and online (n = 27). They completed pretest and posttest measures on upper-body strength (i.e., bench press), lower-body strength (i.e., back squat), and knowledge (i.e., written exam). The results indicated that all 3 groups showed significant improvement in knowledge (p students to attend class and may have resulted in significantly lower scores on the bench press (p motivation, low accountability, and the possibility that the self-reported workouts were not accurate. These results suggest that there is a limit to how much technology can be used in a weight training class. If this limit is exceeded, some type of monitoring system appears necessary to ensure that students are actually completing their workouts.
Allowable levels of take for the trade in Nearctic songbirds
Johnson, Fred A.; Walters, Matthew A.H.; Boomer, G. Scott
2012-01-01
The take of Nearctic songbirds for the caged-bird trade is an important cultural and economic activity in Mexico, but its sustainability has been questioned. We relied on the theta-logistic population model to explore options for setting allowable levels of take for 11 species of passerines that were subject to legal take in Mexico in 2010. Because estimates of population size necessary for making periodic adjustments to levels of take are not routinely available, we examined the conditions under which a constant level of take might contribute to population depletion (i.e., a population below its level of maximum net productivity). The chance of depleting a population is highest when levels of take are based on population sizes that happen to be much lower or higher than the level of maximum net productivity, when environmental variation is relatively high and serially correlated, and when the interval between estimation of population size is relatively long (≥5 years). To estimate demographic rates of songbirds involved in the Mexican trade we relied on published information and allometric relationships to develop probability distributions for key rates, and then sampled from those distributions to characterize the uncertainty in potential levels of take. Estimates of the intrinsic rate of growth (r) were highly variable, but median estimates were consistent with those expected for relatively short-lived, highly fecund species. Allowing for the possibility of nonlinear density dependence generally resulted in allowable levels of take that were lower than would have been the case under an assumption of linearity. Levels of take authorized by the Mexican government in 2010 for the 11 species we examined were small in comparison to relatively conservative allowable levels of take (i.e., those intended to achieve 50% of maximum sustainable yield). However, the actual levels of take in Mexico are unknown and almost certainly exceed the authorized take. Also, the take
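The theta-logistic model named above can be sketched in a few lines; the parameter values in the example are illustrative, not the paper's estimates. With theta = 1 the model reduces to the ordinary logistic, maximum net productivity occurs at N = K/2, and a constant take equal to r*K/4 holds a population exactly at that level; any larger constant take depletes it.

```python
import numpy as np

def theta_logistic(n0, r, K, theta, take, years):
    """Project a theta-logistic population with a constant annual take:
    N[t+1] = N[t] + N[t]*r*(1 - (N[t]/K)**theta) - take, floored at zero."""
    n = float(n0)
    traj = [n]
    for _ in range(years):
        n = max(n + n * r * (1.0 - (n / K) ** theta) - take, 0.0)
        traj.append(n)
    return np.array(traj)

# with theta = 1, r = 0.2, K = 1000: MSY-level take r*K/4 = 50 holds N at K/2
steady = theta_logistic(500.0, 0.2, 1000.0, 1.0, 50.0, 100)
```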
45 CFR 34.4 - Allowable claims.
2010-10-01
... government-owned or operated parking lot or garage incident to employment. This subsection does not include... amount allowed is the value of the vehicle at the time of loss as determined by the National Automobile.... Damage or loss of personal property, including baggage and household items, while being transported by...
45 CFR 2543.27 - Allowable costs.
2010-10-01
... Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT... Organizations.” The allowability of costs incurred by institutions of higher education is determined in...
34 CFR 80.22 - Allowable costs.
2010-07-01
... CFR part 31. Contract Cost Principles and Procedures, or uniform cost accounting standards that comply... COOPERATIVE AGREEMENTS TO STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 80.22... kind of organization, there is a set of Federal principles for determining allowable costs. For...
13 CFR 143.22 - Allowable costs.
2010-01-01
... to that circular 48 CFR part 31. Contract Cost Principles and Procedures, or uniform cost accounting... Financial Administration § 143.22 Allowable costs. (a) Limitation on use of funds. Grant funds may be used... grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set...
38 CFR 43.22 - Allowable costs.
2010-07-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... Requirements Financial Administration § 43.22 Allowable costs. (a) Limitation on use of funds. Grant funds may... the grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is...
22 CFR 135.22 - Allowable costs.
2010-04-01
... Procedures, or uniform cost accounting standards that comply with cost principles acceptable to the Federal... AGREEMENTS TO STATE AND LOCAL GOVERNMENTS Post-Award Requirements Financial Administration § 135.22 Allowable... principles. For each kind of organization, there is a set of Federal principles for determining...
40 CFR 31.22 - Allowable costs.
2010-07-01
... accounting standards that comply with cost principles acceptable to the Federal agency. ... Requirements Financial Administration § 31.22 Allowable costs. (a) Limitation on use of funds. Grant funds may... the grantee or sub-grantee. (b) Applicable cost principles. For each kind of organization, there is...
45 CFR 92.22 - Allowable costs.
2010-10-01
... to that circular 48 CFR Part 31. Contract Cost Principles and Procedures, or uniform cost accounting... Financial Administration § 92.22 Allowable costs. (a) Limitation on use of funds. Grant funds may be used... grantee or subgrantee. (b) Applicable cost principles. For each kind of organization, there is a set...
7 CFR 550.25 - Allowable costs.
2010-01-01
... Regulations of the Department of Agriculture (Continued) AGRICULTURAL RESEARCH SERVICE, DEPARTMENT OF... Financial Management § 550.25 Allowable costs. For each kind of Cooperator, there is a set of Federal... Acquisition Regulation (FAR) at 48 CFR part 31. Program Management ...
22 CFR 145.27 - Allowable costs.
2010-04-01
... Relations DEPARTMENT OF STATE CIVIL RIGHTS GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 145...-Profit Organizations.” The allowability of costs incurred by institutions of higher education...
22 CFR 518.27 - Allowable costs.
2010-04-01
... INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 518.27 Allowable costs. For each kind of recipient, there is a set of... by institutions of higher education is determined in accordance with the provisions of OMB Circular...
36 CFR 1210.27 - Allowable costs.
2010-07-01
... RULES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Financial and Program Management § 1210.27 Allowable costs. For each kind of recipient, there is a set of Federal principles...
Judicial Deference Allows European Consensus to Emerge
Dothan, Shai
2017-01-01
conceived as competing doctrines: the more there is of one, the less there is of another. This paper suggests a novel rationale for the emerging consensus doctrine: the doctrine can allow the ECHR to make good policies by drawing on the independent decision-making of many similar countries. In light of that...
77 FR 34218 - Clothing Allowance; Correction
2012-06-11
... construed to impose a restriction that VA did not intend. This document corrects that error. DATES: This... Service, Veterans Benefits Administration, Department of Veterans Affairs, 810 Vermont Avenue NW... medication would be eligible for a clothing allowance for each such appliance or medication if each...
49 CFR 266.11 - Allowable costs.
2010-10-01
... Management Circular 74-4; and costs of projects eligible under § 266.7 of this part. All allowable costs shall be authorized by a fully executed grant agreement. A State may incur costs prior to the execution... need to incur costs prior to the execution of a grant agreement, has authorized the costs in writing...
33 CFR 136.211 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.211 Section 136.211 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.211...
33 CFR 136.205 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.205 Section 136.205 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.205...
33 CFR 136.241 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.241 Section 136.241 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.241...
33 CFR 136.223 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.223 Section 136.223 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.223...
33 CFR 136.217 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.217 Section 136.217 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.217...
33 CFR 136.235 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.235 Section 136.235 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.235...
33 CFR 136.229 - Compensation allowable.
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Compensation allowable. 136.229 Section 136.229 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... PROCEDURES; DESIGNATION OF SOURCE; AND ADVERTISEMENT Procedures for Particular Claims § 136.229...
50 CFR 80.15 - Allowable costs.
2010-10-01
...) FINANCIAL ASSISTANCE-WILDLIFE SPORT FISH RESTORATION PROGRAM ADMINISTRATIVE REQUIREMENTS, PITTMAN-ROBERTSON WILDLIFE RESTORATION AND DINGELL-JOHNSON SPORT FISH RESTORATION ACTS § 80.15 Allowable costs. (a) What are... designed to include purposes other than those eligible under either the Dingell-Johnson Sport Fish...
43 CFR 12.62 - Allowable costs.
2010-10-01
... Public Lands: Interior Office of the Secretary of the Interior ADMINISTRATIVE AND AUDIT REQUIREMENTS AND COST PRINCIPLES FOR ASSISTANCE PROGRAMS Uniform Administrative Requirements for Grants and Cooperative... increment above allowable costs) to the grantee or subgrantee. (b) Applicable cost principles. For each...
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CC used in the design of an SFCL can be determined.
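The final design step, sizing the conductor length from the maximum permissible voltage, is simple division. The 1 kV design voltage below is a made-up figure for illustration; the V/cm values are the ones reported for the 100 ms quench duration.

```python
def min_tape_length_cm(design_voltage_v, max_v_per_cm):
    """Minimum CC length (cm) keeping the per-unit-length voltage during
    a quench at or below the maximum permissible value."""
    return design_voltage_v / max_v_per_cm

# reported maximum permissible voltages at 100 ms quench duration
l_sjtu = min_tape_length_cm(1000.0, 0.72)    # SJTU CC
l_amsc12 = min_tape_length_cm(1000.0, 0.52)  # 12 mm AMSC CC
l_amsc4 = min_tape_length_cm(1000.0, 1.2)    # 4 mm AMSC CC
```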
Weighted constraints in generative linguistics.
Pater, Joe
2009-08-01
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
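The weighted-constraint evaluation at the heart of HG can be sketched in a few lines; the constraints, weights and candidate forms below are hypothetical. Each candidate's harmony is the negative weighted sum of its constraint violations, and the candidate with maximal harmony is the output.

```python
def hg_winner(candidates, weights):
    """Harmonic Grammar evaluation: candidates maps each output form to its
    vector of constraint-violation counts; the form with the greatest
    harmony (negative weighted violation sum) wins."""
    def harmony(violations):
        return -sum(w * v for w, v in zip(weights, violations))
    return max(candidates, key=lambda form: harmony(candidates[form]))

# hypothetical tableau: constraint weights [3.0, 2.0]; "ta" violates the
# weight-2 constraint once, "tat" violates the weight-3 constraint once
tableau = {"ta": [0, 1], "tat": [1, 0]}
winner = hg_winner(tableau, [3.0, 2.0])
```

Note that with numerical weights, two violations of the weight-2 constraint (harmony -4) would lose to one violation of the weight-3 constraint: the cumulative "ganging-up" effect that distinguishes HG from OT's strict ranking.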
ON A GENERALIZATION OF THE MAXIMUM ENTROPY THEOREM OF BURG
JOSÉ MARCANO
2017-01-01
In this article we introduce some matrix manipulations that allow us to obtain a version of the original Christoffel-Darboux formula, which is of interest in many applications of linear algebra. Using these matrix developments and Jensen's inequality, we obtain the main result of this proposal, which is the generalization of the maximum entropy theorem of Burg for multivariate processes.
Realization of allowable generalized quantum gates
无
2010-01-01
The most general duality gates were introduced by Long, Liu and Wang and named allowable generalized quantum gates (AGQGs, for short). By definition, an allowable generalized quantum gate has the form U = ∑_{k=0}^{d-1} c_k U_k, where the U_k's are unitary operators on a Hilbert space H and the coefficients c_k's are complex numbers with |∑_{k=0}^{d-1} c_k| ≤ 1 and |c_k| ≤ 1 for all k = 0, 1, ..., d-1. In this paper we prove that an AGQG U = ∑_{k=0}^{d-1} c_k U_k is realizable, i.e. there are two d by d unitary matrices W and V such that c_k = W_{0k} V_{k0} (0 ≤ k ≤ d-1).
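The converse direction of the realizability condition can be checked numerically: if c_k = W_{0k} V_{k0} for unitary W and V, then ∑_k c_k is the (0,0) entry of the unitary product WV, so |∑_k c_k| ≤ 1 and each |c_k| ≤ 1 hold automatically. A quick sketch, with random unitaries generated by QR decomposition (an illustration, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    """Random d x d unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # normalize column phases

d = 4
W, V = random_unitary(d), random_unitary(d)
c = np.array([W[0, k] * V[k, 0] for k in range(d)])
# sum_k c_k = sum_k W[0,k] V[k,0] = (W @ V)[0,0], an entry of a unitary matrix,
# hence |sum_k c_k| <= 1; each |c_k| <= 1 since unitary entries have modulus <= 1
```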
Making It Personal: Per Capita Carbon Allowances
Fawcett, Tina; Hvelplund, Frede; Meyer, Niels I
2009-01-01
The chapter highlights the importance of introducing new, efficient schemes for mitigation of global warming. One such scheme is Personal Carbon Allowances (PCA), whereby individuals are allotted a tradable ration of CO2 emissions per year. This chapter reviews the fundamentals of PCA and analyzes its merits and problems. The United Kingdom and Denmark have been chosen as case studies because the energy situation and the institutional setup are quite different between the two countries.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation contributed by a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by means of generator functions. Any divergence measure in the class is separated into the difference between cross entropy and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
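As a concrete instance of the entropy family discussed above, Tsallis entropy and its q → 1 Shannon limit can be computed directly. The definition below is the standard one and is independent of the paper's particular notation.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1); as q -> 1 it
    tends to the Boltzmann-Gibbs-Shannon entropy -sum_i p_i * log(p_i)."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))  # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

uniform = [0.25, 0.25, 0.25, 0.25]
s2 = tsallis_entropy(uniform, 2.0)         # 1 - 4*(1/16) = 0.75
s_shannon = tsallis_entropy(uniform, 1.0)  # log 4
```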
... weight) weight loss. As in the treatment with hyperthyroidism, treatment of the abnormal state of hypothyroidism with thyroid ...
Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you ... caused by obesity. There are different types of weight loss surgery. They often limit the amount of food ...
Irani Lauer Lellis
2012-06-01
The practice of giving an allowance is used by many parents in different parts of the world and can contribute to the economic education of children. This study aimed to investigate the purposes of the allowance, with 32 parents of varying incomes. We used the focus group technique and the Alceste software to analyze the data. The results involved two classes related to the process of using the allowance. These classes covered aspects of the socialization and educational role of the allowance, which serves as an instrument of reward but sometimes encourages bad habits in children. The parents' justifications concerning the amount of money to be given to the children, and when to stop giving an allowance, were also highlighted. Keywords: allowance; economic socialization; parenting practices.
Dynamical Systems On Weighted Lattices: General Theory
Maragos, Petros
2016-01-01
In this work a theory is developed for unifying large classes of nonlinear discrete-time dynamical systems obeying a superposition of a weighted maximum or minimum type. The state vectors and input-output signals evolve on nonlinear spaces which we call complete weighted lattices and include as special cases the nonlinear vector spaces of minimax algebra. Their algebraic structure has a polygonal geometry. Some of the special cases unified include max-plus, max-product, and probabilistic...
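The max-plus special case mentioned above has a particularly compact form: matrix-vector "multiplication" replaces sum-of-products with max-of-sums, giving dynamics x[t+1] = A ⊗ x[t]. A minimal sketch:

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus product (A ⊗ x)_i = max_j (A[i, j] + x[j]) — the weighted
    'superposition of maximum type' in these dynamical systems."""
    return np.max(A + x[np.newaxis, :], axis=1)

A = np.array([[0.0, 2.0],
              [1.0, 0.0]])
x0 = np.array([0.0, 0.0])
x1 = maxplus_matvec(A, x0)  # one step of x[t+1] = A ⊗ x[t]
```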
Modal Transition Systems with Weight Intervals
Juhl, Line; Larsen, Kim Guldstrand; Srba, Jiri
2012-01-01
We propose weighted modal transition systems, an extension to the well-studied specification formalism of modal transition systems that allows expressing both required and optional behaviours of their intended implementations. In our extension we decorate each transition with a weight interval th...
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, the retrieval of observational spatial and temporal distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can be observed also at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
The Effect of Sunspot Weighting
Svalgaard, Leif; Cortesi, Sergio
2015-01-01
Waldmeier in 1947 introduced a weighting (on a scale from 1 to 5) of the sunspot count made at Zurich and its auxiliary station Locarno, whereby larger spots were counted more than once. This counting method inflates the relative sunspot number over that which corresponds to the scale set by Wolfer and Brunner. Svalgaard re-counted some 60,000 sunspots on drawings from the reference station Locarno and determined that the number of sunspots reported was 'over-counted' by 44% on average, leading to an inflation (measured by a weight factor) in excess of 1.2 for high solar activity. In a double-blind parallel counting by the Locarno observer Cagnotti, we determined that Svalgaard's count closely matches Cagnotti's, allowing us to determine the daily weight factor since 2003 (and sporadically before). We find that a simple empirical equation fits the observed weight factors well, and use that fit to estimate the weight factor for each month back to the introduction of weighting in 1947 and thus to be ab...
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
2011-03-24
... http://www.gsa.gov/relocationpolicy . Dated: March 21, 2011. Janet Dobbs, Director, Office of Travel... ADMINISTRATION Federal Travel Regulation (FTR); Relocation Allowances-- Relocation Income Tax Allowance (RITA... effective March 24, 2011. FOR FURTHER INFORMATION CONTACT: Mr. Ed Davis, Office of Governmentwide Policy...
2011-06-06
... Duty (TDY) Travel Allowances (Taxes); Relocation Allowances (Taxes) AGENCY: Office of Governmentwide... extended temporary duty (TDY) benefits to correct errors and to align that process with the proposed... incurred by employees as a result of relocation and to reimburse ``all'' of the taxes imposed on any...
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Weight loss, weight regain and bone health.
Pines, Amos
2012-08-01
The ideal body image for women these days is being slim but, in the real world, obesity becomes a major health problem even in the developing countries. Overweight, but also underweight, may have associated adverse outcomes in many bodily systems, including the bone. Only a few studies have investigated the consequences of intentional weight loss, then weight regain, on bone metabolism and bone density. It seems that the negative impact of bone loss is not reversed when weight partially rebounds following the end of active intervention programs. Thus the benefits and risks of any weight loss program should be addressed individually, and monitoring of bone parameters is recommended.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
Full Text Available The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Fact Sheet Proven Weight Loss Methods What can weight loss do for you? Losing weight can improve your health in a number of ways. ... limiting calories) usually isn’t enough to cause weight loss. But exercise plays an important part in helping ...
Blasques, José Pedro Albergaria Amaral; Stolpe, Mathias
2011-01-01
and cross section geometry. The resulting finite element matrices are significantly smaller than those obtained using equivalent finite element models. This modeling approach is therefore an attractive alternative in computationally intensive applications at the conceptual design stage where the focus...
Calculations of Maximum A-Weighted Sound Levels (dBA) Resulting from Civil Aircraft Operations.
1978-06-01
Department of Transportation, Federal Aviation Administration, Office of Environmental Quality. [The remainder of the scanned abstract is illegible; the legible fragments concern sound levels and loudness of illustrative noises in indoor and outdoor environments, and a table of outdoor and indoor speech-interference levels.]
Mind the edge! The role of adjacency matrix degeneration in maximum entropy weighted network models
Sagarra, Oleguer; Díaz-Guilera, Albert
2015-01-01
Complex network null models based on entropy maximization are becoming a powerful tool to characterize and analyze data from real systems. However, it is not easy to extract good and unbiased information from these models: A proper understanding of the nature of the underlying events represented in them is crucial. In this paper we emphasize this fact stressing how an accurate counting of configurations compatible with given constraints is fundamental to build good null models for the case of networks with integer valued adjacency matrices constructed from aggregation of one or multiple layers. We show how different assumptions about the elements from which the networks are built give rise to distinctively different statistics, even when considering the same observables to match those of real data. We illustrate our findings by applying the formalism to three datasets using an open-source software package accompanying the present work and demonstrate how such differences are clearly seen when measuring networ...
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12] which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method for solving partial differential equations in large-deformation as well as crack-propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered-data approximation has departed from the usual way of defining the approximation functions, resulting in maximum entropy approximants. In addition, an objective functional that controls the degree of locality yields local maximum entropy approximants. These are based on an information-theoretic Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of a locality parameter and a prior (weight) function. The proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the choice of the locality parameter, which defines the degree of locality, and of the priors, Gaussian, cubic spline and quartic spline functions, and their effect on the behavior of local maximum entropy approximants.
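Local maximum entropy approximants of this kind are computed pointwise by solving a small convex dual problem. The sketch below is a generic 1-D textbook construction, not the authors' implementation: it assumes a Gaussian prior with β as the locality parameter, and finds the dual variable by Newton iteration so that the shape functions satisfy partition of unity and reproduce the evaluation point.

```python
import numpy as np

def lme_shape_functions(nodes, x, beta):
    """1-D local maximum entropy approximants with Gaussian prior exp(-beta*(x_a - x)^2).

    beta is the locality parameter: larger beta gives more local basis functions.
    """
    dx = nodes - x
    lam = 0.0
    for _ in range(50):                  # Newton iteration on the dual variable
        p = np.exp(-beta * dx**2 + lam * dx)
        p /= p.sum()                     # partition of unity
        r = p @ dx                       # dual gradient; zero enforces sum p*x_a = x
        if abs(r) < 1e-12:
            break
        J = p @ dx**2 - r**2             # dual Hessian (a variance, hence positive)
        lam -= r / J
    return p

nodes = np.linspace(0.0, 1.0, 6)
p = lme_shape_functions(nodes, 0.37, beta=30.0)
```

The shape functions are non-negative, sum to one, and reproduce linear fields exactly at x; varying β trades locality of support against smoothness, which is exactly the Pareto trade-off the abstract describes.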
Peer effects, fast food consumption and adolescent weight gain.
Fortin, Bernard; Yazbeck, Myra
2015-07-01
This paper aims at opening the black box of peer effects in adolescent weight gain. Using Add Health data on secondary schools in the U.S., we investigate whether these effects partly flow through the eating-habits channel. Adolescents are assumed to interact through a friendship social network. We propose a two-equation model. The first equation provides a social interaction model of fast food consumption. To estimate this equation we use a quasi-maximum-likelihood approach that allows us to control for common environment at the network level and to solve the simultaneity (reflection) problem. Our second equation is a dynamic panel weight production function relating an individual's Body Mass Index z-score (zBMI) to his fast food consumption and his lagged zBMI, allowing for irregular intervals in the data. Results show that there are positive but small peer effects in fast food consumption among adolescents belonging to the same friendship school network. Based on our preferred specification, the estimated social multiplier is 1.15. Our results also suggest that, in the long run, an extra day of weekly fast food restaurant visits increases zBMI by 4.45% when ignoring peer effects and by 5.11% when they are taken into account.
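The reported figures are internally consistent: the social multiplier equals the ratio of the long-run effect with peer effects to the effect ignoring them. A quick check (the variable names are ours, the numbers are from the abstract):

```python
# Long-run effects of one extra weekly fast-food restaurant day on zBMI (from the abstract):
direct_effect = 4.45   # % increase, peer effects ignored
total_effect = 5.11    # % increase, peer effects included

# Social multiplier = total (equilibrium) effect / direct effect.
social_multiplier = total_effect / direct_effect
print(round(social_multiplier, 2))  # → 1.15, matching the reported multiplier
```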
Determinants of weight regain after bariatric surgery.
Bastos, Emanuelle Cristina Lins; Barbosa, Emília Maria Wanderley Gusmão; Soriano, Graziele Moreira Silva; dos Santos, Ewerton Amorim; Vasconcelos, Sandra Mary Lima
2013-01-01
Bariatric surgery leads to an average loss of 60-75% of excess body weight with maximum weight loss in the period between 18 and 24 months postoperatively. However, several studies show that weight is regained from two years of operation. To identify the determinants of weight regain in post-bariatric surgery users. Prospective cross-sectional study with 64 patients who underwent bariatric surgery with postoperative time > 2 years valued at significant weight regain. The variables analyzed were age, sex, education, socioeconomic status, work activity related to food, time after surgery, BMI, percentage of excess weight loss, weight gain, attendance monitoring nutrition, lifestyle, eating habits, self-perception of appetite, daily use of nutritional supplements and quality of life. There were 57 (89%) women and 7 (11%) men, aged 41.76 ± 7.93 years and mean postoperative period of 53.4 ± 18.4 months. The average weight and BMI were respectively 127.48 ± 24.2 kg and 49.56 ± 6.7 kg/m2 at surgery. The minimum weight and BMI were achieved 73.0 ± 18.6 kg and 28.3 ± 5.5 kg/m2, reached in 23.7 ± 12 months postoperatively. Regained significant weight occurred in 18 (28.1%) cases. The mean postoperative period of 66 ± 8.3 months and work activities related to food showed statistical significance (p=000 and p=0.003) for the regained weight. Bariatric surgery promotes adequate reduction of excess body weight, with significant weight regain observed after five years; post-operative time and work activity related to eating out as determining factors for the occurrence of weight regain.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
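The remedy the abstract recommends can be imitated with a generic optimizer. The sketch below is illustrative, not the authors' modified Levenberg-Marquardt: it fits a Gaussian peak on a flat background to Poisson-distributed histogram counts by minimizing the Poisson negative log-likelihood Σ(μᵢ − nᵢ ln μᵢ) instead of a chi^2 statistic. All parameter values and names are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
centers = np.linspace(-5.0, 5.0, 101)        # bin centers, arbitrary units

def model(theta, x):
    # Gaussian peak on a flat background (hypothetical model for the example).
    a, mu, sig = theta
    return a * np.exp(-0.5 * ((x - mu) / sig) ** 2) + 1.0

truth = (50.0, 0.3, 1.2)
counts = rng.poisson(model(truth, centers))  # Poisson-sampled histogram

def poisson_nll(theta):
    # Poisson negative log-likelihood sum(mu_i - n_i*log(mu_i)), up to a constant.
    m = model(theta, centers)
    if np.any(m <= 0):
        return np.inf
    return float(np.sum(m - counts * np.log(m)))

fit = minimize(poisson_nll, x0=(40.0, 0.0, 1.0), method="Nelder-Mead")
a_hat, mu_hat, sig_hat = fit.x
```

Unlike a naive chi^2 fit with σᵢ² = nᵢ, this likelihood handles empty bins gracefully and avoids the low-count bias the abstract describes.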
Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons
Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J.; Mitter, Sanjoy K.
2014-10-01
We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum-work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state and operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamics arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.
Maximum disturbance review criteria : operational code and guideline
NONE
2003-07-01
This maximum disturbance review criteria (MDRC) is designed to encourage oil and gas construction contractors to reduce environmental impacts and consider land use and water management strategies in their development plans. The MDRC describes the preferred maximum disturbance allowances for the development of wellsites, access routes, right of way for pipelines and other associated facilities such as remote sumps, decking sites, camp sites and borrow pits. The guidelines specify acceptable parameters for typical oil and gas development activities. This report includes operating code tables which describe clearings and setbacks, access roads, watercourses, and photography and assessment reports for oil and gas activity. Additional care is required if special wildlife habitat features are encountered such as nesting sites, mineral licks, bear dens or beaver ponds. 4 tabs.
Night vision image fusion for target detection with improved 2D maximum entropy segmentation
Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi
2013-08-01
Infrared and LLL (low-light-level) images are used for night vision target detection. Given the characteristics of night vision imaging and the inadequacy of traditional detection algorithms for the segmentation and extraction of targets, we propose a method of infrared and LLL image fusion for target detection with improved 2D maximum entropy segmentation. First, the two-dimensional histogram is improved using the gray level and the maximum gray level in a weighted neighborhood; weights are selected to calculate the maximum entropy for infrared and LLL image segmentation using this histogram. Compared with traditional maximum entropy segmentation, the algorithm performs significantly better in target detection, suppressing the background while extracting targets. We then verify the validity of a multi-dimensional-feature AND operation for feature-level fusion of infrared and LLL imagery in target detection. Experimental results show that the detection algorithm performs well in single- and multiple-target detection against complex backgrounds.
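The abstract does not specify the full 2-D weighted-histogram variant, but the 1-D maximum entropy (Kapur) thresholding it extends is easy to sketch: the threshold is chosen to maximize the sum of the entropies of the below- and above-threshold gray-level distributions. The histogram below is synthetic, purely for illustration.

```python
import numpy as np

def max_entropy_threshold(hist):
    """Kapur's 1-D maximum entropy threshold: pick t maximizing the summed
    entropies of the two class distributions split at t."""
    p = hist / hist.sum()
    c = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0 = c[t - 1]
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[:t][p[:t] > 0] / w0        # below-threshold class, renormalized
        p1 = p[t:][p[t:] > 0] / w1        # above-threshold class, renormalized
        h = -np.sum(p0 * np.log(p0)) - np.sum(p1 * np.log(p1))
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Synthetic bimodal gray-level histogram: two Gaussian modes at levels 60 and 190.
levels = np.arange(256)
hist = np.exp(-((levels - 60) ** 2) / 200.0) + np.exp(-((levels - 190) ** 2) / 200.0)
t = max_entropy_threshold(hist)
```

The 2-D version in the paper replaces the 1-D histogram with a joint histogram over gray level and a local weighted gray-level statistic, but the entropy-maximizing threshold search has the same shape.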
Optimal weighted nearest neighbour classifiers
Samworth, Richard J
2011-01-01
We derive an asymptotic expansion for the excess risk (regret) of a weighted nearest-neighbour classifier. This allows us to find the asymptotically optimal vector of non-negative weights, which has a rather simple form. We show that the ratio of the regret of this classifier to that of an unweighted $k$-nearest neighbour classifier depends asymptotically only on the dimension $d$ of the feature vectors, and not on the underlying population densities. The improvement is greatest when $d=4$, but thereafter decreases as $d \\rightarrow \\infty$. The popular bagged nearest neighbour classifier can also be regarded as a weighted nearest neighbour classifier, and we show that its corresponding weights are somewhat suboptimal when $d$ is small (in particular, worse than those of the unweighted $k$-nearest neighbour classifier when $d=1$), but are close to optimal when $d$ is large. Finally, we argue that improvements in the rate of convergence are possible under stronger smoothness assumptions, provided we allow nega...
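A weighted nearest-neighbour classifier of the kind analyzed here is straightforward to sketch: the i-th nearest training point contributes weight wᵢ to its class's vote. The snippet below is a minimal illustration; the weight vector is left as an input, since the paper's result is the asymptotically optimal choice of those weights, which is not reproduced here.

```python
import numpy as np

def weighted_nn_predict(X_train, y_train, x, weights):
    """Weighted nearest-neighbour vote: weights[i] goes to the class of the
    i-th nearest training point; the class with the largest total wins."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[: len(weights)]
    votes = {}
    for w, idx in zip(weights, nearest):
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + w
    return max(votes, key=votes.get)

# Toy two-cluster data for illustration.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
w = np.array([0.5, 0.3, 0.2])   # non-negative, decreasing with neighbour rank
label = weighted_nn_predict(X, y, np.array([0.2, 0.2]), w)  # → 0
```

Uniform weights w = (1/k, …, 1/k) recover the unweighted k-nearest neighbour classifier, the baseline against which the paper measures regret.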
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
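The stated bound can be evaluated directly: the maximum seismic moment is the injected volume times the modulus of rigidity, converted to moment magnitude with the standard relation Mw = (2/3)(log₁₀ M₀ − 9.1). The rigidity and volume below are illustrative round numbers, not values from the paper.

```python
import math

# Illustrative inputs (assumptions, not data from the case histories):
rigidity = 3.0e10          # modulus of rigidity G of the rock mass, Pa
injected_volume = 1.0e6    # total injected fluid volume, m^3

# Bound from the abstract: maximum seismic moment = G * injected volume.
M0_max = rigidity * injected_volume                 # N*m
# Standard moment-magnitude conversion.
Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)
print(f"M0_max = {M0_max:.1e} N*m, Mw_max = {Mw_max:.2f}")  # Mw_max ≈ 4.92
```

A million cubic metres of injected fluid thus bounds the induced event near magnitude 5, consistent with the abstract's observation that wastewater-disposal maximum magnitudes sometimes exceed 5 for the largest injection volumes.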
Management of Truck Loading Weight: A Critical Review of the Literature and Recommended Remedies
Aliakbari Mozhgan
2016-01-01
Full Text Available Traffic accidents involving heavy trucks have social and economic effects on society. However, little research has focused on the influence of heavy truck specifications such as weight. Apportioning the maximum permissible gross weight of trucks allows trucking companies/owners to consolidate loads, and therefore reduce the vehicle-kilometres required to collect and distribute a given amount of goods/material. While drivers/managers are responsible for ensuring that trucks are loaded appropriately and in compliance with regulations, some may take chances and overload vehicles. This increases the need for formal and documented inspections, in order to reduce traffic hazards on public roads due to overweight loading. According to a New South Wales Centre for Road Safety report in 2014, crashes involving heavy trucks often result in serious road trauma outcomes. When a heavy truck is involved in a crash, the vehicle mass raises the crash forces involved and hence increases the severity of the crash. Therefore, interventions should be established to mitigate or prevent these crashes from occurring. Currently, weight checks are required for trucks and truck drivers must drive to a weighbridge for a weight check. Since this is a random process, truck drivers may take the risk of driving an over-loaded truck on some occasions. This paper reviews existing studies concerning safe system interventions in relation to truck gross weight management and a framework is presented to effectively manage truck loading weight. The result may be a reduction of injuries and fatalities involving heavy trucks.
A Maximum Likelihood Approach to Least Absolute Deviation Regression
Yinbo Li
2004-09-01
Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
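A minimal sketch of the weighted-median primitive the abstract refers to, not the authors' full algorithm: for the no-intercept model y = a*x, the classical LAD estimate of the slope a is the weighted median of the ratios y_i/x_i with weights |x_i|.

```python
def weighted_median(values, weights):
    # sort by value; return the first value whose cumulative weight
    # reaches half of the total weight
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= total / 2:
            return v

def lad_slope(x, y):
    # LAD fit of y = a*x: slope is the weighted median of y_i/x_i
    # with weights |x_i|
    ratios = [yi / xi for xi, yi in zip(x, y)]
    return weighted_median(ratios, [abs(xi) for xi in x])
```

The full regression problem iterates such weighted-median location estimates along edge directions of the cost surface; this snippet only illustrates the single building block.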
On $2k$-Variable Symmetric Boolean Functions with Maximum Algebraic Immunity $k$
Wang, Hui; Li, Yuan; Kan, Haibin
2011-01-01
Given a positive even integer $n$, it is found that the weight distribution of any $n$-variable symmetric Boolean function with maximum algebraic immunity $\frac{n}{2}$ is determined by the binary expansion of $n$. Based on that, all $n$-variable symmetric Boolean functions with maximum algebraic immunity are constructed. Their number is $(2\,\mathrm{wt}(n)+1)\,2^{\lfloor \log_2 n \rfloor}$, where $\mathrm{wt}(n)$ is the Hamming weight of $n$.
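A quick arithmetic illustration of the counting formula stated above (the function name is ours):

```python
def count_max_ai_symmetric(n):
    # (2*wt(n) + 1) * 2**floor(log2(n)) for a positive even integer n,
    # where wt(n) is the Hamming weight of n's binary expansion
    assert n > 0 and n % 2 == 0
    wt = bin(n).count("1")
    return (2 * wt + 1) * 2 ** (n.bit_length() - 1)

# n = 8: wt = 1, floor(log2 8) = 3, so (2*1+1)*2**3 = 24
# n = 6: wt = 2, floor(log2 6) = 2, so (2*2+1)*2**2 = 20
```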
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Anderson, L. S.; Wickert, A. D.; Colgan, W. T.; Anderson, R. S.
2014-12-01
The Last Glacial Maximum (LGM) Yellowstone Ice Cap was the largest continuous ice body in the US Rocky Mountains. Terminal moraine ages derived from cosmogenic radionuclide dating (e.g., Licciardi and Pierce, 2008) constrain the timing of maximum Ice Cap extent. Importantly, the moraine ages vary by several thousand years around the Ice Cap; ages on the eastern outlet glaciers are significantly younger than their western counterparts. In order to interpret these observations within the context of LGM climate in North America, we perform two numerical glacier modeling experiments: 1) We model the initiation and growth of the Ice Cap to steady state; and 2) We estimate the range of LGM climate states which led to the formation of the Ice Cap. We use an efficient semi-implicit 2-D glacier model coupled to a fully implicit solution for flexural isostasy, allowing for transient links between climatic forcing, ice thickness, and earth surface deflection. Independent of parameter selection, the Ice Cap initiates in the Absaroka and Beartooth mountains and then advances across the Yellowstone plateau to the west. The Ice Cap advances to its maximum extent first to the older eastern moraines and last to the younger western and northwestern moraines. This suggests that the moraine ages may reflect the timescale required for the Ice Cap to advance across the high elevation Yellowstone plateau rather than the timing of local LGM climate. With no change in annual precipitation from the present, a mean summer temperature drop of 8-9° C is required to form the Ice Cap. Further parameter searches provide the full range of LGM paleoclimate states that led to the Yellowstone Ice Cap. Using our preferred parameter set, we find that the timescale for the growth of the complete Ice Cap is roughly 10,000 years. Isostatic subsidence helps explain the long timescale of Ice Cap growth. The Yellowstone Ice Cap caused a maximum surface deflection of 300 m (using a constant effective elastic
Payoff-monotonic game dynamics and the maximum clique problem.
Pelillo, Marcello; Torsello, Andrea
2006-05-01
Evolutionary game-theoretic models and, in particular, the so-called replicator equations have recently proven to be remarkably effective at approximately solving the maximum clique and related problems. The approach is centered around a classic result from graph theory that formulates the maximum clique problem as a standard (continuous) quadratic program and exploits the dynamical properties of these models, which, under a certain symmetry assumption, possess a Lyapunov function. In this letter, we generalize previous work along these lines in several respects. We introduce a wide family of game-dynamic equations known as payoff-monotonic dynamics, of which replicator dynamics are a special instance, and show that they enjoy precisely the same dynamical properties as standard replicator equations. These properties make any member of this family a potential heuristic for solving standard quadratic programs and, in particular, the maximum clique problem. Extensive simulations, performed on random as well as DIMACS benchmark graphs, show that this class contains dynamics that are considerably faster than and at least as accurate as replicator equations. One problem associated with these models, however, relates to their inability to escape from poor local solutions. To overcome this drawback, we focus on a particular subclass of payoff-monotonic dynamics used to model the evolution of behavior via imitation processes and study the stability of their equilibria when a regularization parameter is allowed to take on negative values. A detailed analysis of these properties suggests a whole class of annealed imitation heuristics for the maximum clique problem, which are based on the idea of varying the parameter during the imitation optimization process in a principled way, so as to avoid unwanted inefficient solutions. Experiments show that the proposed annealing procedure does help to avoid poor local optima by initially driving the dynamics toward promising regions in
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain-boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced …
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find …
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions …
Maximum-entropy for the laser fusion problem
Madkour, M.A. [Mansoura Univ. (Egypt). Dept. of Phys.]
1996-09-01
The problem of heat flux at the critical surfaces and the surfaces of a pellet of deuterium and tritium (conduction zone) heated by laser has been considered. Only ion-electron collisions are allowed for; i.e., the linear transport equation is used to describe the problem, with boundary conditions. The maximum-entropy approach is used to calculate the electron density and temperature across the conduction zone as well as the heat flux. Numerical results are given and compared with those of Rouse and Williams and of El-Wakil et al. (orig.).
Extracting volatility signal using maximum a posteriori estimation
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
A Maximum Entropy Modelling of the Rain Drop Size Distribution
Francisco J. Tapiador
2011-01-01
This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling by updating the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
Effect of salinity stress on plant fresh weight and nutrient ...
user
2011-03-07
Mar 7, 2011 ... Although Brassica species produce maximum yield under normal soil and ..... germination of seeds and growth of young plants of hordeum vulgare, ... weight and plant growth of soybean (Glycine max L. Merrill) cultivars.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, for both continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
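As a rough numerical sanity check of the quoted relation v_h ~ T_BBN^2/(M_pl y_e^5): the parameter choices below (T_BBN ≈ 1 MeV, the full Planck mass 1.22e19 GeV, and the Yukawa convention m_e = y_e v/√2) are our illustrative assumptions, not values taken from the paper.

```python
import math

m_e = 0.000511               # electron mass, GeV
v = 246.0                    # Higgs vacuum expectation value, GeV
y_e = math.sqrt(2) * m_e / v # electron Yukawa coupling, ~2.9e-6
T_BBN = 1e-3                 # BBN temperature ~1 MeV, in GeV
M_pl = 1.22e19               # Planck mass, GeV

# order-of-magnitude estimate of the weak scale, in GeV;
# comes out at a few hundred GeV, consistent with O(300 GeV)
v_h = T_BBN**2 / (M_pl * y_e**5)
```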
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
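The multi-trial and multi-day figures quoted above behave like Spearman-Brown aggregates of a single-measurement reliability; treating them that way is our assumption, used here only to show the numbers are internally consistent:

```python
def spearman_brown(r_single, k):
    # reliability of the mean of k parallel measurements,
    # given the reliability r_single of a single measurement
    return k * r_single / (1 + (k - 1) * r_single)

# single-trial reliability 0.939 -> five trials gives ~0.987,
# and single-day 0.836 -> two days gives ~0.911, matching the abstract
```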
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
Weight loss, weight maintenance, and adaptive thermogenesis.
Camps, Stefan G J A; Verhoef, Sanne P M; Westerterp, Klaas R
2013-05-01
Diet-induced weight loss is accompanied by adaptive thermogenesis, i.e., a disproportional or greater-than-expected reduction of resting metabolic rate (RMR). The aim of this study was to investigate whether adaptive thermogenesis is sustained during weight maintenance after weight loss. Subjects were 22 men and 69 women [mean ± SD age: 40 ± 9 y; body mass index (BMI; in kg/m^2): 31.9 ± 3.0]. They followed a very-low-energy diet for 8 wk, followed by a 44-wk period of weight maintenance. Body composition was assessed with a 3-compartment model based on body weight, total body water (deuterium dilution), and body volume. RMR was measured (RMRm) with a ventilated hood. In addition, RMR was predicted (RMRp) on the basis of the measured body composition: RMRp (MJ/d) = 0.024 × fat mass (kg) + 0.102 × fat-free mass (kg) + 0.85. Measurements took place before the diet and 8, 20, and 52 wk after the start of the diet. The ratio of RMRm to RMRp decreased significantly from 1.004 ± 0.077 before the diet to 0.963 ± 0.073 after the diet, was still decreased after 20 wk (0.983 ± 0.063), and was related to weight loss after 8 wk. Weight loss results in adaptive thermogenesis, and there is no indication of a change in adaptive thermogenesis up to 1 y when weight loss is maintained. This trial was registered at clinicaltrials.gov as NCT01015508.
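The prediction equation quoted in the abstract is straightforward to apply; the ratio function and the sample numbers below are illustrative, not from the study:

```python
def rmr_predicted(fat_mass_kg, fat_free_mass_kg):
    # RMRp (MJ/d) = 0.024 * FM + 0.102 * FFM + 0.85 (equation from the abstract)
    return 0.024 * fat_mass_kg + 0.102 * fat_free_mass_kg + 0.85

def adaptive_thermogenesis_ratio(rmr_measured, fat_mass_kg, fat_free_mass_kg):
    # RMRm/RMRp; values below 1 indicate a greater-than-expected drop in RMR
    return rmr_measured / rmr_predicted(fat_mass_kg, fat_free_mass_kg)

# e.g. FM = 30 kg, FFM = 55 kg -> RMRp = 0.72 + 5.61 + 0.85 = 7.18 MJ/d
```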
Kim, K. M.; Smetana, P.
1990-03-01
Growth of large diameter Czochralski (CZ) silicon crystals requires complete elimination of dislocations by means of the Dash technique, in which the seed diameter is reduced to a small size, typically 3 mm, in conjunction with an increase in the pull rate. The maximum length of a large CZ silicon crystal is estimated from the fracture stress limit of the seed neck of diameter d. The maximum lengths for 200 and 300 mm CZ crystals amount to 197 and 87 cm, respectively, with d = 0.3 cm; the estimated maximum weight is 144 kg.
Allowable carbon emissions for medium-to-high mitigation scenarios
Tachiiri, Kaoru; Hargreaves, Julia C.; Annan, James D.; Kawamiya, Michio [Research Inst. for Global Change, Japan Agency for Marine-Earth Science and Technology, Yokohama, (Japan)], e-mail: tachiiri@jamstec.go.jp; Huntingford, Chris [Centre for Ecology and Hydrology, Wallingford (United Kingdom)
2013-11-15
source to the atmosphere, although uncertainties on this are large. The parameters which most significantly affect the allowable emissions are aerosols and climate sensitivity, but some carbon-cycle-related parameters (e.g. the maximum photosynthetic rate and the temperature dependence of vegetation respiration) also have significant effects. Parameter values are constrained by observation, and we found that the CO2 emission data had a significant effect in constraining climate sensitivity and the magnitude of aerosol radiative forcing.
Dietary protein, weight loss, and weight maintenance.
Westerterp-Plantenga, M S; Nieuwenhuizen, A; Tomé, D; Soenen, S; Westerterp, K R
2009-01-01
The role of dietary protein in weight loss and weight maintenance encompasses influences on crucial targets for body weight regulation, namely satiety, thermogenesis, energy efficiency, and body composition. Protein-induced satiety may be mainly due to oxidation of amino acids fed in excess, especially in diets with "incomplete" proteins. Protein-induced energy expenditure may be due to protein and urea synthesis and to gluconeogenesis; "complete" proteins having all essential amino acids show larger increases in energy expenditure than do lower-quality proteins. With respect to adverse effects, no protein-induced effects are observed on net bone balance or on calcium balance in young adults and elderly persons. Dietary protein even increases bone mineral mass and reduces incidence of osteoporotic fracture. During weight loss, nitrogen intake positively affects calcium balance and consequent preservation of bone mineral content. Sulphur-containing amino acids cause a blood pressure-raising effect by loss of nephron mass. Subjects with obesity, metabolic syndrome, and type 2 diabetes are particularly susceptible groups. This review provides an overview of how sustaining absolute protein intake affects metabolic targets for weight loss and weight maintenance during negative energy balance, i.e., sustaining satiety and energy expenditure and sparing fat-free mass, resulting in energy inefficiency. However, the long-term relationship between net protein synthesis and sparing fat-free mass remains to be elucidated.
Healthy adults maximum oxygen uptake prediction from a six minute walking test
Nury Nusdwinuringtyas
2011-08-01
Background: A parameter is needed in medical activities or services to determine functional capacity. This study aimed to produce a functional capacity parameter for Indonesian adults in the form of maximum oxygen uptake. Methods: This study used 123 healthy Indonesian adult subjects (58 males and 65 females) with a sedentary lifestyle, using a cross-sectional method. Results: Designed using the following variables: distance, body height, body weight, sex, age, maximum heart rate during the six minute walking test, and lung capacity (FEV and FVC), the study revealed a good correlation (except for body weight) with maximum oxygen uptake. Three new formulas were proposed, consisting of eight, six, and five variables, respectively. Tests of the new formulas gave maximum oxygen uptake results relevant to the gold standard measured with the Cosmed® C-Pex. Conclusion: The Nury formula is an appropriate predictor of maximum oxygen uptake for healthy Indonesian adults, as it was designed using Indonesian (Mongoloid) subjects, in contrast to Cahalin's formula (Caucasian). The Nury formula consisting of five variables is the most applicable because it requires neither measurement tools nor specific competency. (Med J Indones 2011;20:195-200) Keywords: maximum oxygen uptake, Nury's formula, six minute walking test
Weight management in pregnancy
Olander, E. K.
2015-01-01
Key learning points: - Women who start pregnancy in an overweight or obese weight category have increased health risks - Irrespective of pre-pregnancy weight category, there are health risks associated with gaining too much weight in pregnancy for both mother and baby - There are currently no official weight gain guidelines for pregnancy in the UK, thus focus needs to be on supporting pregnant women to eat healthily and keep active
Mroczka Janusz
2014-12-01
Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. Under uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions. An appropriate strategy for tracking the maximum power point is then chosen by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
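A toy sketch of the decision step on a sampled power-voltage curve (the helper names are ours; the abstract does not specify the actual detection criterion): count the local power maxima, since one peak suggests uniform insolation while several suggest partial shading and the need for a global search.

```python
def local_maxima(power):
    # indices of strict local maxima in a sampled P-V curve
    return [i for i in range(1, len(power) - 1)
            if power[i - 1] < power[i] > power[i + 1]]

def pick_mpp(voltage, power):
    # uniform insolation -> at most one interior peak; otherwise scan
    # all samples for the global maximum power point
    uniform = len(local_maxima(power)) <= 1
    i = max(range(len(power)), key=power.__getitem__)
    return uniform, voltage[i], power[i]
```

In a real tracker the "uniform" branch would run a cheap hill-climbing method (e.g. perturb-and-observe) instead of a full sweep; the sweep here stands in for the global-search strategy.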
Leenen, L
2007-12-01
The authors present a variant of the Weighted Maximum Satisfiability Problem (Weighted Max-SAT), which is a modeling of the Semiring Constraint Satisfaction framework. They show how to encode a Semiring Constraint Satisfaction Problem (SCSP...
Maximum mass, moment of inertia and compactness of relativistic stars
Breu, Cosima
2016-01-01
A number of recent works have highlighted that it is possible to express the properties of general-relativistic stellar equilibrium configurations in terms of functions that do not depend on the specific equation of state employed to describe matter at nuclear densities. These functions are normally referred to as "universal relations" and have been found to apply, within limits, both to static or stationary isolated stars, as well as to fully dynamical and merging binary systems. Further extending the idea that universal relations can be valid also away from stability, we show that a universal relation is exhibited also by equilibrium solutions that are not stable. In particular, the mass of rotating configurations on the turning-point line shows a universal behaviour when expressed in terms of the normalised Keplerian angular momentum. In turn, this allows us to compute the maximum mass allowed by uniform rotation, M_{max}, simply in terms of the maximum mass of the nonrotating configuration, M_{TOV}, findi...
40 CFR 73.21 - Phase II repowering allowances.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Phase II repowering allowances. 73.21... (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Allocations § 73.21 Phase II repowering allowances. (a) Repowering allowances. In addition to allowances allocated under § 73.10(b), the Administrator will...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
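Neither of the paper's two algorithms is reproduced here; as a minimal NumPy sketch of the underlying optimization, maximize log det(Theta) - tr(S Theta) - lam * ||Theta||_1, a plain proximal-gradient loop (step size, penalty, and iteration count are illustrative):

```python
import numpy as np

def sparse_precision(S, lam=0.2, step=0.05, iters=300):
    """Proximal-gradient sketch of the l1-penalized Gaussian MLE.
    The gradient of the smooth part is inv(Theta) - S; the l1 term
    is handled by soft-thresholding the off-diagonal entries."""
    p = S.shape[0]
    theta = np.eye(p)
    for _ in range(iters):
        theta = theta + step * (np.linalg.inv(theta) - S)
        shrunk = np.sign(theta) * np.maximum(np.abs(theta) - step * lam, 0.0)
        shrunk[np.diag_indices(p)] = np.diag(theta)  # do not shrink the diagonal
        theta = (shrunk + shrunk.T) / 2.0            # keep it symmetric
    return theta

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))      # three independent variables
S = np.cov(X, rowvar=False)
theta = sparse_precision(S)
```

For independent variables the estimated precision matrix stays essentially diagonal: the l1 penalty drives the spurious off-diagonal entries of the sample covariance to zero.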
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
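As a concrete instance of one model in this class, a sketch of the exactly discretized Ornstein-Uhlenbeck position process; the parameter names and values are illustrative:

```python
import numpy as np

def simulate_ou(n_steps, dt=0.1, tau=1.0, sigma=1.0, mu=0.0, seed=1):
    """Exact discretization of an Ornstein-Uhlenbeck position process:
    x_{k+1} = mu + (x_k - mu) * a + sqrt(sigma^2 * (1 - a^2)) * xi,
    with a = exp(-dt / tau); the stationary variance is sigma^2."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)
    noise_sd = sigma * np.sqrt(1.0 - a * a)
    x = np.empty(n_steps)
    x[0] = mu
    for k in range(1, n_steps):
        x[k] = mu + (x[k - 1] - mu) * a + noise_sd * rng.standard_normal()
    return x

track = simulate_ou(100_000)
```

A long simulated track should relax to the stationary distribution, with sample mean near `mu` and sample variance near `sigma**2`.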
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We find the approximation ratios of two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
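A minimal sketch of the two off-line heuristics named above, plain First-Fit applied to increasingly and decreasingly sorted items on unit-capacity bins; the instance is illustrative:

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first bin it fits in; open a new bin
    if none fits. Returns the number of bins used."""
    bins = []
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= capacity + 1e-12:
                bins[i] = load + size
                break
        else:
            bins.append(size)
    return len(bins)

items = [0.6, 0.5, 0.4, 0.3, 0.2]
ffi = first_fit(sorted(items))                 # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing
```

On this instance the increasing order opens more bins (3) than the decreasing order (2), which is why the increasing variant is the natural greedy candidate when the goal is to maximize the number of bins used.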
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced for planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Weighted principal component analysis: a weighted covariance eigendecomposition approach
Delchambre, Ludovic
2014-01-01
We present a new straightforward principal component analysis (PCA) method based on the diagonalization of the weighted variance-covariance matrix through two spectral decomposition methods: power iteration and Rayleigh quotient iteration. This method allows one to retrieve a given number of orthogonal principal components amongst the most meaningful ones for the case of problems with weighted and/or missing data. Principal coefficients are then retrieved by fitting principal components to the data while providing the final decomposition. Tests performed on real and simulated cases show that our method is optimal in the identification of the most significant patterns within data sets. We illustrate the usefulness of this method by assessing its quality on the extrapolation of Sloan Digital Sky Survey quasar spectra from measured wavelengths to shorter and longer wavelengths. Our new algorithm also benefits from a fast and flexible implementation.
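A minimal sketch of the approach, assuming the weighted covariance is formed with per-observation weights and the leading component is extracted by power iteration (the paper also handles missing data and Rayleigh quotient iteration, which are omitted here):

```python
import numpy as np

def weighted_first_pc(X, w, iters=100, seed=0):
    """Leading principal component of the weighted covariance matrix
    C = (X - m)^T diag(w) (X - m) / sum(w), found by power iteration."""
    w = np.asarray(w, dtype=float)
    m = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted mean
    Xc = X - m
    C = (Xc * w[:, None]).T @ Xc / w.sum()       # weighted covariance
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(C.shape[1])
    for _ in range(iters):
        v = C @ v                                # power iteration step
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(42)
X = rng.standard_normal((300, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
w = rng.uniform(0.5, 1.0, size=300)
pc = weighted_first_pc(X, w)
```

Since the first coordinate carries most of the variance, the recovered component aligns (up to sign) with the first axis.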
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation-damaged cells is currently based on probabilistic assumptions and is fitted experimentally for each tumor, radiation type, and set of conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle applied to the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
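The definition can be computed directly from the adjacency spectrum; a small sketch on the path graph P3, whose adjacency eigenvalues are known in closed form (-sqrt(2), 0, sqrt(2)):

```python
import numpy as np

def estrada_index(adj):
    """EE(G) = sum_i exp(lambda_i), over the eigenvalues of the
    (symmetric) adjacency matrix."""
    return float(np.exp(np.linalg.eigvalsh(adj)).sum())

# Path graph P3 on vertices 0-1-2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
ee = estrada_index(A)
```

For P3 this gives EE = exp(sqrt(2)) + exp(-sqrt(2)) + 1, approximately 5.356.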
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
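The prediction equation X_max - X_0 = (0.59 ± 0.02) · Y_X/P · C can be applied directly; the numbers below are illustrative placeholders, not values from the study:

```python
def predicted_max_biomass(x0, y_xp, mic_lactate, coeff=0.59):
    """Prediction equation from the abstract:
    Xmax - X0 = (0.59 +/- 0.02) * Y_X/P * C,
    where Y_X/P is the biomass yield per unit lactate produced
    and C is the MIC of lactate for the strain."""
    return x0 + coeff * y_xp * mic_lactate

# Illustrative inputs only (inoculum 0.1 g/L, yield 0.05 g biomass
# per g lactate, lactate MIC 200 g/L-equivalent units):
xmax = predicted_max_biomass(x0=0.1, y_xp=0.05, mic_lactate=200.0)
```

With these placeholder inputs the predicted maximum biomass is 6.0 in the same units as X_0.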
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
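As a much simpler stand-in for the dissertation's STI and cone-tracing models, a free-field inverse-square sketch of the quantities involved; the 60 dBA background, the 0 dB SNR threshold, and the crowd density of one person per square metre are all assumptions of this sketch:

```python
import math

def spl_at_distance(spl_1m, r):
    """Free-field inverse-square law: SPL falls 6 dB per doubling of distance."""
    return spl_1m - 20.0 * math.log10(r)

def max_intelligible_radius(spl_1m, background_dba, required_snr_db=0.0):
    """Distance at which the speech level drops to background + required SNR.
    The SNR threshold and the free-field assumption are simplifications;
    the dissertation uses STI and acoustic cone tracing instead."""
    return 10.0 ** ((spl_1m - background_dba - required_snr_db) / 20.0)

r = max_intelligible_radius(spl_1m=90.0, background_dba=60.0)
crowd = 1.0 * math.pi * r * r   # assumed density: one person per square metre
```

A 90 dBA voice against a 60 dBA background gives a radius of about 32 m and a semicircular-to-circular audience on the order of a few thousand people, far below the crowd sizes above, which is consistent with the dissertation's point that quiet crowds and favorable geometry were essential.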
Guillemot, Sylvain
2008-01-01
Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant called "Maximum Compatible Tree" (MCT) is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where the trees encountered often have small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to parameter D. Moreover, relying on recent advances in parameterized complexity, we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time, where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable, unless SNP is contained in SE. We also show that MCT is W[1...
Jacques, Paul F; Wang, Huifen
2014-05-01
A large body of observational studies and randomized controlled trials (RCTs) has examined the role of dairy products in weight loss and maintenance of healthy weight. Yogurt is a dairy product that is generally very similar to milk, but it also has some unique properties that may enhance its possible role in weight maintenance. This review summarizes the human RCT and prospective observational evidence on the relation of yogurt consumption to the management and maintenance of body weight and composition. The RCT evidence is limited to 2 small, short-term, energy-restricted trials. They both showed greater weight losses with yogurt interventions, but the difference between the yogurt intervention and the control diet was only significant in one of these trials. There are 5 prospective observational studies that have examined the association between yogurt and weight gain. The results of these studies are equivocal. Two of these studies reported that individuals with higher yogurt consumption gained less weight over time. One of these same studies also considered changes in waist circumference (WC) and showed that higher yogurt consumption was associated with smaller increases in WC. A third study was inconclusive because of low statistical power. A fourth study observed no association between changes in yogurt intake and weight gain, but the results suggested that those with the largest increases in yogurt intake during the study also had the highest increase in WC. The final study examined weight and WC change separately by sex and baseline weight status and showed benefits for both weight and WC changes for higher yogurt consumption in overweight men, but it also found that higher yogurt consumption in normal-weight women was associated with a greater increase in weight over follow-up. Potential underlying mechanisms for the action of yogurt on weight are briefly discussed.
Maziero, G C; Baunwart, C; Toledo, M C
2001-05-01
The theoretical maximum daily intakes (TMDI) of the phenolic antioxidants butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tertbutyl hydroquinone (TBHQ) in Brazil were estimated using food consumption data derived from a household economic survey and a packaged goods market survey. The estimates were based on the maximum levels of use of the food additives specified in national food standards. The calculated intakes of the three additives for the mean consumer were below the ADIs. Estimates of TMDI for BHA, BHT and TBHQ ranged from 0.09 to 0.15, 0.05 to 0.10 and 0.07 to 0.12 mg/kg of body weight, respectively. To check whether the additives are actually used at their maximum authorized levels, analytical determinations of these compounds in selected food categories were carried out using HPLC with UV detection. BHT and TBHQ concentrations in foodstuffs considered to be representative sources of these antioxidants in the diet were below the respective maximum permitted levels. BHA was not detected in any of the analysed samples. Based on the maximal approach and on the analytical data, it is unlikely that the current ADIs of BHA (0.5 mg/kg body weight), BHT (0.3 mg/kg body weight) and TBHQ (0.7 mg/kg body weight) will be exceeded in practice by the average Brazilian consumer.
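The TMDI calculation itself is straightforward: sum the maximum permitted use level times the daily consumption over food categories, then divide by body weight. A sketch with illustrative numbers, not values from the survey:

```python
def theoretical_max_daily_intake(uses, body_weight_kg):
    """TMDI in mg/kg bw/day: sum over food categories of
    (maximum permitted level, mg/kg food) * (daily consumption, kg food),
    divided by body weight. Inputs below are illustrative only."""
    total_mg = sum(level * kg for level, kg in uses)
    return total_mg / body_weight_kg

# Hypothetical categories: (max level mg/kg, daily consumption kg).
uses = [(200.0, 0.02), (100.0, 0.01)]
tmdi = theoretical_max_daily_intake(uses, body_weight_kg=60.0)
```

With these placeholder inputs the TMDI is 5 mg / 60 kg, about 0.083 mg/kg bw/day, which would then be compared against the additive's ADI.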
Relationship between oral status and maximum bite force in preschool children
Ching-Ming Su
2009-03-01
Conclusion: Combining the results of this study, associations of bite force with factors such as age, maximum mouth opening and the number of teeth in contact were clearer than those with other variables such as body height, body weight, occlusal pattern, and tooth decay or fillings.
The 220-age equation does not predict maximum heart rate in children and adolescents
Verschuren, Olaf; Maltais, Desiree B.; Takken, Tim
2011-01-01
Our primary purpose was to provide maximum heart rate (HR(max)) values for ambulatory children with cerebral palsy (CP). The secondary purpose was to determine the effects of age, sex, ambulatory ability, height, and weight on HR(max). In 362 ambulatory children and adolescents with CP (213 males an
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method can allow researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework can allow researchers to assimilate the site-specific secondary information where the observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Increasing the weight of minimum spanning trees
Frederickson, G.N.; Solis-Oba, R. [Purdue Univ., West Lafayette, IN (United States)]
1996-12-31
Given an undirected connected graph G and a cost function for increasing edge weights, the problem of determining the maximum increase in the weight of the minimum spanning trees of G subject to a budget constraint is investigated. Two versions of the problem are considered. In the first, each edge has a cost function that is linear in the weight increase. An algorithm is presented that solves this problem in strongly polynomial time. In the second version, the edge weights are fixed but an edge can be removed from G at a unit cost. This version is shown to be NP-hard. An Ω(1/log k)-approximation algorithm is presented for it, where k is the number of edges to be removed.
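The quantity being optimized, the change in MST weight caused by increasing an edge weight, can be illustrated with a plain Kruskal computation; this evaluates one candidate increase on a toy graph, not the paper's strongly polynomial algorithm:

```python
def mst_weight(n, edges):
    """Kruskal's algorithm with union-find; edges are (weight, u, v)
    tuples over vertices 0..n-1. Returns the total MST weight."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total = 0.0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

# A square with one cheap diagonal; raising the diagonal's weight
# forces the MST onto heavier edges, increasing its total weight.
edges = [(1.0, 0, 2), (2.0, 0, 1), (2.0, 1, 2), (3.0, 2, 3), (3.0, 0, 3)]
before = mst_weight(4, edges)
bumped = [(5.0, u, v) if (w, u, v) == (1.0, 0, 2) else (w, u, v)
          for w, u, v in edges]
after = mst_weight(4, bumped)
```

Here the MST weight rises from 6.0 to 7.0; the optimization problem in the abstract asks how to spend a limited budget on such increases to maximize exactly this change.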
Flux calculations in an inhomogeneous Universe: weighting a flux-limited galaxy sample
Koers, Hylke B J
2009-01-01
Many astrophysical problems arising within the context of ultra-high energy cosmic rays, very-high energy gamma rays or neutrinos, require calculation of the flux produced by sources tracing the distribution of galaxies in the Universe. We discuss a simple weighting scheme, an application of the method introduced by Lynden-Bell in 1971, that allows the calculation of the flux sky map directly from a flux-limited galaxy catalog without cutting a volume-limited subsample. Using this scheme, the galaxy distribution can be modeled up to large scales while representing the distribution in the nearby Universe with maximum accuracy. We consider fluctuations in the flux map arising from the finiteness of the galaxy sample. We show how these fluctuations are reduced by the weighting scheme and discuss how the remaining fluctuations limit the applicability of the method.
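The paper applies Lynden-Bell's method; a simpler classical scheme in the same spirit is 1/V_max weighting, in which each observed galaxy is weighted by the inverse of the volume within which it would pass the flux limit. A seeded toy-catalog sketch, with an illustrative two-valued luminosity function and survey geometry:

```python
import numpy as np

def vmax_weights(luminosities, flux_limit, r_survey):
    """1/V_max weights for a flux-limited sample: each galaxy is weighted
    by the inverse of the volume within which it would remain detectable."""
    d_max = np.minimum(np.sqrt(luminosities / (4.0 * np.pi * flux_limit)),
                       r_survey)
    return 1.0 / ((4.0 / 3.0) * np.pi * d_max ** 3)

rng = np.random.default_rng(7)
n, r_survey, flux_limit = 20_000, 100.0, 1e-4
# Galaxies uniform in a sphere, with a two-valued luminosity function.
d = r_survey * rng.uniform(size=n) ** (1.0 / 3.0)
L = rng.choice([1.0, 10.0], size=n)
observed = L / (4.0 * np.pi * d ** 2) >= flux_limit

density_true = n / ((4.0 / 3.0) * np.pi * r_survey ** 3)
density_est = vmax_weights(L[observed], flux_limit, r_survey).sum()
```

Although faint galaxies are visible only nearby, summing the weights over the observed (flux-limited) sample recovers the true mean number density, which is the essential point of weighting schemes of this kind.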
Comparative analysis of exercise equipment jerk in weightlifting and weight sport
Djim V.Y.
2013-11-01
Full Text Available Approaches to the analysis of exercise technique in weightlifting and kettlebell sport are compared. The method of photographic imaging and analysis of movement videograms was used. Each exercise was performed with weights of 50, 65 and 75% of the maximum limit. The kettlebell snatch was carried out with two kettlebells weighing 32 kg each. An improved jerk technique, in which efficiency is greatest, is proposed. It is noted that during the jerk performed with a squat, the heels are lifted from the platform; this technique does not allow the athlete's full potential to be realized. A jerk technique with the feet fully supported on the platform for the entire period of lifting the barbell is described. It was found that this technique reduces the lift of the bar and produces a bar trajectory whose height grows continuously to its final value.
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen; Hui He; Jun Guo
2008-01-01
Document subjectivity analysis has become an important aspect of web text content mining. This problem is similar to traditional text categorization, so many related classification techniques can be adapted here. However, there is one significant difference: more language or semantic information is required for better estimating the subjectivity of a document. Therefore, in this paper, we focus mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we apply a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance-windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors respectively. Detailed experiments given in this paper show that with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
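For binary classification, a MaxEnt model with a Gaussian prior on the weights is equivalent to l2-regularized logistic regression; a minimal gradient-descent sketch, with toy features and hyperparameters that are illustrative only:

```python
import numpy as np

def train_maxent(X, y, l2=0.1, lr=0.1, iters=3000):
    """Binary MaxEnt (logistic regression) trained by gradient descent.
    The Gaussian prior on the weights corresponds to the l2 penalty;
    an exponential prior would instead yield an l1-style penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + l2 * w / len(y)
        w -= lr * grad
    return w

# Toy "documents": column 0 is a bias term, column 1 a count of
# subjectivity-cue n-grams (hypothetical feature, for illustration).
X = np.array([[1.0, 3.0], [1.0, 2.0], [1.0, 0.0],
              [1.0, 1.0], [1.0, 4.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
w = train_maxent(X, y)
pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
```

The learned weight on the cue-count feature is positive, and the model separates the toy subjective documents from the objective ones.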
The Effect of Sunspot Weighting
Svalgaard, Leif; Cagnotti, Marco; Cortesi, Sergio
2017-02-01
Although W. Brunner began to weight sunspot counts (from 1926), using a method whereby larger spots were counted more than once, he compensated for the weighting by not counting enough smaller spots in order to maintain the same reduction factor (0.6) as was used by his predecessor A. Wolfer to reduce the count to R. Wolf's original scale, so that the weighting did not have any effect on the scale of the sunspot number. In 1947, M. Waldmeier formalized the weighting (on a scale from 1 to 5) of the sunspot count made at Zurich and its auxiliary station Locarno. This explicit counting method, when followed, inflates the relative sunspot number over that which corresponds to the scale set by Wolfer (and matched by Brunner). Recounting some 60,000 sunspots on drawings from the reference station Locarno shows that the number of sunspots reported was "over counted" by ≈ 44 % on average, leading to an inflation (measured by an effective weight factor) in excess of 1.2 for high solar activity. In a double-blind parallel counting by the Locarno observer M. Cagnotti, we determined that Svalgaard's count closely matches that of Cagnotti, allowing us to determine from direct observation the daily weight factor for spots since 2003 (and sporadically before). The effective total inflation turns out to have two sources: a major one (15 - 18 %) caused by weighting of spots, and a minor source (4 - 5 %) caused by the introduction of the Zürich classification of sunspot groups which increases the group count by 7 - 8 % and the relative sunspot number by about half that. We find that a simple empirical equation (depending on the activity level) fits the observed factors well, and use that fit to estimate the weighting inflation factor for each month back to the introduction of effective inflation in 1947 and thus to be able to correct for the over-counts and to reduce sunspot counting to the Wolfer method in use from 1894 onwards.
Healthy weight game!: Lose weight together
Lentelink, S.J.; Spil, Antonius A.M.; Broens, T.; Broens, T.H.F.; Hermens, Hermanus J.; Jones, Valerie M.
2013-01-01
Overweight and obesity pose a serious and increasing problem worldwide. Current treatment methods can result in weight loss in the short term but often fail in the longer term. Increasing motivation and thereby improving adherence can be a key factor in achieving the needed behavioral change. One
Height, weight, weight change and risk of breast cancer in Rio de Janeiro, Brazil
Anelise Bezerra de Vasconcelos
CONTEXT: The relationship between body size and breast cancer remains controversial when menopausal status is considered. OBJECTIVE: To evaluate the association of height, weight and weight changes with breast cancer in the city of Rio de Janeiro, Brazil. DESIGN: Case-control study. SETTING: National Cancer Institute (INCA), Rio de Janeiro, Brazil, and State University of Rio de Janeiro (UERJ). SAMPLE: 177 incident cases of invasive breast cancer admitted to the main hospital of INCA between May 1995 and February 1996, and 377 controls recruited from among female visitors to the same hospital. MAIN MEASUREMENTS: Height and weight were measured, and information on maximum weight, weight at ages 18 and 30 years, and potential risk factors was ascertained by interview at the hospital. RESULTS: Height was not related to risk of breast cancer among either pre- or postmenopausal women. Nevertheless, women in this study were shorter than in studies that have found a positive association. Premenopausal women in the upper quartile of recent body mass index (BMI) and maximum BMI showed a reduced risk of breast cancer (P for trend <= 0.03). Weight loss between ages 18 and 30 years and from age 18 years to the present was also associated with breast cancer among premenopausal women. CONCLUSIONS: These findings may merely reflect the known association between leanness and breast cancer. Further studies should explore the role of weight loss in breast cancer risk.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
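As a toy illustration of the selection step described above, the decision reduces to computing a likelihood statistic for each hypothesized signal and keeping the largest. The sketch below assumes a simple additive complex Gaussian noise model; the waveforms, noise level, and names are invented for illustration, not the patent's estimator-correlator.

```python
import numpy as np

def map_select(samples, hypotheses, noise_var=0.1):
    """Pick the hypothesized phase-coded signal with the largest likelihood statistic.

    samples: observed complex baseband samples (1-D array)
    hypotheses: dict mapping name -> complex reference waveform of the same length
    Returns (best_name, dict of log-likelihood statistics per hypothesis).
    Illustrative Gaussian-noise model only.
    """
    stats = {}
    for name, ref in hypotheses.items():
        resid = samples - ref
        # Log-likelihood up to an additive constant under white Gaussian noise
        stats[name] = -np.sum(np.abs(resid) ** 2) / noise_var
    best = max(stats, key=stats.get)
    return best, stats

# Two hypothesized BPSK-like phase codings (hypothetical example signals)
h = {
    "sig0": np.exp(1j * np.array([0.0, 0.0, np.pi, 0.0])),
    "sig1": np.exp(1j * np.array([np.pi, 0.0, 0.0, np.pi])),
}
rng = np.random.default_rng(0)
rx = h["sig1"] + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
best, stats = map_select(rx, h)
```

With low noise, the statistic for the transmitted hypothesis dominates and the decoder recovers it.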
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied widely. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between these squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
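For a sense of the magnitudes involved, a widely quoted simplified squat estimate attributed to Barrass uses only the block coefficient and the speed in knots. The sketch below implements that rule-of-thumb form as an assumption; the full formulas compared in the paper also involve the blockage factor and channel geometry.

```python
def barrass_squat(cb, vk, confined=False):
    """Simplified Barrass-style estimate of maximum squat in metres.

    cb: block coefficient (dimensionless)
    vk: ship speed in knots
    Rule-of-thumb form: Cb * Vk^2 / 100 in open water,
    Cb * Vk^2 / 50 in a confined channel. This is a commonly quoted
    simplification, not the full Barrass formula with blockage factor.
    """
    divisor = 50.0 if confined else 100.0
    return cb * vk ** 2 / divisor

# A cargo ship with Cb = 0.8 at 10 knots
open_water = barrass_squat(0.8, 10.0)            # 0.8 m
canal = barrass_squat(0.8, 10.0, confined=True)  # 1.6 m
```

The quadratic dependence on speed is why modest speed reductions are the standard mitigation for squat in shallow channels.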
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
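The statement of Zipf's law quoted above can be checked numerically: drawing firm sizes with a Pareto tail of exponent 1, the number of firms larger than a threshold S scales as 1/S. This is a simple sanity check of the law's statement, not the paper's Gibrat-based derivation.

```python
import numpy as np

# Firm sizes with a Pareto(alpha=1) distribution and minimum size 1:
# if U is uniform on (0,1), then 1/U has P(size > S) = 1/S for S >= 1.
rng = np.random.default_rng(42)
n = 200_000
sizes = 1.0 / rng.random(n)

def count_larger(s):
    """Number of firms with size greater than s."""
    return int(np.sum(sizes > s))

# Zipf's law: doubling the threshold should roughly halve the count.
ratio = count_larger(2.0) / count_larger(4.0)
```

With 200,000 draws the empirical ratio sits very close to the theoretical value of 2.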
45 CFR 2522.245 - How are living allowances disbursed?
2010-10-01
..., Requirements, and Benefits § 2522.245 How are living allowances disbursed? A living allowance is not a wage and programs may not pay living allowances on an hourly basis. Programs must distribute the living allowance at... 45 Public Welfare 4 2010-10-01 2010-10-01 false How are living allowances disbursed? 2522.245...
Predictors of weight maintenance
Pasman, W.J.; Saris, W.H.M.; Westerterp-Plantenga, M.S.
1999-01-01
Objective: To obtain predictors of weight maintenance after a weight-loss intervention. Research Methods and Procedures: An overall analysis of data from two long-term intervention studies [n = 67 women; age: 37.9±1.0 years; body weight (BW): 87.0±1.2 kg; body mass index: 32.1±0.5 kg·m⁻²; % body fat: 42.
WU An-Cai; XU Xin-Jian; WU Zhi-Xi; WANG Ying-Hai
2007-01-01
We investigate the dynamics of random walks on weighted networks, assuming that the edge weight and the node strength are used as local information by a random walker. Two kinds of walks, weight-dependent walks and strength-dependent walks, are studied. Exact expressions for the stationary distribution and the average return time are derived and confirmed by computer simulations. The distribution of average return times and the mean-square displacement show that a weight-dependent walker can arrive at a new territory more easily than a strength-dependent one.
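For the weight-dependent walk the stationary distribution has a simple closed form: a walker at node i hops to j with probability w_ij/s_i, where s_i = Σ_j w_ij is the strength, and on an undirected weighted network the stationary probability of node i is s_i/Σ_k s_k. A small numerical check (illustrative, not the paper's simulations):

```python
import numpy as np

# Symmetric weight matrix of a small undirected weighted network
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
s = W.sum(axis=1)          # node strengths s_i
P = W / s[:, None]         # row-stochastic transition matrix P_ij = w_ij / s_i

# Power iteration toward the stationary distribution
pi = np.full(3, 1.0 / 3.0)
for _ in range(200):
    pi = pi @ P

expected = s / s.sum()     # closed form: pi_i = s_i / sum_k s_k
```

The iterated distribution matches the closed form because the chain is reversible with respect to the strengths.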
Adaptive Context Tree Weighting
O'Neill, Alexander; Shao, Wen; Sunehag, Peter
2012-01-01
We describe an adaptive context tree weighting (ACTW) algorithm, as an extension to the standard context tree weighting (CTW) algorithm. Unlike the standard CTW algorithm, which weights all observations equally regardless of the depth, ACTW gives increasing weight to more recent observations, aiming to improve performance in cases where the input sequence is from a non-stationary distribution. Data compression results show ACTW variants improving over CTW on merged files from standard compression benchmark tests while never being significantly worse on any individual file.
42 CFR 495.308 - Net average allowable costs as the basis for determining the incentive payment.
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Net average allowable costs as the basis for... Net average allowable costs as the basis for determining the incentive payment. (a) The first year of..., implementation or upgrade of certified electronic health records technology. (2) The maximum net...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
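The covariance-then-PCA step can be sketched in a few lines. This plain least-squares version omits the paper's maximum-likelihood superposition and uncertainty weighting, and uses synthetic coordinates with one planted correlated mode, purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_atoms = 50, 10
mean_coords = rng.standard_normal((n_atoms, 3))

# Plant one dominant correlated mode of motion plus small uncorrelated noise
mode = rng.standard_normal((n_atoms, 3))
amplitudes = rng.standard_normal(n_models)
ensemble = (mean_coords
            + amplitudes[:, None, None] * mode
            + 0.05 * rng.standard_normal((n_models, n_atoms, 3)))

# Flatten each structure to a coordinate vector and centre the ensemble
X = ensemble.reshape(n_models, -1)
X = X - X.mean(axis=0)

# Sample covariance of positions across the ensemble, then PCA
cov = X.T @ X / (n_models - 1)
evals, evecs = np.linalg.eigh(cov)     # ascending eigenvalues
evals = evals[::-1]                    # descending: principal modes first
frac = evals[0] / evals.sum()          # variance captured by the top mode
```

Because one mode dominates the synthetic ensemble, the leading principal component captures nearly all of the positional variance.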
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure-CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for five processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
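The per-event probability assignment described above amounts to computing a posterior over competing processes from process PDFs and prior fractions. The sketch below uses a single invented observable with made-up Gaussian PDFs and yields, purely to show the mechanics; these are not PEN's actual distributions.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Normal density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two toy processes distinguished by one observable (positron energy, MeV).
# Means, widths, and prior fractions are illustrative assumptions.
priors = {"pi_e2": 1e-4, "mu_chain": 1.0 - 1e-4}
pdfs = {"pi_e2": lambda x: gauss(x, 69.8, 2.0),
        "mu_chain": lambda x: gauss(x, 35.0, 5.0)}

def posteriors(x):
    """Posterior probability of each process for an event with observable x:
    w_k f_k(x) / sum_j w_j f_j(x)."""
    num = {k: priors[k] * pdfs[k](x) for k in priors}
    z = sum(num.values())
    return {k: v / z for k, v in num.items()}

p = posteriors(70.0)   # an event close to the pi -> e nu energy
```

Even with a tiny prior fraction, an event far out on the background tail is confidently assigned to the rare process, which is the essence of the likelihood-based separation.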
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space
Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf;
2012-01-01
The stretch factor and maximum detour of a graph G embedded in a metric space measure how well G approximates the minimum complete graph containing G and the metric space, respectively. In this paper we show that computing the stretch factor of a rectilinear path in the L1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model, and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ... compute the stretch factor or maximum detour of trees and cycles in O(σn log^(d+1) n) time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L1 plane. © 2012 World Scientific...
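The quantity being computed can be pinned down with the naive quadratic-time baseline that the paper improves on: the stretch factor of a path is the maximum, over vertex pairs, of the distance along the path divided by the metric distance (here the L1 metric).

```python
def l1(p, q):
    """L1 (rectilinear) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def path_stretch_factor(pts):
    """Naive O(n^2) stretch factor of a polygonal path under the L1 metric:
    max over vertex pairs of (distance along the path) / (L1 distance).
    The paper's contribution is beating this quadratic baseline."""
    n = len(pts)
    # prefix[i] = length along the path from pts[0] to pts[i]
    prefix = [0.0]
    for i in range(1, n):
        prefix.append(prefix[-1] + l1(pts[i - 1], pts[i]))
    best = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            best = max(best, (prefix[j] - prefix[i]) / l1(pts[i], pts[j]))
    return best

# U-shaped rectilinear path: its endpoints are L1-distance 1 apart
# but distance 3 apart along the path, so the stretch factor is 3.
stretch = path_stretch_factor([(0, 0), (1, 0), (1, 1), (0, 1)])
```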
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
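The minmod limiter mentioned above, and the way it degrades the reconstruction to first order at local extrema, is easy to exhibit:

```python
def minmod(a, b):
    """Minmod slope limiter: returns 0 when the arguments disagree in sign
    (a local extremum), otherwise the argument smaller in magnitude."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Piecewise-linear reconstruction slopes for interior cell averages u.
    At a local extremum the forward and backward differences have opposite
    signs, so minmod returns 0 and the scheme drops to first order there --
    exactly the behaviour the paper's nonlinear reconstructions avoid."""
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]

# Cell averages with a local maximum in the middle cell
slopes = limited_slopes([0.0, 1.0, 3.0, 2.0, 0.0])  # interior cells: 1, 3, 2
```

The middle cell (the extremum) gets slope 0, while the monotone cells keep a limited nonzero slope.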
Sansone, Randy A; Sansone, Lori A
2014-07-01
Acute marijuana use is classically associated with snacking behavior (colloquially referred to as "the munchies"). In support of these acute appetite-enhancing effects, several authorities report that marijuana may increase body mass index in patients suffering from human immunodeficiency virus and cancer. However, for these medical conditions, while appetite may be stimulated, some studies indicate that weight gain is not always clinically meaningful. In addition, in a study of cancer patients in which weight gain did occur, it was less than with the comparator drug (megestrol). However, data generally suggest that acute marijuana use stimulates appetite, and that marijuana use may stimulate appetite in low-weight individuals. As for large epidemiological studies in the general population, findings consistently indicate that users of marijuana tend to have lower body mass indices than nonusers. While paradoxical and somewhat perplexing, these findings may be explained by various study confounds, such as potential differences between acute versus chronic marijuana use; the tendency for marijuana use to be associated with other types of drug use; and/or the possible competition between food and drugs for the same reward sites in the brain. Likewise, perhaps the effects of marijuana are a function of initial weight status; i.e., maybe marijuana is a metabolic regulatory substance that increases body weight in low-weight individuals but not in normal-weight or overweight individuals. Only further research will clarify the complex relationships between marijuana and body weight.
Holstein, Bjørn Evald; Due, Pernille; Brixval, Carina Sjöberg;
2017-01-01
day) communication with friends through cellphones, SMS messages, or Internet (1.66, 1.03-2.67). In the full population, overweight/obese weight status was associated with not perceiving best friend as a confidant (1.59, 1.11-2.28). No associations were found between weight status and number of close...
Weight discrimination and bullying.
Puhl, Rebecca M; King, Kelly M
2013-04-01
Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted. Copyright © 2013 Elsevier Ltd. All rights reserved.
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn; Schmidt, Burkhard; Yang, Yonggang
2014-01-01
We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to ...
Vector control structure of an asynchronous motor at maximum torque
Chioncel, C. P.; Tirian, G. O.; Gillich, N.; Raduca, E.
2016-02-01
Vector control methods offer the possibility of high performance and are widely used. Certain applications require optimum control in limit operating conditions, such as at maximum torque, which is not always satisfied. The paper presents how the voltage and the frequency for an asynchronous machine (ASM) operating at variable speed are determined, with emphasis on the method that keeps the rotor flux constant. The simulation analyses consider three load types: variable torque and speed, variable torque and constant speed, and constant torque and variable speed. The final values of frequency and voltage are obtained through the proposed control schemes with one controller, using a simulation language based on the Maple module. The dynamic analysis of the system is done for the cases with P and PI controllers and allows conclusions on the proposed method, which can have different applications, such as the ASM in wind turbines.
Network Decomposition and Maximum Independent Set Part Ⅰ: Theoretic Basis
朱松年; 朱嫱
2003-01-01
The structure and characteristics of a connected network are analyzed, and a special kind of sub-network, which can optimize the iteration processes, is discovered. Then, the sufficient and necessary conditions for obtaining the maximum independent set are deduced. It is found that the neighborhood of this sub-network possesses similar characteristics, but the two can never be incorporated together. In particular, it is shown that the network can be divided into two parts in a certain way, and then both of them can be transformed into a pair-sets network, where the special sub-networks and their neighborhoods appear alternately distributed throughout the entire pair-sets network. By use of this characteristic, a decomposition of the network that loses no solutions is obtained. All of the above prepares the ground for developing a much better algorithm with polynomial time bound for an odd network in the application research part of this subject.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
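The "very elegant linear-time solution" referred to above is the one usually credited to Kadane; a minimal imperative rendering (the paper itself develops it calculationally and monadically):

```python
def max_segment_sum(xs):
    """Kadane's linear-time algorithm: track the best segment sum ending at
    the current position and the best seen anywhere. The empty segment
    (sum 0) is allowed, matching the usual specification."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # extend or restart the segment
        best = max(best, ending_here)
    return best

result = max_segment_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])  # segment [4, -1, 2, 1] -> 6
```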
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)} with μ_0 ... g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
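The kind of calculation involved can be roughed out numerically: weight a truncated blackbody spectrum by the photopic sensitivity curve and scale by 683 lm/W. The sketch below substitutes a Gaussian for the tabulated CIE V(λ) curve and picks one bandpass and temperature as assumptions, so its output is only indicative of the range quoted above.

```python
import numpy as np

# Spectral luminous efficacy of a truncated blackbody:
#   K = 683 lm/W * sum(V * B) / sum(B)  over the chosen bandpass.
# V(lambda) is really the tabulated CIE photopic curve; a Gaussian centred
# at 555 nm with ~45 nm width is used here as a crude stand-in.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance vs wavelength, up to a constant factor."""
    return 1.0 / (lam ** 5 * np.expm1(h * c / (lam * kB * T)))

lam = np.linspace(400e-9, 700e-9, 2000)            # assumed visible bandpass
B = planck(lam, 5800.0)                            # roughly solar temperature
V = np.exp(-0.5 * ((lam - 555e-9) / 45e-9) ** 2)   # crude photopic stand-in
K = 683.0 * np.sum(V * B) / np.sum(B)              # lm per optical watt
```

With these assumptions K lands in the few-hundred lm/W range, consistent with the 250-370 lm/W window the paper derives with the real photopic curve.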
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on the synchronization phenomenon is thus key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
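A pairwise maximum entropy model of the kind used here can be fit exactly for small systems such as the G7 (only 2^7 states). The sketch below fits an Ising-type model p(s) ∝ exp(Σ_i h_i s_i + Σ_{i<j} J_ij s_i s_j) to synthetic binary recession/expansion states by exact enumeration and moment-matching gradient ascent; the data and system size are invented, while the fitting scheme is the standard one (the log-likelihood gradient is the moment mismatch).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
n = 4                                                     # number of "economies"
states = np.array(list(product([-1.0, 1.0], repeat=n)))   # all 2^n spin states

def model_probs(h, J):
    """Exact Boltzmann probabilities of the pairwise model (J upper-triangular)."""
    E = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    w = np.exp(E - E.max())
    return w / w.sum()

def moments(p):
    """First moments <s_i> and pair moments <s_i s_j> (i < j) under p."""
    m = p @ states
    C = np.triu(states.T @ (p[:, None] * states), 1)
    return m, C

# Synthetic "observed" moments generated from a hidden true model;
# real inputs would be binarized business cycle series.
h_true = 0.3 * rng.standard_normal(n)
J_true = np.triu(0.4 * rng.standard_normal((n, n)), 1)
m_obs, C_obs = moments(model_probs(h_true, J_true))

h, J = np.zeros(n), np.zeros((n, n))
for _ in range(20000):                                    # gradient ascent
    m, C = moments(model_probs(h, J))
    h += 0.05 * (m_obs - m)
    J += 0.05 * (C_obs - C)

m_fit, C_fit = moments(model_probs(h, J))
err = max(np.abs(m_obs - m_fit).max(), np.abs(C_obs - C_fit).max())
```

Because the log-likelihood is concave in (h, J), the moment-matching iteration converges to the unique maximum entropy model consistent with the observed moments.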
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum-space quantum gravity geometry with a metric s_{μν} and a curvature tensor P^λ_{μνρ}. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, although the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Monaco, James P; Madabhushi, Anant
2012-12-01
Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov Chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation-achieved via the incorporation of multiplicative weights into the MAP cost function-which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM
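The multiplicative-weighting idea above can be sketched in a simplified, lattice-free setting: multiply each class posterior by a class weight before taking the argmax. This is only an independent-sample analogue of the paper's MWMAP estimator, which embeds the weights in an MRF cost function; the posteriors and weights below are illustrative.

```python
import numpy as np

def weighted_map(posteriors, weights):
    """Pick, per sample, the class maximizing w_c * P(c | x): the
    multiplicative-weighting idea applied to independent samples."""
    return np.argmax(posteriors * weights, axis=1)

# Three samples, two classes; column 1 is the "positive" class posterior.
post = np.array([[0.70, 0.30],
                 [0.40, 0.60],
                 [0.55, 0.45]])

unweighted = weighted_map(post, np.array([1.0, 1.0]))  # plain MAP decisions
sensitive = weighted_map(post, np.array([1.0, 1.5]))   # favor class 1
print(unweighted, sensitive)
```

Raising the weight on one class biases borderline decisions toward it, which is exactly the sensitivity/specificity knob the abstract describes.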
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
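The Monte Carlo Metropolis procedure mentioned above can be sketched for a toy all-to-all opinion Hamiltonian H = -(J/2)(M² - n), with M = Σsᵢ. This omits the paper's spatial degrees of freedom and its specific Hamiltonian; J, β, and the system size are illustrative.

```python
import numpy as np

# Metropolis sampling of binary opinions s_i = +/-1 with a toy
# all-to-all ferromagnetic Hamiltonian (a simplification of the model).
rng = np.random.default_rng(1)
n, J, beta, steps = 50, 0.1, 1.0, 50000
s = rng.choice([-1, 1], size=n)

for _ in range(steps):
    i = rng.integers(n)
    # Energy change from flipping s[i]: dE = 2*J*s_i*(M - s_i).
    dE = 2.0 * J * s[i] * (s.sum() - s[i])
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]

print("mean opinion:", s.mean())
```

In the ordered phase (βJn > 1, as here) the chain settles into a consensus state with |mean opinion| close to 1, the analogue of the model's ordered social phase.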
Distribution of phytoplankton groups within the deep chlorophyll maximum
Latasa, Mikel
2016-11-01
The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
DRIŞCU Mariana
2014-05-01
By classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic executions by manual means, which consume much of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies with the most unpleasant consequences for the footwear producer. Thus, the customer who buys a footwear product based on the characteristics written on the product (size, width) can notice after a period that the product has flaws because of inadequate design. In order to avoid such situations, the strictest scientific criteria must be followed when designing a footwear product. The decisive step in this direction was made some time ago, as a result of powerful technical development and the massive adoption of electronic computing systems and informatics. This paper presents software for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the economic arrangement of the reference points. For this purpose, the user must test a few arrangement variants in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After testing several arrangement variants in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
Maximum hydrogen production from genetically modified microalgae biomass
Vargas, Jose; Kava, Vanessa; Ordonez, Juan
A transient mathematical model for managing microalgae derived H2 production as a source of renewable energy is developed for a well stirred photobioreactor, PBR. The model allows for the determination of microalgae and H2 mass fractions produced by the PBR in time. A Michaelis-Menten expression is proposed for modeling the rate of H2 production, which introduces an expression to calculate the resulting effect on H2 production rate after genetically modifying the microalgae. The indirect biophotolysis process was used. Therefore, an opportunity was found to optimize the aerobic to anaerobic stages time ratio of the cycle for maximum H2 production rate, i.e., the process rhythm. A system thermodynamic optimization is conducted with the model equations to find accurately the optimal system operating rhythm for maximum H2 production rate, and how wild and genetically modified species compare to each other. The maxima found are sharp, showing up to a ~60% variation in hydrogen production rate within 2 days around the optimal rhythm, which highlights the importance of system operation in such condition. Therefore, the model is expected to be useful for design, control and optimization of H2 production. Brazilian National Council of Scientific and Technological Development, CNPq (project 482336/2012-9).
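The Michaelis-Menten form proposed above for the H2 production rate is r(S) = r_max·S/(K_m + S), which rises roughly linearly at low S and saturates toward r_max. The parameter values below are illustrative placeholders, not fitted values from the paper's photobioreactor model.

```python
# Michaelis-Menten rate law: r(S) = r_max * S / (K_m + S).
# r_max, K_m, and the sample concentrations are illustrative only.
def h2_rate(S, r_max=1.0, K_m=0.5):
    """H2 production rate at substrate/biomass concentration S."""
    return r_max * S / (K_m + S)

for S in (0.1, 0.5, 2.0, 10.0):
    print(f"S = {S:5.1f} -> rate = {h2_rate(S):.3f}")
```

At S = K_m the rate is exactly half of r_max, which is the usual operational definition of the Michaelis constant.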
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
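The key primitive in a projected-gradient tomography scheme is the projection of a Hermitian iterate back onto the set of density matrices (positive semidefinite, unit trace). A standard way to do this, sketched below under our own choice of algorithm, is to keep the eigenvectors and project the eigenvalues onto the probability simplex with the usual sort-based method.

```python
import numpy as np

def project_to_density_matrix(H):
    """Project a Hermitian matrix onto {rho : rho >= 0, tr(rho) = 1}
    by sort-based simplex projection of its eigenvalues."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]                     # eigenvalues, descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    idx = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    shift = (1.0 - css[idx]) / (idx + 1)
    w_proj = np.maximum(w + shift, 0.0)      # clipped, sums to 1
    return (V * w_proj) @ V.conj().T

A = np.array([[1.2, 0.3], [0.3, -0.4]])      # not a valid state
rho = project_to_density_matrix(A)
print(np.trace(rho), np.linalg.eigvalsh(rho))
```

Each accelerated-gradient step then alternates a gradient move on the likelihood with this projection, which is how the quantum constraints are accommodated.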
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed the highest mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio, and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on total grip force, individual finger forces, and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm, and 80 mm, respectively. Thus, the 50-mm grip span for pliers might be recommended, as it provides maximum exertion in gripping tasks as well as lower maximum-cutting force ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Gastric stimulation for weight loss
Meir Mizrahi; Ami Ben Ya'acov; Yaron Ilan
2012-01-01
The prevalence of obesity is growing to epidemic proportions, and there is clearly a need for minimally invasive therapies with few adverse effects that allow for sustained weight loss. Behavior and lifestyle therapy are safe treatments for obesity in the short term, but the durability of the weight loss is limited. Although promising obesity drugs are in development, the currently available drugs lack efficacy or have unacceptable side effects. Surgery leads to long-term weight loss, but it is associated with morbidity and mortality. Gastric electrical stimulation (GES) has received increasing attention as a potential tool for treating obesity and gastrointestinal dysmotility disorders. GES is a promising, minimally invasive, safe, and effective method for treating obesity. External gastric pacing is aimed at alteration of the motility of the gastrointestinal tract in a way that will alter absorption due to alteration of transit time. In addition, data from animal models and preliminary data from human trials suggest a role for the gut-brain axis in the mechanism of GES. This may involve alteration of secretion of hormones associated with hunger or satiety. Patient selection for gastric stimulation therapy seems to be an important determinant of the treatment's outcome. Here, we review the current status, potential mechanisms of action, and possible future applications of gastric stimulation for obesity.
A new 3-cylinder 1.0L engine development for light weight and good fuel economy
Nara, T.; Kusunoki, T.; Sugita, N. [Toyota Motor Corp., Aichi (Japan); Mishima, E. [Daihatsu Motor Co. LTD., Osaka (Japan)
2005-07-01
In order to meet the requirement for CO2 emission reduction, Toyota and Daihatsu have jointly developed the 1KR-FE, a new 3-cylinder 1.0L gasoline engine. Besides excellent fuel economy, the development targets for this new engine were top-of-class performance, light weight, and compactness. In addition to the friction reduction obtained by using only 3 cylinders instead of 4, benefits were gained by using thin piston rings with low tension. A new type of resin coating on the piston skirt also contributes to lower friction. The result is 109 g/km of CO2 for the new Toyota Aygo. Thanks to optimisation of the combustion chamber design and the introduction of VVT-i, a maximum power output of 50 kW and a maximum torque of 93 Nm were achieved, with a low-speed torque of 85 Nm already available at 2000 rpm. All intake air system components are made of plastic to reduce engine weight and to make highly integrated packaging possible. A newly developed cast iron liner with small wall thickness allows only 7 mm spacing between cylinder bores, contributing to the compactness of the engine. To reduce the vibration level, the weight of moving parts was reduced, the rigidity of the cylinder block was increased, and a crankshaft with 3 counterweights was developed. Improved engine mountings combined with a torque rod system minimise idle speed vibration.
7 CFR 3560.202 - Establishing rents and utility allowances.
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Establishing rents and utility allowances. 3560.202... Establishing rents and utility allowances. (a) General. Rents and utility allowances for rental units in Agency... Agency. (b) Agency approval. All rents and utility allowances set by borrowers are subject to...
ORDERED WEIGHTED DISTANCE MEASURE
Zeshui XU; Jian CHEN
2008-01-01
The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is the generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance, and the min distance. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator, and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure very suitable for use in many fields, including group decision making, medical diagnosis, data mining, and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach and illustrate it with a numerical example.
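The defining move of the OWD measure is that deviations are sorted before weighting, so weights apply to ranked deviations rather than fixed coordinates. A minimal sketch of that idea (our own simplified formulation, with illustrative data):

```python
import numpy as np

def owd(x, y, weights, p=1.0):
    """Ordered weighted distance: |x_i - y_i|^p sorted in descending order,
    then weighted and aggregated; the weights act on ranked deviations."""
    d = np.sort(np.abs(np.asarray(x) - np.asarray(y)) ** p)[::-1]
    return float(np.dot(weights, d) ** (1.0 / p))

x, y = [0.2, 0.9, 0.5], [0.4, 0.1, 0.5]
n = len(x)
# Equal weights with p = 1 recover the normalized Hamming distance ...
print(owd(x, y, np.full(n, 1.0 / n), p=1.0))
# ... while w = (1, 0, 0) recovers the max distance.
print(owd(x, y, np.array([1.0, 0.0, 0.0]), p=1.0))
```

Shifting weight toward the front of the sorted list intensifies the influence of large deviations; shifting it toward the back relieves it, as the abstract describes.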
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China); Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schild, Axel [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schmidt, Burkhard, E-mail: burkhard.schmidt@fu-berlin.de [Institut für Mathematik, Freie Universität Berlin, Arnimallee 6, 14195 Berlin (Germany); Yang, Yonggang, E-mail: ygyang@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China)
2014-10-17
Highlights: • Coherent tunneling in one-dimensional symmetric double well potentials. • Potentials for analytical estimates in the deep tunneling regime. • Maximum velocities scale as the square root of the ratio of barrier height and mass. • In chemical physics maximum tunneling velocities are in the order of a few km/s. - Abstract: We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to several prototypical molecular models of non-cyclic and cyclic tunneling, including ammonia inversion, Cope rearrangement of semibullvalene, torsions of molecular fragments, and rotational tunneling in strong laser fields. Typical maximum velocities and angular velocities are in the order of a few km/s and from 10 to 100 THz for our non-cyclic and cyclic systems, respectively, much faster than time-averaged velocities. Even for the more extreme case of an electron tunneling through a barrier of height of one Hartree, the velocity is only about one percent of the speed of light. Estimates of the corresponding time scales for
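The quoted scaling (maximum velocity as the square root of barrier height over mass) can be checked numerically. We assume the form v ≈ √(2V₀/m) purely for an order-of-magnitude estimate; the paper derives the precise prefactor.

```python
import math

# Order-of-magnitude check of v_max ~ sqrt(2*V0/m); the factor of 2 is our
# assumption, not the paper's derived prefactor.
def v_max(V0_joule, mass_kg):
    return math.sqrt(2.0 * V0_joule / mass_kg)

eV = 1.602e-19
amu = 1.661e-27

# Molecular case: ~0.25 eV barrier, ~3 amu effective mass (illustrative).
v_mol = v_max(0.25 * eV, 3 * amu)
# Electron tunneling through a 1 Hartree (27.2 eV) barrier.
v_el = v_max(27.2 * eV, 9.109e-31)

print(f"molecular: {v_mol/1e3:.1f} km/s, electron: {v_el/2.998e8:.3%} of c")
```

The molecular estimate lands at a few km/s and the electron case at about one percent of the speed of light, consistent with the numbers quoted in the abstract.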
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. Afterwards, we compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation applying recent developments of the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds to the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
Speech processing using maximum likelihood continuity mapping
Hogden, John E. (Santa Fe, NM)
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Blind Joint Maximum Likelihood Channel Estimation and Data Detection for SIMO Systems
Sheng Chen; Xiao-Chen Yang; Lei Chen; Lajos Hanzo
2007-01-01
A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection of single-input multiple-output (SIMO) systems. The joint ML optimisation over channel and data is decomposed into an iterative optimisation loop. An efficient global optimisation algorithm called the repeated weighted boosting search is employed at the upper level to optimally identify the unknown SIMO channel model, and the Viterbi algorithm is used at the lower level to produce the maximum likelihood sequence estimation of the unknown data sequence. A simulation example is used to demonstrate the effectiveness of this joint ML optimisation scheme for blind adaptive SIMO systems.
Study on the Hungarian algorithm for the maximum likelihood data association problem
Wang Jianguo; He Peikun; Cao Wei
2007-01-01
A specialized Hungarian algorithm was developed here for the maximum likelihood data association problem, with two implementation versions due to the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem. Its duality and optimality conditions are given. The Hungarian algorithm is presented with its computational steps, data structure, and computational complexity. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. The computational results show that the HT algorithm is slightly faster than the HF algorithm and that both are superior to the classic Munkres algorithm.
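The reduction above casts data association as bipartite weighted matching: assign each track to a measurement so the summed log-likelihood score is maximal. For clarity this sketch solves a tiny instance by brute force over permutations; a Hungarian-type algorithm reaches the same optimum in O(n³). The score matrix is illustrative.

```python
import itertools

def best_assignment(score):
    """Brute-force maximum-weight bipartite matching: try every permutation
    of columns (measurements) against rows (tracks)."""
    n = len(score)
    best, best_perm = float("-inf"), None
    for perm in itertools.permutations(range(n)):
        total = sum(score[i][j] for i, j in enumerate(perm))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

# Illustrative log-likelihood scores for 3 tracks x 3 measurements.
score = [[-1.0, -4.0, -6.0],
         [-3.0, -2.0, -5.0],
         [-6.0, -3.0, -1.5]]
print(best_assignment(score))
```

False alarms and missed detections, which motivate the paper's two implementation versions, would appear here as extra dummy rows or columns in the score matrix.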
Approximate shortest homotopic paths in weighted regions
Cheng, Siuwing
2012-02-01
A path P between two points s and t in a polygonal subdivision T with obstacles and weighted regions defines a class of paths that can be deformed to P without passing over any obstacle. We present the first algorithm that, given P and a relative error tolerance ε ∈ (0, 1), computes a path from this class with cost at most 1 + ε times the optimum. The running time is O(h³/ε² kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and the ratio of the maximum region weight to the minimum region weight. © 2012 World Scientific Publishing Company.
Approximate Shortest Homotopic Paths in Weighted Regions
Cheng, Siu-Wing
2010-01-01
Let P be a path between two points s and t in a polygonal subdivision T with obstacles and weighted regions. Given a relative error tolerance ε ∈ (0, 1), we present the first algorithm to compute a path between s and t that can be deformed to P without passing over any obstacle and whose cost is within a factor 1 + ε of the optimum. The running time is O(h³/ε² kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and the ratio of the maximum region weight to the minimum region weight. © 2010 Springer-Verlag.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Andrea Trucco
2015-06-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays.
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
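The directivity figure at the center of this abstract can be evaluated numerically. The sketch below is an illustration under the standard assumptions of isotropic sensors in spherically isotropic noise, not the authors' code; the names `directivity`, `spacing_wl`, and `theta0_deg` are chosen here for clarity:

```python
import numpy as np

def directivity(weights, spacing_wl, theta0_deg):
    """Directivity of an equally spaced linear array of isotropic sensors.

    weights    -- real or complex weight vector
    spacing_wl -- inter-element spacing in wavelengths (d / lambda)
    theta0_deg -- steering angle: 90 is broadside, 0 is end-fire
    """
    n = np.arange(len(weights))
    # Steering (replica) vector toward theta0.
    v = np.exp(2j * np.pi * spacing_wl * n * np.cos(np.radians(theta0_deg)))
    # Spherically isotropic noise covariance: B[p, q] = sinc(2 d (p - q) / lambda),
    # where np.sinc(x) = sin(pi x) / (pi x).
    B = np.sinc(2.0 * spacing_wl * (n[:, None] - n[None, :]))
    w = np.asarray(weights, dtype=complex)
    return abs(np.vdot(w, v)) ** 2 / np.real(np.vdot(w, B @ w))
```

For half-wavelength spacing the noise covariance reduces to the identity, so a uniformly weighted broadside array of N elements attains directivity N, the textbook result; for spacings below half a wavelength, phase-matched end-fire steering exceeds uniform broadside, which is the regime where the oversteering question above arises.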
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
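MALCOM's continuity-map model is described only at a high level in this record. As a hedged stand-in, the sketch below illustrates the same underlying idea, scoring categorical sequences by likelihood under a model fit to typical data, using a Laplace-smoothed bigram model; all names are invented for illustration and this is not the MALCOM algorithm:

```python
from collections import Counter
from math import log

def train_bigram(sequences, alpha=1.0):
    """Count transitions over categorical sequences (Laplace-smoothed)."""
    pairs, starts, vocab = Counter(), Counter(), set()
    for seq in sequences:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            pairs[(a, b)] += 1
            starts[a] += 1
    return pairs, starts, vocab, alpha

def avg_loglik(model, seq):
    """Average log-probability of a sequence's transitions; low values
    flag sequences unlike those in the training set."""
    pairs, starts, vocab, alpha = model
    v = len(vocab)
    terms = [log((pairs[(a, b)] + alpha) / (starts[a] + alpha * v))
             for a, b in zip(seq, seq[1:])]
    return sum(terms) / len(terms)
```

Sequences with unusually low average log-likelihood would be the analogue of the anomalous medical histories forwarded for review in the experiment above.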
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these constraints are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a buffer of minimum 346 bits and maximum 433 bits is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation is made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Weight diagram construction of Lax operators
Carbon, S.L.; Piard, E.J.
1991-10-01
We review and expand methods introduced in our previous paper. It is proved that cyclic weight diagrams corresponding to representations of affine Lie algebras allow one to construct the associated Lax operator. The resultant Lax operator is in the Miura-like form and generates the modified KdV equations. The algorithm is extended to the super-symmetric case.
An interactive programme for weighted Steiner trees
Zanchetta do Nascimento, Marcelo; Ramos Batista, Valério; Raffa Coimbra, Wendhel
2015-01-01
We introduce a fully programmed code with a supervised method for generating weighted Steiner trees. Our choice of the programming language, and the use of well-known theorems from Geometry and Complex Analysis, allowed this method to be implemented with only 764 lines of effective source code. This eases the understanding and handling of this beta version for future developments.
Englberger, L.
1999-01-01
A programme of weight loss competitions and associated activities in Tonga, intended to combat obesity and the noncommunicable diseases linked to it, has popular support and the potential to effect significant improvements in health. PMID:10063662
... Some kids and teens are underweight because of eating disorders, like anorexia or bulimia, which ... weight. People from different races, ethnic groups, and nationalities tend to have different body fat ...
Menichetti, Giulia; Panzarasa, Pietro; Mondragón, Raúl J; Bianconi, Ginestra
2013-01-01
One of the most important challenges in network science is to quantify the information encoded in complex network structures. Disentangling randomness from organizational principles is even more demanding when networks have a multiplex nature. Multiplex networks are multilayer systems of $N$ nodes that can be linked in multiple interacting and co-evolving layers. In these networks, relevant information might not be captured if the single layers were analyzed separately. Here we demonstrate that such partial analysis of layers fails to capture significant correlations between weights and topology of complex multiplex networks. To this end, we study two weighted multiplex co-authorship and citation networks involving the authors included in the American Physical Society. We show that in these networks weights are strongly correlated with multiplex structure, and provide empirical evidence in favor of the advantage of studying weighted measures of multiplex networks, such as multistrength and the inverse multipa...
Maximum SINR Synchronization Strategies in Multiuser Filter Bank Schemes
Pecile Francesco
2010-01-01
Full Text Available We consider synchronization in a multiuser filter bank uplink system with single-user detection. Contrary to what intuition would suggest, perfect user synchronization is not the optimal choice. To maximize performance, the synchronization parameters have to be chosen to maximize the signal-to-interference-plus-noise ratio (SINR) at each equalizer subchannel output. However, the resulting filter bank receiver structure becomes complex. Therefore, we consider two simplified synchronization metrics that are based on the maximization of the average SINR of a given user or the aggregate SINR of all users. Furthermore, a relaxation of the aggregate SINR metric allows implementing an efficient multiuser analysis filter bank. This receiver deploys two fractionally spaced analysis stages. Each analysis stage is efficiently implemented via a polyphase filter bank, followed by an extended discrete Fourier transform that allows the user frequency offsets to be partly compensated. Then, sub-channel maximum SINR equalization is used. We discuss the application of the proposed solution to Orthogonal Frequency Division Multiple Access (OFDMA) and multiuser Filtered Multitone (FMT) systems.
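For a single subchannel, the SINR-maximizing linear weights have a well-known closed form. The sketch below is a generic illustration of this generalized Rayleigh quotient result, not the paper's filter bank receiver; `h` stands for the desired user's signature vector and `R` for the interference-plus-noise covariance, both assumed known:

```python
import numpy as np

def max_sinr_weights(h, R):
    """Maximizer of SINR(w) = |w^H h|^2 / (w^H R w).

    The optimum is proportional to R^{-1} h; since scaling w does not
    change the SINR, the result is normalized to unit norm.
    """
    w = np.linalg.solve(np.asarray(R, dtype=complex), np.asarray(h, dtype=complex))
    return w / np.linalg.norm(w)

def sinr(w, h, R):
    """SINR achieved by a given weight vector w."""
    w = np.asarray(w, dtype=complex)
    h = np.asarray(h, dtype=complex)
    return abs(np.vdot(w, h)) ** 2 / np.real(np.vdot(w, np.asarray(R) @ w))
```

A quick self-check: the achieved optimum equals h^H R^{-1} h, and no other weight vector can exceed it.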
tmle : An R Package for Targeted Maximum Likelihood Estimation
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
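For orientation, the plain substitution (G-computation) estimator that TMLE starts from, and then refines with a targeting step, can be written in a few lines. The sketch below is a hedged illustration with a simple linear outcome model; it is not the tmle package's algorithm (which adds the targeting update and supports data-adaptive fits), and all names are chosen here:

```python
import numpy as np

def substitution_ate(y, a, x):
    """Plug-in estimate of the additive treatment effect E[Y(1)] - E[Y(0)].

    Fits a linear outcome regression of y on (1, a, x), then averages the
    model's predictions with treatment a set to 1 versus set to 0.
    """
    y = np.asarray(y, dtype=float)
    a = np.asarray(a, dtype=float)
    x = np.atleast_2d(np.asarray(x, dtype=float).T).T  # shape (n, p)
    X = np.column_stack([np.ones_like(a), a, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1.0, 0.0
    return float(np.mean(X1 @ beta) - np.mean(X0 @ beta))
```

With a correctly specified linear model this recovers the treatment coefficient exactly; TMLE's contribution is to retain good behavior when the outcome model is misspecified, by combining it with a treatment (propensity) model.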
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
77 FR 76169 - Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill
2012-12-26
... AFFAIRS Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill AGENCY: Department... of the increase in the Post-9/11 GI Bill maximum tuition and fee amounts payable and the increase in.... SUPPLEMENTARY INFORMATION: For the 2011-2012 academic year, the Post-9/11 GI Bill allowed VA to pay the actual...
Vokřínek, Lukáš
2012-01-01
Let V be a cofibrantly generated closed symmetric monoidal model category and M a model V-category. We say that a weighted colimit W*D of a diagram D weighted by W is a homotopy weighted colimit if the diagram D is pointwise cofibrant and the weight W is cofibrant in the projective model structure on [C^op,V]. We then proceed to describe such homotopy weighted colimits through homotopy tensors and ordinary (conical) homotopy colimits. This is a homotopy version of the well known isomorphism W*D=\\int^C(W\\tensor D). After proving this homotopy decomposition in general we study in some detail a few special cases. For simplicial sets tensors may be replaced up to weak equivalence by conical homotopy colimits and thus the weighted homotopy colimits have no added value. The situation is completely different for model dg-categories where the desuspension cannot be constructed from conical homotopy colimits. In the last section we characterize those V-functors inducing a Quillen equivalence on the enriched presheaf c...
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Maximum likelihood based classification of electron tomographic data.
Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan
2011-01-01
Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.
27 CFR 20.24 - Allowance of claims.
2010-04-01
... OF THE TREASURY LIQUORS DISTRIBUTION AND USE OF DENATURED ALCOHOL AND RUM Administrative Provisions Authorities § 20.24 Allowance of claims. The appropriate TTB officer is authorized to allow claims for...
27 CFR 22.23 - Allowance of claims.
2010-04-01
... OF THE TREASURY LIQUORS DISTRIBUTION AND USE OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.23 Allowance of claims. The appropriate TTB officer is authorized to allow claims for...
LI Dan-ling; SHEN Frank; YIN Yue; PENG Jun-xiang; CHEN Ping-yan
2013-01-01
Background: Most indices for evaluating a diagnostic test can be expressed as functions of sensitivity (SEN) and specificity (SPE). Practically, all existing methods suffer from the inability to weight sensitivity and specificity relative to their importance. In this paper, we developed a novel index, the weighted Youden index, that allows the Youden index to be a combination of sensitivity and specificity with user-defined weights. Methods: The weighted Youden index Jw is defined as Jw = 2(w×SEN + (1−w)×SPE) − 1 (0 ≤ w ≤ 1). It has three properties: (1) the sum of the weights attached to sensitivity and specificity should be equal to 1; (2) the range of Jw should be within [−1, 1], which is the range of the Youden index J; (3) Jw should be equal to J when sensitivity and specificity have equal weights. According to the central limit theorem, we obtain the standard error of Jw and propose a statistical inference method to compare two weighted Youden indices. The monotonicity of the test statistic was discussed. Results: An example of comparing two diagnostic tests for pheochromocytoma was used to demonstrate the weighted Youden index method. The weighted Youden index, the confidence interval for each test, and the hypothesis test for comparing two independent diagnostic tests were presented. Assigning the weights is essential to the weighted Youden index approach. Conclusion: The weighted Youden index can broaden its applications in diagnostic test development and motivate further research in weighting sensitivity and specificity explicitly.
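The index defined above is straightforward to compute. A minimal sketch, with function and argument names chosen here rather than taken from the paper:

```python
def weighted_youden(sen, spe, w=0.5):
    """Weighted Youden index: Jw = 2*(w*SEN + (1 - w)*SPE) - 1.

    w = 0.5 recovers the classical Youden index J = SEN + SPE - 1,
    and Jw lies in [-1, 1] whenever sen and spe lie in [0, 1].
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight w must lie in [0, 1]")
    return 2.0 * (w * sen + (1.0 - w) * spe) - 1.0
```

Choosing w > 0.5 rewards sensitivity more heavily, e.g. for a screening test where a missed case is costlier than a false alarm; w < 0.5 does the opposite.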
Weight Loss Nutritional Supplements
Eckerson, Joan M.
Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generates billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition that are similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or caffeine and aspirin (i.e., ECA stack) is effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber, glucomannan, also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.
2010-10-01
... low-income children in families with income from 101 to 150 percent of the FPL. 457.555 Section 457... low-income children in families with income from 101 to 150 percent of the FPL. (a) Non-institutional services. For targeted low-income children whose family income is from 101 to 150 percent of the FPL,...
胡吉磊; 徐建国; 姚耀明; 宋刚; 吴欣
2011-01-01
Aluminum-clad steel strand offers high tension and small sag, making it attractive for large-crossing spans in transmission line design, but its relatively small transmission capacity has limited its range of use. Through experimental verification, the maximum allowable operating temperature of aluminum-clad steel strand was raised to 130 °C, greatly increasing its transmission capacity and providing a new approach for the design of future large-crossing lines.
Perry, J. L.
2016-01-01
As the Space Station Freedom program transitioned to become the International Space Station (ISS), uncertainty existed concerning the performance capabilities for U.S.- and Russian-provided trace contaminant control (TCC) equipment. In preparation for the first dialogue between NASA and Russian Space Agency personnel in Moscow, Russia, in late April 1994, an engineering analysis was conducted to serve as a basis for discussing TCC equipment engineering assumptions as well as relevant assumptions on equipment offgassing and cabin air quality standards. The analysis presented was conducted as part of the efforts to integrate Russia into the ISS program via the early ISS Multilateral Medical Operations Panel's Air Quality Subgroup deliberations. This analysis served as a basis for technical deliberations that established a framework for TCC system design and operations among the ISS program's international partners that has been instrumental in successfully managing the ISS common cabin environment.
In-medium dispersion relations of charmonia studied by the maximum entropy method
Ikeda, Atsuro; Asakawa, Masayuki; Kitazawa, Masakiyo
2017-01-01
We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. We also find that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at T ≃1.6 Tc for all the channels we analyze.
In-medium dispersion relations of charmonia studied by maximum entropy method
Ikeda, Atsuro; Kitazawa, Masakiyo
2016-01-01
We study in-medium spectral properties of charmonia in the vector and pseudoscalar channels at nonzero momenta on quenched lattices, especially focusing on their dispersion relation and the weight of the peak. We measure the lattice Euclidean correlation functions with nonzero momenta on the anisotropic quenched lattices and study the spectral functions with the maximum entropy method. The dispersion relations of charmonia and the momentum dependence of the weight of the peak are analyzed with the maximum entropy method together with the errors estimated probabilistically in this method. We find a significant increase of the masses of charmonia in medium. It is also found that the functional form of the charmonium dispersion relations is not changed from that in the vacuum within the error even at $T\simeq1.6T_c$ for all the channels we analyzed.
Energy Aware Scheduling for Weighted Completion Time and Weighted Tardiness
Carrasco, Rodrigo A; Stein, Cliff
2011-01-01
The ever increasing adoption of mobile devices with limited energy storage capacity, on the one hand, and more awareness of the environmental impact of massive data centres and server pools, on the other hand, have both led to an increased interest in energy management algorithms. The main contribution of this paper is to present several new constant factor approximation algorithms for energy aware scheduling problems where the objective is to minimize weighted completion time plus the cost of the energy consumed, in the one machine non-preemptive setting, while allowing release dates and deadlines. Unlike previously known algorithms, these new algorithms can handle general job-dependent energy cost functions, extending the application of these algorithms to settings outside the typical CPU-energy one. These new settings include problems where in addition, or instead, of energy costs we also have maintenance costs, wear and tear, replacement costs, etc., which in general depend on the speed at which the machine r...
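As background for the objective above: without energy costs, release dates, or deadlines, minimizing total weighted completion time on one machine is solved exactly by Smith's WSPT rule, the classical baseline that these approximation algorithms generalize. A sketch, with names chosen here:

```python
def wspt(jobs):
    """Smith's rule (WSPT): sequence jobs by weight/processing-time ratio,
    largest first, to minimize the sum of w_j * C_j on a single machine
    with no release dates, deadlines, or preemption, at unit speed.

    jobs -- iterable of (processing_time, weight) pairs
    """
    order = sorted(jobs, key=lambda pw: pw[1] / pw[0], reverse=True)
    t = total = 0
    for p, w in order:
        t += p            # completion time C_j of this job
        total += w * t    # accumulate weighted completion time
    return order, total
```

Once job-dependent energy cost functions, release dates, and deadlines enter the objective, this simple exchange-argument optimality breaks down, which is what motivates the constant-factor approximations in the paper.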
50 CFR 665.127 - Allowable gear and gear restrictions.
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allowable gear and gear restrictions. 665... Fisheries § 665.127 Allowable gear and gear restrictions. (a) American Samoa coral reef ecosystem MUS may be taken only with the following allowable gear and methods: (1) Hand harvest; (2) Spear; (3) Slurp gun;...
50 CFR 665.427 - Allowable gear and gear restrictions.
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allowable gear and gear restrictions. 665... Archipelago Fisheries § 665.427 Allowable gear and gear restrictions. (a) Mariana coral reef ecosystem MUS may be taken only with the following allowable gear and methods: (1) Hand harvest; (2) Spear; (3)...
50 CFR 665.227 - Allowable gear and gear restrictions.
2010-10-01
... 50 Wildlife and Fisheries 9 2010-10-01 2010-10-01 false Allowable gear and gear restrictions. 665... Fisheries § 665.227 Allowable gear and gear restrictions. (a) Hawaii coral reef ecosystem MUS may be taken only with the following allowable gear and methods: (1) Hand harvest; (2) Spear; (3) Slurp gun;...
14 CFR 151.125 - Allowable advance planning costs.
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Allowable advance planning costs. 151.125... (CONTINUED) AIRPORTS FEDERAL AID TO AIRPORTS Rules and Procedures for Advance Planning and Engineering Proposals § 151.125 Allowable advance planning costs. (a) The United States' share of the allowable costs of...
24 CFR 891.785 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... Handicapped Families and Individuals-Section 162 Assistance § 891.785 Adjustment of utility allowances. In... adjustment of utility allowances provided in § 891.440 apply....
24 CFR 891.440 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... Project Management § 891.440 Adjustment of utility allowances. This section shall apply to projects funded... submit an analysis of any utility allowances applicable. Such data as changes in utility rates and...
24 CFR 886.326 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... utility allowances. When the owner requests HUD approval of an adjustment in Contract Rents under § 886.312, an analysis of the project's Utility Allowances must be included. Such data as changes in...
24 CFR 880.610 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... Management § 880.610 Adjustment of utility allowances. In connection with annual and special adjustments of contract rents, the owner must submit an analysis of the project's Utility Allowances. Such data as...
24 CFR 886.126 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... utility allowances. When the owner requests HUD approval of adjustment in Contract Rents under § 886.112, an analysis of the project's Utility Allowances must be included. Such data as changes in...
24 CFR 884.220 - Adjustment of utility allowances.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Adjustment of utility allowances... Adjustment of utility allowances. In connection with annual and special adjustments of contract rents, the owner must submit an analysis of the project's Utility Allowances. Such data as changes in utility...
45 CFR 1801.43 - Allowance for books.
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Allowance for books. 1801.43 Section 1801.43... HARRY S. TRUMAN SCHOLARSHIP PROGRAM Payments to Finalists and Scholars § 1801.43 Allowance for books. The cost allowance for a Scholar's books is $1000 per year, or such higher amount published on...
20 CFR 606.2 - Total credits allowable.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Total credits allowable. 606.2 Section 606.2 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TAX CREDITS UNDER THE... credits allowable. The total credits allowed to an employer subject to the tax imposed by section 3301 of...
46 CFR 154.412 - Cargo tank corrosion allowance.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Cargo tank corrosion allowance. 154.412 Section 154.412... Containment Systems § 154.412 Cargo tank corrosion allowance. A cargo tank must be designed with a corrosion...) carries a cargo that corrodes the tank material. Note: Corrosion allowance for independent tank type C...
46 CFR 54.25-5 - Corrosion allowance.
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Corrosion allowance. 54.25-5 Section 54.25-5... Construction With Carbon, Alloy, and Heat Treated Steels § 54.25-5 Corrosion allowance. The corrosion allowance must be as required in 46 CFR 54.01-35. ...
Miyaguchi, Kazuyoshi; Demura, Shinichi
2008-11-01
This study aimed to examine the relationships between muscle power output using the stretch-shortening cycle (SSC) and eccentric maximum strength under elbow flexion. Eighteen young adult males pulled up a constant light load (2 kg) by ballistic elbow flexion under the following two preliminary conditions: 1) the static relaxed muscle state (SR condition), and 2) using the SSC with countermovement (SSC condition). Muscle power was determined from the product of the pulling velocity and the load mass by a power measurement instrument that adopted the weight-loading method. We assumed the pulling velocity to be the subject's muscle power parameters as a matter of convenience, because we used a constant load. The following two parameters were selected in reference to a previous study: 1) peak velocity (m·s⁻¹) (peak power) and 2) 0.1-second velocity during concentric contraction (m·s⁻¹) (initial power). Eccentric maximum strength by elbow flexion was measured by a handheld dynamometer. Initial power produced in the SSC condition was significantly larger than that in the SR condition. Eccentric maximum strength showed a significant and high correlation (r = 0.70) with peak power in the SSC condition but not in the SR condition. Eccentric maximum strength showed insignificant correlations with initial power in both conditions. In conclusion, it was suggested that eccentric maximum strength is associated with peak power in the SSC condition, but the contribution of the eccentric maximum strength to the SSC potentiation (initial power) may be low.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
[No author listed]
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
A model for the dynamics of human weight cycling
Albert Goldbeter
2006-03-01
The resolution to lose weight by cognitive restraint of nutritional intake often leads to repeated bouts of weight loss and regain, a phenomenon known as weight cycling or "yo-yo dieting". A simple mathematical model for weight cycling is presented. The model is based on a feedback of psychological nature by which a subject decides to reduce dietary intake once a threshold weight is exceeded. The analysis of the model indicates that sustained oscillations in body weight occur in a parameter range bounded by critical values. Only outside this range can body weight reach a stable steady state. The model provides a theoretical framework that captures key facets of weight cycling and suggests ways to control the phenomenon. The view that weight cycling represents self-sustained oscillations has indeed specific implications. In dynamical terms, to bring weight cycling to an end, parameter values should change in such a way as to induce the transition of body weight from sustained oscillations around an unstable steady state to a stable steady state. Maintaining weight under a critical value should prevent weight cycling and allow body weight to stabilize below the oscillatory range.
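The feedback idea can be sketched with a toy hysteresis model (an illustrative sketch, not Goldbeter's actual equations): dietary intake switches low when weight crosses an upper threshold and recovers only below a lower one, which is enough to sustain oscillations.

```python
# Toy hysteresis model of weight cycling (all parameter values arbitrary):
# restraint starts above an upper weight threshold and is abandoned below a
# lower one, producing a self-sustained limit cycle in body weight.
def simulate(days=2000, dt=0.1):
    W, dieting, trace = 70.0, False, []
    for _ in range(int(days / dt)):
        if W > 80.0:
            dieting = True                 # start restraint above threshold
        elif W < 72.0:
            dieting = False                # abandon diet below lower bound
        intake = 1.5 if dieting else 2.5   # low vs normal intake
        W += dt * (intake - 0.028 * W)     # linear gain/loss dynamics (Euler step)
        trace.append(W)

    return trace

trace = simulate()
tail = trace[len(trace) // 2:]             # discard the transient
print(round(min(tail), 1), round(max(tail), 1))
```

Weight keeps cycling between roughly the two thresholds; narrowing the hysteresis band or lowering the unrestrained equilibrium below the upper threshold kills the oscillation, echoing the paper's point that the cycle exists only in a bounded parameter range.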
Factors affecting the carbon allowance market in the US
Kim, Hyun Seok; Koo, Won W. [Center for Agricultural Policy and Trade Studies, Department of Agribusiness and Applied Economics, North Dakota State University, Dept 7610, P.O. Box 6050, Fargo, ND 58103-6050 (United States)
2010-04-15
The US carbon allowance market has a different character and price determination process from the EU ETS market, since emitting installations participate voluntarily in the emission trading scheme. This paper examines factors affecting the US carbon allowance market. An autoregressive distributed lag model is used to examine the short- and long-run relationships between the US carbon allowance market and its determinant factors. In the long run, the price of coal is the main factor in the determination of carbon allowance trading. In the short run, on the other hand, changes in crude oil and natural gas prices, as well as the coal price, have significant effects on the carbon allowance market. (author)
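The autoregressive distributed lag idea can be sketched with ordinary least squares on lagged series. The data below are synthetic stand-ins for the carbon allowance and coal price series, generated with known coefficients so the regression has something to recover; they are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative only):
# carbon_t = 0.5*carbon_{t-1} + 0.3*coal_{t-1} + noise.
T = 500
coal = rng.normal(size=T).cumsum() * 0.1 + 10.0   # slowly drifting price level
carbon = np.zeros(T)
for t in range(1, T):
    carbon[t] = 0.5 * carbon[t - 1] + 0.3 * coal[t - 1] + rng.normal(scale=0.1)

# ARDL(1,1) by OLS: regress carbon_t on its own lag and the lagged coal price.
y = carbon[1:]
X = np.column_stack([np.ones(T - 1), carbon[:-1], coal[:-1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
const, ar1, coal_lag = beta
print(round(ar1, 2), round(coal_lag, 2))
```

The estimated lag coefficients land near the generating values (0.5 and 0.3); a real ARDL study would add more lags and bounds tests for the long-run relationship.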
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping which allows regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature and not precipitation were the variables most important for model production. This fine-scale result highlights the importance of habitat suitability modelling as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species’ distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
lowest oxygen requirement in the first 12 hours (which are two components of the CRIB score), and maximum partial arterial carbon dioxide pressure (PaCO2) in the first 72 hours. ..... This study also brings into question the appropriateness.
The directed flow maximum near cs = 0
Brachmann, J.; Dumitru, A.; Stöcker, H.; Greiner, W.
2000-07-01
We investigate the excitation function of quark-gluon plasma formation and of directed in-plane flow of nucleons in the energy range of the BNL-AGS and for the E_kin^Lab = 40 AGeV Pb + Pb collisions performed recently at the CERN-SPS. We employ the three-fluid model with dynamical unification of kinetically equilibrated fluid elements. Within our model with first-order phase transition at high density, droplets of QGP coexisting with hadronic matter are produced already at BNL-AGS energies, E_kin^Lab ≃ 10 AGeV. A substantial decrease of the isentropic velocity of sound, however, requires higher energies, E_kin^Lab ≃ 40 AGeV. We show the effect on the flow of nucleons in the reaction plane. According to our model calculations, kinematic requirements and EoS effects work hand-in-hand at E_kin^Lab = 40 AGeV to allow the observation of the dropping velocity of sound via an increase of the directed flow around midrapidity as compared to top BNL-AGS energy.
Light weight phosphate cements
Wagh, Arun S.; Natarajan, Ramkumar,; Kahn, David
2010-03-09
A sealant having a specific gravity in the range of from about 0.7 to about 1.6 for heavy oil and/or coal bed methane fields is disclosed. The sealant has a binder including an oxide or hydroxide of Al or of Fe and a phosphoric acid solution. The binder may have MgO or an oxide of Fe and/or an acid phosphate. The binder is present from about 20 to about 50% by weight of the sealant with a lightweight additive present in the range of from about 1 to about 10% by weight of said sealant, a filler, and water sufficient to provide chemically bound water present in the range of from about 9 to about 36% by weight of the sealant when set. A porous ceramic is also disclosed.
Family Weight School treatment
Nowicka, Paulina; Höglund, Peter; Pietrobelli, Angelo
2008-01-01
OBJECTIVE: The aim was to evaluate the efficacy of a Family Weight School treatment based on family therapy in group meetings with adolescents with a high degree of obesity. METHODS: Seventy-two obese adolescents aged 12-19 years old were referred to a childhood obesity center by pediatricians … and school nurses and offered a Family Weight School therapy program in group meetings given by a multidisciplinary team. Intervention was compared with an untreated waiting list control group. Body mass index (BMI) and BMI z-scores were calculated before and after intervention. RESULTS: Ninety percent … group with initial BMI z-score 3.5. CONCLUSIONS: Family Weight School treatment model might be suitable for adolescents with BMI z…
Weight Management in Phenylketonuria
Rocha, Julio César; van Rijn, Margreet; van Dam, Esther
2016-01-01
specialized clinic, the second objective is important in establishing an understanding of the breadth of overweight and obesity in PKU in Europe. KEY MESSAGES: In PKU, the importance of adopting a European nutritional management strategy on weight management is highlighted in order to optimize long-term… It is becoming evident that in addition to acceptable blood phenylalanine control, metabolic dieticians should regard weight management as part of routine clinical practice. SUMMARY: It is important for practitioners to differentiate the 3 levels for overweight interpretation: anthropometry, body composition… and frequency and severity of associated metabolic comorbidities. The main objectives of this review are to suggest proposals for the minimal standard and gold standard for the assessment of weight management in PKU. While the former aims to underline the importance of nutritional status evaluation in every…
Relate the earthquake parameters to the maximum tsunami runup
Sharghivand, Naeimeh; Kânoǧlu, Utku
2016-04-01
Considering that the 1 September 1992 Nicaraguan tsunami manifested itself with an initial shoreline recession, there was a paradigm shift from the solitary wave to an N-wave (Tadepalli and Synolakis, 1994, Proc. R. Soc. A: Math. Phys. Eng. Sci., 445, 99-112) to define the initial waveform of tsunamis (Kanoglu et al., 2015, Phil. Trans. R. Soc. A, 373: 20140369). The N-wave initial waveform shows specific features, which might enhance maximum runup at a target coastline. Tadepalli & Synolakis (1994) showed that the leading depression N-wave (LDN) runs up higher than its mirror image, the leading elevation N-wave (LEN). Later, Kanoglu et al. (2013, Proc. R. Soc. A: Math. Phys. Eng. Sci., 469, 20130015) considered two-dimensional propagation of a finite crest length N-wave over a flat bottom and showed a focusing effect of an N-wave in the direction of the leading depression, which enhances the runup. Recently, Kanoglu (2016, EGU Abstract)'s preliminary results suggest that later waves could be higher on the leading depression side for an N-wave, i.e., sequencing defined by Okal and Synolakis (2016, Geophys. J. Int. 204, 719-735) is more pronounced on the leading depression side. Here, we consider submarine earthquakes and estimate the initial ocean surface profiles through Okada's formulation (1985, Bull. Seismol. Soc. Am. 75, 1135-1154). We parameterize earthquake source parameters, such as the length and the width of the fault, the focal depth, the rake (slip) and the dip angles, and the slip amount. Then, we relate ocean surface profiles calculated through Okada (1985) to the generalized N-wave profile defined by Tadepalli and Synolakis (1994) and identify N-wave parameters. Since, for an N-wave type initial condition, Tadepalli and Synolakis (1994) presented maximum runup for a canonical problem - wave propagating over a constant depth segment first and then over a sloping beach - and Kanoglu (2004, J. Fluid Mech., 513, 363-372) for a sloping beach, their results allow us to
Exercise in weight management.
Pinto, B M; Szymanski, L
1997-11-01
Exercise is integral to successful weight loss and maintenance. When talking to patients about exercise, consider their readiness, and address the barriers that prevent exercise. Physicians can help those patients who already exercise by encouraging them to continue and helping them anticipate, and recover from, lapses. Providing resource material to patients on behavioral strategies for exercise adoption and weight management can supplement the physician's efforts. Overall, patients need to hear that any regular exercise, be it step-aerobics, walking, or taking the stairs, will benefit them.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
A polynomial algorithm for abstract maximum flow
McCormick, S.T. [Univ. of British Columbia, Vancouver, British Columbia (Canada)
1996-12-31
Ford and Fulkerson's original 1956 max flow/min cut paper formulated max flow in terms of flows on paths, rather than the more familiar flows on arcs. In 1974 Hoffman pointed out that Ford and Fulkerson's original proof was quite abstract, and applied to a wide range of max flow-like problems. In this abstract model we have capacitated elements, and linearly ordered subsets of elements called paths. When two paths share an element ("cross"), then there must be a path that is a subset of the first path up to the cross, and a subset of the second path after the cross. (Hoffman's generalization of) Ford and Fulkerson's proof showed that the max flow/min cut theorem still holds under this weak assumption. However, this proof is non-constructive. To get an algorithm, we assume that we have an oracle whose input is an arbitrary subset of elements, and whose output is either a path contained in that subset, or the statement that no such path exists. We then use complementary slackness to show how to augment any feasible set of path flows to a set with a strictly larger total flow value using a polynomial number of calls to the oracle. Then standard scaling techniques yield an overall polynomial algorithm for finding both a max flow and a min cut. Hoffman's paper actually considers a sort of supermodular objective on the path flows, which allows him to include transportation problems and thus min-cost flow in his framework. We also discuss extending our algorithm to this more general case.
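The oracle interface can be illustrated on an ordinary arc network, the simplest instance of the abstract model: elements are arcs with capacities, and the oracle returns an s-t path using only a given subset of arcs. On this small example plain greedy augmentation already reaches the maximum because the augmenting paths are arc-disjoint; the paper's algorithm additionally uses complementary slackness to reroute flow when paths cross.

```python
# Sketch of the path-oracle interface from the abstract max-flow model,
# instantiated on an ordinary arc network (illustrative, not the full
# algorithm of the paper).

def oracle(live, adj, s, t):
    """Return an s-t path (list of arcs) using only arcs in `live`, or None."""
    stack, seen = [(s, [])], {s}
    while stack:
        node, path = stack.pop()
        if node == t:
            return path
        for arc in adj.get(node, []):
            u, v, _ = arc
            if arc in live and v not in seen:
                seen.add(v)
                stack.append((v, path + [arc]))
    return None

arcs = {("s", "a", 3), ("a", "t", 2), ("s", "b", 1), ("b", "t", 2)}
adj = {}
for arc in arcs:
    adj.setdefault(arc[0], []).append(arc)

residual = {arc: arc[2] for arc in arcs}          # remaining capacity per element
flow = 0
while True:
    live = {a for a in arcs if residual[a] > 0}   # subset handed to the oracle
    path = oracle(live, adj, "s", "t")
    if path is None:
        break
    push = min(residual[a] for a in path)         # bottleneck on the path
    for a in path:
        residual[a] -= push
    flow += push

print(flow)   # max flow = min cut = 3 on this network
```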
Recommended Maximum Temperature For Mars Returned Samples
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) ⁴He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gasses, and (T-11) magnetic studies.
Weight and Diabetes (For Parents)
... help all kids maintain a healthy weight. For kids with diabetes, diet and exercise are even more important because ... weight is good for the entire family! When kids with diabetes reach and maintain a healthy weight, they feel ...
Overweight, Obesity, and Weight Loss
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno
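The linear-hypothesis limitation can be seen in one dimension (a toy stand-in for the mean-vector transforms used in model adaptation, not the paper's algorithm): a linear map cannot capture a curved relation that a quadratic regression fits exactly.

```python
import numpy as np

# Toy 1-D illustration of linear vs polynomial regression: the target
# relation is quadratic, so the linear fit leaves a systematic residual
# while the degree-2 fit is exact (up to floating point).
x = np.linspace(-1.0, 1.0, 21)
y = 0.5 * x ** 2 + 0.2 * x          # curved "adaptation" target (made up)

lin = np.polyfit(x, y, 1)           # linear regression (MLLR-like hypothesis)
quad = np.polyfit(x, y, 2)          # polynomial regression

lin_err = np.max(np.abs(np.polyval(lin, x) - y))
quad_err = np.max(np.abs(np.polyval(quad, x) - y))
print(round(lin_err, 3), quad_err < 1e-10)
```

In the speech-adaptation setting the same gap appears between linearly and polynomially transformed Gaussian means, which is the motivation the abstract states.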
Weighted exponential polynomial approximation
邓冠铁
2003-01-01
A necessary and sufficient condition for completeness of systems of exponentials with a weight in Lp is established, and a quantitative relation between the weight and the system of exponentials in Lp is obtained by using a generalization of Malliavin's uniqueness theorem about Watson's problem.
2005-05-01
Contents excerpt: Process; Prototype Hardware Testing and Results; Barrel Weight; Functional Testing; Barrel Deflection; Drop Test; Thermal Test; References. ... measurements were compliant. Thermal Test: As discussed in the Transient Analysis Model Verification section of this report, the analytical results from the
hayati
Efficiency of growth is a function of metabolisable energy retained relative to that which is .... distribution of other sexes in certain housing, initial weight or season categories ..... Fox, D.G., Johnson, R.R., Preston, R.L. & Dockerty, T.R., 1972.
Avakian, Harut [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Gamberg, Leonard [Pennsylvania State Univ., University Park, PA (United States); Rossi, Patrizia [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Prokudin, Alexei [Pennsylvania State Univ., University Park, PA (United States)
2016-05-01
We review the concept of Bessel weighted asymmetries for semi-inclusive deep inelastic scattering and focus on the cross section in Fourier space, conjugate to the outgoing hadron’s transverse momentum, where convolutions of transverse momentum dependent parton distribution functions and fragmentation functions become simple products. Individual asymmetric terms in the cross section can be projected out by means of a generalized set of weights involving Bessel functions. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy and hard scale Q². We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
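The Bessel-weighting step can be illustrated numerically under simplified assumptions (this is not the paper's generator): for a Gaussian transverse-momentum spectrum, the event average of J0(b·PT) reproduces the analytic Fourier-Bessel transform exp(−⟨PT²⟩b²/4). The value of ⟨PT²⟩ below is hypothetical.

```python
import math, random

def j0(x, n=100):
    """Bessel J0 via its integral representation (midpoint rule)."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

random.seed(1)
mean_pt2 = 0.25                                  # hypothetical <P_T^2>, GeV^2

# Sample P_T from the 2-D Gaussian spectrum via inverse-CDF sampling.
events = [math.sqrt(-mean_pt2 * math.log(random.random())) for _ in range(5000)]

b = 2.0                                          # conjugate variable, GeV^-1
weighted = sum(j0(b * pt) for pt in events) / len(events)   # Bessel-weighted average
analytic = math.exp(-mean_pt2 * b ** 2 / 4)      # exact transform of the Gaussian

print(round(weighted, 2), round(analytic, 2))
```

The Monte Carlo average agrees with the analytic transform up to statistical noise; in the paper the same weighting is applied per event to project out individual asymmetry terms.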
Nieuwenhuijsen, Mark J; Northstone, Kate; Golding, Jean
2002-11-01
Swimmers can be exposed to high levels of trihalomethanes, byproducts of chlorination disinfection. There are no published studies on the relation between swimming and birth weight. We explored this relation in a large birth cohort, the Avon (England) Longitudinal Study of Parents and Children (ALSPAC), in 1991-1992. Information on the amount of swimming per week during the first 18-20 weeks of pregnancy was available for 11,462 pregnant women. Fifty-nine percent never swam, 31% swam up to 1 hour per week, and 10% swam for longer. We used linear regression to explore the relation between birth weight and the amount of swimming, with adjustment for gestational age, maternal age, parity, maternal education level, ethnicity, housing tenure, drug use, smoking and alcohol consumption. We found little effect of the amount of swimming on birth weight. More highly educated women were more likely to swim compared with less educated women, whereas smokers were less likely to swim compared with nonsmokers. There appears to be no relation between the duration of swimming and birth weight.
Weight lifting builds muscle, which increases overall body strength, tone, and balance. Muscles also burn calories more efficiently than fat and other body tissues. So even at rest the more muscle tissue a person has the more calories a person is ...
... Exercise is a key component of a healthy lifestyle before, during and after pregnancy. After pregnancy, most women can start exercising as ... the skinny jeans. Focus on living a healthy lifestyle, and the rest will fall into place.
Graphs and matroids weighted in a bounded incline algebra.
Lu, Ling-Xia; Zhang, Bei
2014-01-01
Firstly, for a graph weighted in a bounded incline algebra (also called a dioid), a longest path problem (LPP, for short) is presented, which can be considered a uniform approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.
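The uniform view can be sketched with a semiring-parameterized relaxation (an illustrative sketch in plain Python, not the incline-algebra formalism of the paper): the same Bellman-Ford-style loop solves the shortest, widest, and most reliable path problems by swapping the two operations and their identities.

```python
def best_paths(edges, n, src, plus, times, zero, one):
    """Bellman-Ford-style relaxation over an abstract (plus, times) algebra."""
    val = [zero] * n
    val[src] = one
    for _ in range(n - 1):
        for u, v, w in edges:
            val[v] = plus(val[v], times(val[u], w))
    return val

# One topology, two label sets: distances/capacities vs. reliabilities.
lengths = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
probs   = [(0, 1, 0.5), (0, 2, 0.9), (2, 1, 0.8), (1, 3, 0.9), (2, 3, 0.3)]

shortest = best_paths(lengths, 4, 0, min, lambda a, b: a + b, float("inf"), 0.0)
widest   = best_paths(lengths, 4, 0, max, min, 0.0, float("inf"))
reliable = best_paths(probs, 4, 0, max, lambda a, b: a * b, 0.0, 1.0)

print(shortest[3], widest[3], round(reliable[3], 3))
```

Choosing (min, +) gives shortest paths, (max, min) widest paths, and (max, ×) most reliable paths, which is exactly the unification the LPP formulation expresses algebraically.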
M. Mihelich
2014-11-01
Full Text Available We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10 ~ 100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and of the optimal number of degrees of freedom (resolution) to describe the system.
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
Steven H. Waldrip
2017-02-01
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
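The Lagrange-multiplier mechanics of the MaxEnt step can be sketched on a discrete toy problem (Jaynes' dice example, standing in for the flow-network setting; this is the hard-constraint case, not the soft-constraint formulation studied in the paper): maximise entropy subject to a single mean constraint.

```python
import math

# MaxEnt with one moment constraint: the maximising distribution has the
# exponential form p_i ∝ exp(lam * x_i), with the multiplier lam chosen so
# that the constrained mean is met. Here lam is found by bisection.
def maxent(xs, target, lo=-50.0, hi=50.0):
    def mean(lam):
        w = [math.exp(lam * x) for x in xs]
        return sum(x * wi for x, wi in zip(xs, w)) / sum(w)
    for _ in range(200):                 # bisection on the multiplier
        mid = (lo + hi) / 2.0
        if mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

# A die constrained to mean 4.5: probabilities tilt toward high faces.
p = maxent([1, 2, 3, 4, 5, 6], 4.5)
print([round(pi, 3) for pi in p])
```

With the unconstraining target 3.5 the same routine returns the uniform distribution, illustrating that MaxEnt adds structure only where the constraints demand it.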
W. M. Macek
2011-05-01
Full Text Available To quantify solar wind turbulence, we consider a generalized two-scale weighted Cantor set with two different scales describing the nonuniform distribution of the kinetic energy flux between cascading eddies of various sizes. We examine generalized dimensions and the corresponding multifractal singularity spectrum depending on one probability measure parameter and two rescaling parameters. In particular, we analyse time series of velocities of the slow speed streams of the solar wind measured in situ by the Voyager 2 spacecraft in the outer heliosphere during solar maximum at various distances from the Sun: 10, 30, and 65 AU. This allows us to look at the evolution of multifractal intermittent scaling of the solar wind in the distant heliosphere. Namely, it appears that, while the degree of multifractality for the solar wind during solar maximum is only weakly correlated with the heliospheric distance, the multifractal spectrum can be substantially asymmetric in the very distant heliosphere beyond the planetary orbits. One could therefore expect this scaling near the frontiers of the heliosphere to be asymmetric. It is worth noting that the model with two different scaling parameters gives better agreement with the solar wind data, especially for the negative index of the generalized dimensions. Therefore, we argue that there is a need to use a two-scale cascade model. Hence we propose this model as a useful tool for the analysis of intermittent turbulence in various environments, and we hope that our general asymmetric multifractal model could shed more light on the nature of turbulence.
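The generalized dimensions of a two-scale weighted Cantor set can be computed from the standard partition-function condition p^q l1^(-tau) + (1-p)^q l2^(-tau) = 1 with tau = (q-1) D_q (Halsey-style formalism); the sketch below is a generic illustration under that assumption, not the paper's fitted model, and the function name is hypothetical:

```python
import math

def generalized_dimension(q, p, l1, l2):
    """D_q of a two-scale Cantor set with weights (p, 1-p) and
    scales (l1, l2), both scales in (0, 1).  Solves
    p^q * l1^(-tau) + (1-p)^q * l2^(-tau) = 1 for tau by bisection
    (the left side is monotonically increasing in tau), then returns
    D_q = tau / (q - 1).  Assumes q != 1."""
    def f(tau):
        return p**q * l1**(-tau) + (1.0 - p)**q * l2**(-tau) - 1.0
    lo, hi = -50.0, 50.0  # bracket wide enough for typical parameters
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi) / (q - 1.0)

# Sanity check: equal weights and equal scales 1/3 recover the
# classical middle-third Cantor set, D_0 = ln 2 / ln 3 ~ 0.6309.
d0 = generalized_dimension(0.0, 0.5, 1/3, 1/3)
```

Evaluating D_q over negative q (where the abstract reports the two-scale model fits best) probes the most rarefied regions of the measure, which is where the asymmetry of the singularity spectrum shows up.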
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…