Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Achieve inventory reduction and improve customer service?
Moody, M C
2000-05-01
Is it really possible to achieve significant reductions in your manufacturing inventories while improving customer service? If you really want to achieve significant inventory reductions, focus on the root causes, then develop countermeasures and a work plan to execute them. Include measurements for recording your progress, and deploy your countermeasures until they are no longer required or until new ones are needed.
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of maximum signal-to-noise ratio (SNR) filters for noise reduction in both the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR filters […]. This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
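In both domains, the maximum SNR filter is mathematically the principal generalized eigenvector of the (desired-signal, noise) covariance pencil. The following is a minimal time-domain sketch with synthetic data; the array size, mixing vector, and noise level are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
M, N = 4, 10000                            # microphones, samples (hypothetical)

# Synthetic data: one common source component plus independent sensor noise.
s = rng.standard_normal(N)
A = rng.standard_normal((M, 1))            # mixing vector (assumed)
x = A @ s[None, :]                         # desired signal at the microphones
v = 0.5 * rng.standard_normal((M, N))      # noise
Rx = x @ x.T / N                           # desired-signal covariance estimate
Rv = v @ v.T / N                           # noise covariance estimate

# Max-SNR filter: h maximizing (h^T Rx h) / (h^T Rv h), i.e. the principal
# generalized eigenvector of the pencil (Rx, Rv).
_, V = eigh(Rx, Rv)
h = V[:, -1]                               # eigenvector of the largest eigenvalue

def snr(f):
    return (f @ Rx @ f) / (f @ Rv @ f)

h_ref = np.zeros(M); h_ref[0] = 1.0        # "single microphone" reference
assert snr(h) >= snr(h_ref)                # the eigen-filter cannot do worse
```

Note that maximizing this ratio says nothing about signal distortion, which is exactly the trade-off the abstract highlights.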
Are Reductions in Population Sodium Intake Achievable?
Jessica L. Levings
2014-10-01
The vast majority of Americans consume too much sodium, primarily from packaged and restaurant foods. The evidence linking sodium intake with direct health outcomes indicates a positive relationship between higher levels of sodium intake and cardiovascular disease risk, consistent with the relationship between sodium intake and blood pressure. Despite communication and educational efforts focused on lowering sodium intake over the last three decades, data suggest average US sodium intake has remained remarkably elevated, leading some to argue that current sodium guidelines are unattainable. The IOM in 2010 recommended gradual reductions in the sodium content of packaged and restaurant foods as a primary strategy to reduce US sodium intake, and research since that time suggests gradual, downward shifts in mean population sodium intake are achievable and can move the population toward current sodium intake guidelines. The current paper reviews recent evidence indicating that: (1) significant reductions in mean population sodium intake can be achieved with gradual sodium reduction in the food supply, (2) gradual sodium reduction in certain cases can be achieved without a noticeable change in taste or consumption of specific products, and (3) lowering mean population sodium intake can move us toward meeting the current individual guidelines for sodium intake.
On some method of the space elevator maximum stress reduction
Ambartsumian S. A.
2007-03-01
The possibility of realizing and exploiting the space elevator project is connected with a number of complicated problems. One of them is the large elastic stresses arising in the body of the space elevator ribbon, which are considerably larger than the strength limit of modern materials. This note is devoted to solving the problem of maximum stress reduction in the ribbon by modifying the cross-section area of the ribbon.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when the data are processed with the optimal polarimetric matched filter.
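The eigenvalue problem described here can be sketched numerically: the filter maximizing the contrast ratio between two class covariances is the principal generalized eigenvector of that matrix pencil. The class covariances below are synthetic Hermitian stand-ins, not radar data:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

def random_cov(M, n):
    # Random Hermitian positive-definite "class covariance" (illustrative)
    Z = rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))
    return Z @ Z.conj().T / n

C_a, C_b = random_cov(3, 500), random_cov(3, 500)   # two scattering classes

# Contrast ratio r(w) = (w^H C_a w) / (w^H C_b w); its maximizer is the
# principal generalized eigenvector of the pencil (C_a, C_b).
vals, vecs = eigh(C_a, C_b)
w = vecs[:, -1]

def contrast(f):
    return np.real(f.conj() @ C_a @ f) / np.real(f.conj() @ C_b @ f)

# The achieved contrast equals the largest generalized eigenvalue, and no
# randomly chosen filter beats it.
assert np.isclose(contrast(w), vals[-1])
for _ in range(100):
    t = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    assert contrast(w) >= contrast(t) - 1e-9
```

In the paper, the eigenvector is further mapped to physical transmit/receive polarization states; the sketch above covers only the filter-design step.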
Adaptive speckle reduction of ultrasound images based on maximum likelihood estimation
Xu Liu (刘旭); Yongfeng Huang (黄永锋); Wende Shou (寿文德); Tao Ying (应涛)
2004-01-01
A method has been developed in this paper to achieve effective speckle reduction in medical ultrasound images. To exploit full knowledge of the speckle distribution, maximum likelihood estimation was used to estimate the speckle parameters corresponding to its statistical mode. The results were then incorporated into nonlinear anisotropic diffusion to achieve adaptive speckle reduction. Verified with simulated and real ultrasound images, we show that this algorithm is capable of enhancing features of clinical interest and reducing speckle noise more efficiently than classical filters. To avoid edge contribution, changes in the contrast-to-noise ratio of different regions are also compared to investigate the performance of this approach.
Hardware implementation of antenna array system for maximum SLL reduction
Amr H. Hussein
2017-06-01
Side lobe level (SLL) reduction has great importance in recent communication systems. It is considered one of the most important applications of digital beamforming, since it reduces the effect of interference arriving outside the main lobe. This interference reduction increases the capacity of communication systems. In this paper, the hardware implementation of an antenna array system for SLL reduction is introduced using microstrip technology. The proposed antenna array system consists of two main parts: the antenna array and its feeding network. Power dividers play a vital role in various radio frequency and communication applications, and a power divider can be utilized as the feeding network of an antenna array. For the synthesis of a radiation pattern, an unequal-split power divider is required. A new design for a four-port unequal circular-sector power divider, and its application to antenna array SLL reduction, is introduced. The amplitude and phase of the signals emerging from each power divider branch are adjusted using stub and inset matching techniques. These matching techniques are used to adjust the branch impedances according to the desired power ratio. The designs of the antenna array and the power divider are made using the software package CST MICROWAVE STUDIO. The power divider is realized on a Rogers RO3010 substrate with dielectric constant εr=10.2, loss tangent of 0.0035, and height h=1.28 mm. In addition, designs for an ultra-wideband (UWB) antenna element and array are introduced. The antenna elements and the array are realized on FR4 (lossy) substrate with dielectric constant εr=4.5, loss tangent of 0.025, and height h=1.5 mm. The fabrication is done using thin-film technology and a photolithographic technique. The experimental measurements are done using a vector network analyzer (VNA, HP 8719ES). Good agreement is found between the measurements and the simulation results.
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing represents a crucial feature of any software development project. A premature release will involve risks like undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve maximum possible benefits from software testing with minimum resources. Testing time and cost need to be optimized for achieving a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of the software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
Halpern, Mark
2011-01-01
This paper considers the achievable reduction in peak voltage across two driving terminals of an RC circuit when delivering charge using a stepped current waveform, comprising a chosen number of steps of equal duration, compared with using a constant current over the total duration. This work has application to the design of neurostimulators giving reduced peak electrode voltage when delivering a given electric charge over a given time duration. Exact solutions for the greatest possible peak voltage reduction using two and three steps are given. Furthermore, it is shown that the achievable peak voltage reduction, for any given number of steps is identical for simple series RC circuits and parallel RC circuits, for appropriate different values of RC. It is conjectured that the maximum peak voltage reduction cannot be improved using a more complicated RC circuit.
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.
2017-01-01
Achieving single-species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative […] ranges to combine long-term single-stock targets with flexible, short-term, mixed-fisheries management requirements applied to the main North Sea demersal stocks. It is shown that sustained fishing at the upper bound of the range may lead to unacceptable risks when technical interactions occur […]. An objective method is suggested that provides an optimal set of fishing mortality within the range, minimizing the risk of total allowable catch mismatches among stocks captured within mixed fisheries, and addressing explicitly the trade-offs between the most and least productive stocks.
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches affect the decrease in the conversion gain and reduce the maximum output power obtainable. Although the incremental values of the switch resistances are small, the resulting difference in the maximum duty ratio between the input and output powers is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
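The abstract recommends tracking the output power because switch on-resistance shifts the true maximum power point away from the input-power optimum. A toy perturb-and-observe loop illustrates this; the thermoelectric-generator values and the lumped loss model below are hypothetical, not the paper's converter:

```python
# Hypothetical TEG + converter loss model (illustrative values):
Voc, Rint, Rsw = 5.0, 1.0, 0.2   # open-circuit voltage, internal R, switch on-resistance

def p_in(i):
    return i * (Voc - i * Rint)          # power drawn from the TEG

def p_out(i):
    return p_in(i) - i**2 * Rsw          # after switch conduction loss

# Perturb & observe on the *output* power, as the abstract recommends.
i, step, last = 0.5, 0.01, -1.0
for _ in range(2000):
    p = p_out(i)
    if p < last:
        step = -step                     # power fell: reverse the perturbation
    last = p
    i += step

# Analytic optimum for this model: maximizing i*Voc - i^2*(Rint + Rsw)
i_opt = Voc / (2 * (Rint + Rsw))
assert abs(i - i_opt) < 0.05             # P&O hovers around the true optimum
```

Tracking p_in instead would steer the loop toward Voc/(2*Rint), a larger current at which the extra i²·Rsw loss actually lowers the delivered power, which is the effect the paper's design strategy accounts for.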
GHG emission reductions and costs to achieve Kyoto target
[No author listed]
2003-01-01
Emission projections and marginal abatement cost curves (MACs) are the central components of any assessment of the future carbon market, such as CDM (clean development mechanism) potentials, carbon quota prices, etc. However, they are products of very complex, dynamic systems driven by forces like population growth, economic development, resource endowments, technology progress, and so on. The modeling approaches for emission projection and MAC evaluation were summarized, and some major models and their results were compared. Accordingly, the reduction and cost requirements to achieve the Kyoto target were estimated. It is concluded that Annex I Parties' total reduction requirements range from 503-1304 MtC with USA participation and decrease significantly to 140-612 MtC after the USA's withdrawal. Total costs vary from 21-77 BUSD with the USA and from 5-36 BUSD without the USA if only domestic reduction actions are taken. The costs would be sharply reduced when the three flexible mechanisms defined in the Kyoto Protocol are considered, with the share of domestic actions in all mitigation strategies dropping to only 0-16%.
2010-07-01
40 CFR § 63.55 (Protection of Environment, Environmental Protection Agency; Clean Air Act Sections 112(g) and 112(j)): Maximum achievable control technology (MACT) determinations for affected sources […]
Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers
Richard Billich
2015-03-01
In today's world of strength training there are many myths surrounding effective exercising with the least possible negative effect on one's health. In this experiment we focus on finding a relationship between maximum output, the load used, and the velocity with which the exercise is performed. The main objective is to find the optimal speed of the exercise motion which would allow us to reach the maximum mechanical muscle output during a bench press exercise. This information could be beneficial to sporting coaches and recreational sportsmen alike in helping them improve the effectiveness of fast strength training. Fifteen football players of the FK Třinec football club participated in the experiment. The measurements were made with the use of 3D kinematic and dynamic analysis, both experimental methods. The research subjects participated in a strength test in which the mechanical muscle output was measured at loads of 0, 10, 30, 50, 70, and 90% of one repetition maximum (1RM), and at 1RM itself. The acquired result values and other required data were processed using Qualisys Track Manager and Visual 3D software (C-motion, Rockville, MD, USA). During the bench press exercise, the maximum mechanical muscle output of the set of research subjects was reached at 75% of maximum exercise motion velocity.
Elasto-Inertial Turbulence: From Subcritical Turbulence to Maximum Drag Reduction
Dubief, Yves; Sid, Samir; Egan, Raphael; Terrapon, Vincent
2015-11-01
Elasto-inertial turbulence (EIT) is a turbulence state found so far in polymer solutions. Upon the appropriate initial perturbation, an autonomous regeneration cycle emerges between polymer dynamics, pressure, and velocity fluctuations. This cycle is best explained by the Poisson equation derived from viscoelastic flow models such as FENE-P (used in this study). This presentation provides an overview of the structure of EIT in 2D channel flows for Reynolds numbers ranging from Reτ = 10 to 100, and for 3D simulations up to Reτ = 300. For flows below the Newtonian critical Reynolds number, EIT increases the drag. For higher Reynolds numbers, EIT is surmised to be the energetic bound of maximum drag reduction (MDR), the asymptotic state of drag reduction in polymer solutions. The very existence of EIT at low Reynolds numbers (Reτ […]). Acknowledgments: FNRS grant No. 2.5020.11, the PRACE infrastructure, and the Vermont Advanced Computing Core.
Espino, Susana; Schenk, H Jochen
2011-01-01
The maximum specific hydraulic conductivity (k(max)) of a plant sample is a measure of the ability of a plant's vascular system to transport water and dissolved nutrients under optimum conditions. Precise measurements of k(max) are needed in comparative studies of hydraulic conductivity, as well as for measuring the formation and repair of xylem embolisms. Unstable measurements of k(max) are a common problem when measuring woody plant samples, and it is commonly observed that k(max) declines from initially high values, especially when positive water pressure is used to flush out embolisms. This study was designed to test five hypotheses that could potentially explain declines in k(max) under positive pressure: (i) non-steady-state flow; (ii) swelling of pectin hydrogels in inter-vessel pit membranes; (iii) nucleation and coalescence of bubbles at constrictions in the xylem; (iv) physiological wounding responses; and (v) passive wounding responses, such as clogging of the xylem by debris. Prehydrated woody stems from Laurus nobilis (Lauraceae) and Encelia farinosa (Asteraceae), collected from plants grown in the Fullerton Arboretum in Southern California, were used to test these hypotheses using a xylem embolism meter (XYL'EM). Treatments included simultaneous measurements of stem inflow and outflow, enzyme inhibitors, stem-debarking, low water temperatures, different water degassing techniques, and varied concentrations of calcium, potassium, magnesium, and copper salts in aqueous measurement solutions. Stable measurements of k(max) were observed at concentrations of calcium, potassium, and magnesium salts high enough to suppress bubble coalescence, as well as with deionized water that was degassed using a membrane contactor under strong vacuum. Bubble formation and coalescence under positive pressure in the xylem therefore appear to be the main cause of declining k(max) values. Our findings suggest that degassing of water is essential for achieving stable and […]
Chronic sleep reduction, functioning at school and school achievement in preadolescents
Meijer, A.M.
2008-01-01
This study investigates the relationship between chronic sleep reduction, functioning at school, and school achievement of boys and girls. To establish individual consequences of chronic sleep reduction (tiredness, sleepiness, loss of energy and emotional instability), the Chronic Sleep Reduction Questionnaire […]
Maximum likelihood based multi-channel isotropic reverberation reduction for hearing aids
Kuklasiński, Adam; Doclo, Simon; Jensen, Søren Holdt;
2014-01-01
We propose a multi-channel Wiener filter for speech dereverberation in hearing aids. The proposed algorithm uses joint maximum likelihood estimation of the speech and late reverberation spectral variances, under the assumption that the late reverberant sound field is cylindrically isotropic. […] The dereverberation performance of the algorithm is evaluated using computer simulations with realistic hearing aid microphone signals including head-related effects. The algorithm is shown to work well with signals reverberated both by synthetic and by measured room impulse responses, achieving improvements […]
Singh, Meenesh R.; Clark, Ezra L.; Bell, Alexis T.
2015-11-01
Thermodynamic, achievable, and realistic efficiency limits of solar-driven electrochemical conversion of water and carbon dioxide to fuels are investigated as functions of light-absorber composition and configuration, and catalyst composition. The maximum thermodynamic efficiency at 1-sun illumination for adiabatic electrochemical synthesis of various solar fuels is in the range of 32-42%. Single-, double-, and triple-junction light absorbers are found to be optimal for electrochemical load ranges of 0-0.9 V, 0.9-1.95 V, and 1.95-3.5 V, respectively. Achievable solar-to-fuel (STF) efficiencies are determined using ideal double- and triple-junction light absorbers and the electrochemical load curves for CO2 reduction on silver and copper cathodes, and water oxidation kinetics over iridium oxide. The maximum achievable STF efficiencies for synthesis gas (H2 and CO) and Hythane (H2 and CH4) are 18.4% and 20.3%, respectively. Whereas the realistic STF efficiency of photoelectrochemical cells (PECs) can be as low as 0.8%, tandem PECs and photovoltaic (PV)-electrolyzers can operate at 7.2% under identical operating conditions. We show that the composition and energy content of solar fuels can also be adjusted by tuning the band-gaps of triple-junction light absorbers and/or the ratio of catalyst-to-PV area, and that the synthesis of liquid products and C2H4 have high profitability indices.
Onset of effects of testosterone treatment and time span until maximum effects are achieved
Saad, Farid; Aversa, Antonio; Isidori, Andrea M; Zafalon, Livia; Zitzmann, Michael; Gooren, Louis
2011-01-01
Objective: Testosterone has a spectrum of effects on the male organism. This review attempts to determine, from published studies, the time-course of the effects induced by testosterone replacement therapy from their first manifestation until maximum effects are attained. Design: Literature data on testosterone replacement. Results: Effects on sexual interest appear after 3 weeks, plateauing at 6 weeks, with no further increments expected beyond. Changes in erections/ejaculations may require up to 6 months. Effects on quality of life manifest within 3–4 weeks, but maximum benefits take longer. Effects on depressive mood become detectable after 3–6 weeks, with a maximum after 18–30 weeks. Effects on erythropoiesis are evident at 3 months, peaking at 9–12 months. Prostate-specific antigen and volume rise, marginally, plateauing at 12 months; further increase should be related to aging rather than therapy. Effects on lipids appear after 4 weeks, maximal after 6–12 months. Insulin sensitivity may improve within a few days, but effects on glycemic control become evident only after 3–12 months. Changes in fat mass, lean body mass, and muscle strength occur within 12–16 weeks, stabilize at 6–12 months, but can marginally continue over years. Effects on inflammation occur within 3–12 weeks. Effects on bone are detectable already after 6 months, while continuing for at least 3 years. Conclusion: The time-course of the spectrum of effects of testosterone shows considerable variation, probably related to the pharmacodynamics of the testosterone preparation. Genomic and non-genomic effects, androgen receptor polymorphism, and intracellular steroid metabolism further contribute to such diversity. PMID: 21753068
Curating NASA's Future Extraterrestrial Sample Collections: How Do We Achieve Maximum Proficiency?
McCubbin, Francis; Evans, Cynthia; Zeigler, Ryan; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael
2016-01-01
The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "... documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency.
Slip resistance of winter footwear on snow and ice measured using maximum achievable incline.
Hsu, Jennifer; Shaw, Robert; Novak, Alison; Li, Yue; Ormerod, Marcus; Newton, Rita; Dutta, Tilak; Fernie, Geoff
2016-05-01
Protective footwear is necessary for preventing injurious slips and falls in winter conditions. Valid methods for assessing footwear slip resistance on winter surfaces are needed in order to evaluate footwear and outsole designs. The purpose of this study was to utilise a method of testing winter footwear that was ecologically valid in terms of involving actual human testers walking on realistic winter surfaces to produce objective measures of slip resistance. During the experiment, eight participants tested six styles of footwear on wet ice, on dry ice, and on dry ice after walking over soft snow. Slip resistance was measured by determining the maximum incline angles participants were able to walk up and down in each footwear-surface combination. The results indicated that testing on a variety of surfaces is necessary for establishing winter footwear performance and that standard mechanical bench tests for footwear slip resistance do not adequately reflect actual performance. Practitioner Summary: Existing standardised methods for measuring footwear slip resistance lack validation on winter surfaces. By determining the maximum inclines participants could walk up and down slopes of wet ice, dry ice, and ice with snow, in a range of footwear, an ecologically valid test for measuring winter footwear performance was established.
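A maximum achievable incline can be translated into an effective coefficient of friction through the quasi-static relation μ = tan θ. This mapping is a common first-order approximation and is not part of the study's protocol; the incline values below are illustrative, not the study's data:

```python
import math

def required_cof(incline_deg):
    # Quasi-static force balance on a slope: the friction available at the
    # shoe-surface interface must be at least tan(theta) to avoid a slip.
    # Walking dynamics demand somewhat more, so this is a lower bound.
    return math.tan(math.radians(incline_deg))

# Hypothetical maximum achievable inclines for two boots on wet ice:
incline_a, incline_b = 2.0, 11.0   # degrees (illustrative)
cof_a = required_cof(incline_a)
cof_b = required_cof(incline_b)
assert cof_b > cof_a               # the steeper incline implies better grip
```

Reporting the incline angle directly, as the study does, avoids committing to any particular friction model while still ranking footwear by the friction it can actually mobilize.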
OPTIMIZED FUEL INJECTOR DESIGN FOR MAXIMUM IN-FURNACE NOx REDUCTION AND MINIMUM UNBURNED CARBON
SAROFIM, A F; LISAUSKAS, R; RILEY, D; EDDINGS, E G; BROUWER, J; KLEWICKI, J P; DAVIS, K A; BOCKELIE, M J; HEAP, M P; PERSHING, D
1998-01-01
Reaction Engineering International (REI) has established a project team of experts to develop a technology for combustion systems which will minimize NOx emissions and minimize carbon in the fly ash. This much-needed technology will allow users to meet environmental compliance and produce a saleable by-product. This study is concerned with the NOx control technology of choice for pulverized-coal-fired boilers, "in-furnace NOx control," which includes: staged low-NOx burners, reburning, selective non-catalytic reduction (SNCR), and hybrid approaches (e.g., reburning with SNCR). The program has two primary objectives: 1) to improve the performance of "in-furnace" NOx control processes; and 2) to devise new, or improve existing, approaches for maximum "in-furnace" NOx control and minimum unburned carbon. The program involves: 1) fundamental studies at laboratory and bench scale to define NO reduction mechanisms in flames and reburning jets; 2) laboratory experiments and computer modeling to improve our two-phase mixing predictive capability; 3) evaluation of commercial low-NOx burner fuel injectors to develop improved designs; and 4) demonstration of coal injectors for reburning and low-NOx burners at commercial scale. The specific objectives of the two-phase program are to: 1) conduct research to better understand the interaction of heterogeneous chemistry and two-phase mixing on NO reduction processes in pulverized coal combustion; 2) improve our ability to predict combusting coal jets by verifying two-phase mixing models under conditions that simulate the near field of low-NOx burners; 3) determine the limits on NO control by in-furnace NOx control technologies as a function of furnace design and coal type; 5) develop and demonstrate improved coal injector designs for commercial low-NOx burners and coal reburning systems; and 6) modify the char burnout model in REI's coal […]
Reductions in transformer losses achieved by staggering lamination layers
Albir, R. S.; Moses, A. J.
1989-05-01
The total losses of identical 3-phase, 3-limb, mitred and staggered cores assembled from 0.3 mm thick conventional, high-permeability, and laser-scribed grain-oriented silicon iron have been compared. The cores built from conventional material produced the best improvements when staggered, and these were chosen for further investigation to examine the effect of the stacking number and the T-joint design on the power loss of the cores. The power loss generally increased as the stagger length was increased, but an optimum stagger length range was determined at which the power loss was lowest. The percentage improvement in power loss due to the introduction of the staggering technique depends upon the orientation of the material and the T-joint design. The best loss reduction compared to a mitred core of the same rating was around 5%, using a core assembled from conventional material.
Curating NASA's future extraterrestrial sample collections: How do we achieve maximum proficiency?
McCubbin, Francis; Evans, Cynthia; Allton, Judith; Fries, Marc; Righter, Kevin; Zolensky, Michael; Zeigler, Ryan
2016-07-01
Introduction: The Astromaterials Acquisition and Curation Office (henceforth referred to herein as NASA Curation Office) at NASA Johnson Space Center (JSC) is responsible for curating all of NASA's extraterrestrial samples. Under the governing document, NASA Policy Directive (NPD) 7100.10E "Curation of Extraterrestrial Materials", JSC is charged with "The curation of all extraterrestrial material under NASA control, including future NASA missions." The Directive goes on to define Curation as including "…documentation, preservation, preparation, and distribution of samples for research, education, and public outreach." Here we describe some of the ongoing efforts to ensure that the future activities of the NASA Curation Office are working towards a state of maximum proficiency. Founding Principle: Curatorial activities began at JSC (Manned Spacecraft Center before 1973) as soon as design and construction planning for the Lunar Receiving Laboratory (LRL) began in 1964 [1], not with the return of the Apollo samples in 1969, nor with the completion of the LRL in 1967. This practice has since proven that curation begins as soon as a sample return mission is conceived, and this founding principle continues to return dividends today [e.g., 2]. The Next Decade: Part of the curation process is planning for the future, and we refer to these planning efforts as "advanced curation" [3]. Advanced Curation is tasked with developing procedures, technology, and data sets necessary for curating new types of collections as envisioned by NASA exploration goals. We are (and have been) planning for future curation, including cold curation, extended curation of ices and volatiles, curation of samples with special chemical considerations such as perchlorate-rich samples, curation of organically- and biologically-sensitive samples, and the use of minimally invasive analytical techniques (e.g., micro-CT, [4]) to characterize samples. These efforts will be useful for Mars Sample Return […]
Winijkul, E.; Bond, T. C.
2011-12-01
In the residential sector, the major activities that generate emissions are cooking and heating, and fuels ranging from traditional (wood) to modern (natural gas or electricity) are used. Direct air pollutant emissions from this sector are low when natural gas or electricity is the dominant energy source, as is the case in developed countries. However, in developing countries, people may rely on solid fuels, and this sector can contribute a large fraction of emissions. The magnitude of the health loss associated with exposure to indoor smoke, as well as its concentration among rural populations in developing countries, has recently put preventive measures high on the agenda of international development and public health organizations. This study focuses on these developing regions: Central America, Africa, and Asia. Current and future emissions from the residential sector depend on both the fuel and the cooking device (stove) type. Availability of fuels, stoves, and interventions depends strongly on spatial distribution. However, regional emission calculations do not consider this spatial dependence. Fuel consumption data are presented at the country level, without information about where different types of fuel are used. Moreover, information about stove types that are currently used, and that could be used in the future, is not available. In this study, we first spatially allocate current emissions within the residential sector. We use Geographic Information System maps of temperature, electricity availability, forest area, and population to determine the distribution of fuel types and the availability of stoves. Within each country, consumption of different fuel types, such as fuelwood, coal, and LPG, is distributed among different area types (urban, peri-urban, and rural). Then, the cleanest stove technologies which could be used in each area are selected based on the constraints of that area, i.e., availability of resources. Using this map, the maximum emission reduction compared with
Djeison Cesar Batista
2011-09-01
Thermal rectification of wood was developed in the 1940s and has been widely studied and produced in Europe. In Brazil, research on this technique is still scarce, but it has gained attention recently. The aim of this study was to evaluate the influence of rectification time and temperature on the reduction of maximum swelling of Eucalyptus grandis wood. According to the results obtained, it is possible to achieve reductions of about 50% in the maximum volumetric swelling of Eucalyptus grandis wood. The best results were obtained at a thermal rectification temperature of 230°C rather than 200°C. The temperature factor was more significant than time, since there was no significant difference between the times used (1, 2, and 3 hours). There was no significant interaction between the factors time and temperature.
Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za
2014-05-01
near the middle of the core, while increasing the power density near the top and bottom of the core. This resulted in a huge reduction in the maximum DLOFC temperature from 1581.0 °C to 1297.6 °C, which may produce far reaching safety and economic benefits. However, it came at the cost of a 22% reduction in the average burn-up of the fuel. In a separate optimisation attempt a much smaller, but still significant, reduction in the maximum equilibrium temperature, from 1023 °C down to 988 °C, was achieved.
Gupta, N. K.; Mehra, R. K.
1974-01-01
This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
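The gradient-based maximum likelihood approach discussed above can be illustrated with a minimal sketch: a scalar linear system x[t+1] = a·x[t] + w[t] whose transition coefficient is recovered by gradient descent on the negative Gaussian log-likelihood. This is an illustrative toy, not the paper's state-vector formulation; the model, noise level, and step size are all assumptions.

```python
import numpy as np

# Simulate a scalar linear dynamical system x[t+1] = a*x[t] + w[t], w ~ N(0, q).
rng = np.random.default_rng(0)
a_true, q, n = 0.8, 0.1, 500
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = a_true * x[t] + rng.normal(0.0, np.sqrt(q))

def neg_log_lik_grad(a, x, q):
    # Gradient of the (conditional) negative Gaussian log-likelihood w.r.t. a.
    resid = x[1:] - a * x[:-1]
    return -np.sum(resid * x[:-1]) / q

# Plain gradient descent; the paper surveys more sophisticated
# (Newton- and Gauss-Newton-type) schemes for the full state-vector case.
a_hat = 0.0
for _ in range(200):
    a_hat -= 1e-4 * neg_log_lik_grad(a_hat, x, q)
```

For this quadratic likelihood the iteration converges to the conditional least-squares estimate of a, close to the true value 0.8.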
Mansour, F. A.; Nizam, M.; Anwar, M.
2017-02-01
This research aims to predict the optimum surface orientation angles for solar panel installation to achieve maximum solar radiation. Incident solar radiation is calculated using the Koronakis mathematical model. Particle Swarm Optimization (PSO) is used as the computational method to find the optimum orientation angles for solar panel installation in order to receive maximum solar radiation. A series of simulations was carried out to calculate solar radiation based on monthly, seasonal, semi-yearly, and yearly adjustment periods. A south-facing orientation, which assumes an azimuth of 0°, was also calculated for comparison with the proposed method. The proposed method attains higher incident radiation predictions than south-facing: 2511.03 kWh/m2 for monthly, and about 2486.49 kWh/m2, 2482.13 kWh/m2, and 2367.68 kWh/m2 for the seasonal, semi-yearly, and yearly periods. South-facing predicted approximately 2496.89 kWh/m2, 2472.40 kWh/m2, 2468.96 kWh/m2, and 2356.09 kWh/m2 for the monthly, seasonal, semi-yearly, and yearly periods, respectively. Semi-yearly is the best choice because it requires only two adjustments of the solar panel per year; adjusting the panel position every season or every month is inefficient for no significant gain in solar radiation over semi-yearly, and solar tracking devices are still costly in solar energy systems. PSO was able to predict accurately with a simple concept, easily and computationally efficiently, as proven by finding the best fitness faster.
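The PSO search described above can be sketched as follows. Note that the objective here is a hypothetical placeholder (a cosine response peaking at an assumed 30° tilt), not the Koronakis model used in the paper; the swarm size and coefficients are likewise illustrative.

```python
import math, random

random.seed(1)

def radiation(tilt_deg):
    # Hypothetical stand-in for the Koronakis incident-radiation model:
    # a smooth response peaking at an assumed 30-degree tilt.
    return math.cos(math.radians(tilt_deg - 30.0))

# Standard PSO loop over tilt angles in [0, 90] degrees.
n, w, c1, c2 = 20, 0.7, 1.5, 1.5
pos = [random.uniform(0.0, 90.0) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]
gbest = max(pbest, key=radiation)

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
        pos[i] = min(90.0, max(0.0, pos[i] + vel[i]))
        if radiation(pos[i]) > radiation(pbest[i]):
            pbest[i] = pos[i]
    gbest = max(pbest, key=radiation)
```

The swarm's global best settles near the placeholder optimum; in the paper, the same loop would instead evaluate Koronakis-model radiation for each candidate orientation.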
Reduction of Maximum and Residual Drifts on Posttensioned Steel Frames with Semirigid Connections
Arturo López-Barraza
2013-01-01
The aim of this paper is to study the seismic performance of self-centering moment-resisting steel frames with posttensioned connections taking into account nonlinear material behavior, for better understanding of the advantages of this type of structural system. Further, the seismic performance of traditional structures with rigid connections is compared with the corresponding equivalent posttensioned structures with semirigid connections. Nonlinear time history analyses are developed for both types of structural systems to obtain the maximum and the residual interstory drifts. Thirty long-duration narrow-banded earthquake ground motions recorded on soft soil sites of Mexico City are used for the analyses. It is concluded that the structural response of steel buildings with posttensioned connections subjected to intense earthquake ground motions is reduced compared with the seismic response of traditional buildings with welded connections. Moreover, residual interstory drift demands are considerably reduced for the system with posttensioned connections, which is important to avoid the demolition of the buildings after an earthquake.
None
2010-01-01
A new noise reduction method for nonlinear signals based on maximum variance unfolding (MVU) is proposed. The noisy signal is first embedded into a high-dimensional phase space based on phase space reconstruction theory, and then the manifold learning algorithm MVU is used to perform nonlinear dimensionality reduction on the phase-space data in order to separate the low-dimensional manifold representing the attractor from the noise subspace. Finally, the noise-reduced signal is obtained by reconstructing the low-dimensional manifold. Simulation results for the Lorenz system show that the proposed MVU-based noise reduction method outperforms the KPCA-based method and has the advantages of simple parameter estimation and low parameter sensitivity. The proposed method is applied to fault detection in a vibration signal from the rotor-stator of an aero engine with a slight rubbing fault. The denoised results show that slight rubbing features overwhelmed by noise can be effectively extracted by the proposed noise reduction method.
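The pipeline above (delay embedding, manifold extraction, reconstruction) can be sketched with a truncated SVD standing in for the MVU step, since full MVU requires a semidefinite-programming solver. The signal, noise level, and embedding parameters are illustrative assumptions.

```python
import numpy as np

# Noisy test signal (a sine plus Gaussian noise stands in for a chaotic series).
rng = np.random.default_rng(2)
t = np.linspace(0, 8 * np.pi, 800)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

m, k = 20, 2  # embedding dimension, retained manifold dimension
# Phase-space (delay) embedding: each row is a length-m window of the signal.
X = np.lib.stride_tricks.sliding_window_view(noisy, m)
mean = X.mean(axis=0)
# Low-rank projection (SVD here; MVU would replace this step).
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
Xd = (U[:, :k] * s[:k]) @ Vt[:k] + mean

# Reconstruct a 1-D signal by averaging the overlapping window entries.
denoised = np.zeros_like(noisy)
count = np.zeros_like(noisy)
for j in range(m):
    denoised[j:j + X.shape[0]] += Xd[:, j]
    count[j:j + X.shape[0]] += 1
denoised /= count

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

Projecting onto the low-dimensional subspace discards most of the noise energy, so the reconstruction error drops well below that of the raw noisy signal.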
Achieving 80% greenhouse gas reduction target in Saudi Arabia under low and medium oil prices
Alshammari, Yousef M.
2016-11-10
COP 21 led to a global agreement to limit the earth's rising temperature to less than 2 °C. This will require countries to act upon climate change and achieve a significant reduction in their greenhouse gas emissions, which will play a pivotal role in shaping future energy systems. Saudi Arabia is the world's largest exporter of crude oil and the 11th largest CO2 emitter. Understanding the Kingdom's role in global greenhouse gas reduction is critical in shaping the future of fossil fuels. Hence, this work presents an optimisation study to understand how Saudi Arabia can meet the CO2 reduction targets to achieve the 80% reduction in the power generation sector. It is found that the implementation of energy efficiency measures is necessary to enable meeting the 80% target, and it would also lower the costs of transition to a low carbon energy system while maintaining cleaner use of hydrocarbons with CCS. Setting very deep GHG reduction targets may be economically uncompetitive in consideration of the energy supply requirements. In addition, we determine the breakeven price of crude oil needed to make CCS economically viable. The results show an important dimension for pricing CO2 and the role of CCS compared with alternative sources of energy.
G. Ramstein
2007-06-01
The Last Glacial Maximum has been one of the first foci of the Paleoclimate Modelling Intercomparison Project (PMIP). During its first phase, the results of 17 atmosphere general circulation models were compared to paleoclimate reconstructions. One of the largest discrepancies in the simulations was the systematic underestimation, by at least 10°C, of the winter cooling over Europe and the Mediterranean region observed in the pollen-based reconstructions. In this paper, we investigate the progress achieved in reducing this inconsistency through a large modelling effort and improved temperature reconstructions. We show that increased model spatial resolution does not significantly increase the simulated LGM winter cooling. Further, neither the inclusion of a vegetation cover compatible with the LGM climate, nor the interactions with the oceans simulated by the atmosphere-ocean general circulation models run in the second phase of PMIP, results in a better agreement between models and data. Accounting for changes in interannual variability in the interpretation of the pollen data does not result in a reduction of the reconstructed cooling. The largest recent improvement in the model-data comparison has instead arisen from a new climate reconstruction based on inverse vegetation modelling, which explicitly accounts for the CO2 decrease at LGM and which substantially reduces the LGM winter cooling reconstructed from pollen assemblages. As a result, the simulated and observed LGM winter cooling over Western Europe and the Mediterranean area are now in much better agreement.
Ahmad Jabir Rahyussalim
2016-01-01
Adult scoliosis is defined as a spinal deformity in a skeletally mature patient with a Cobb angle of more than 10 degrees in the coronal plane. A posterior-only approach with rod and screw corrective manipulation, adding the strength of contra bending manipulation, achieves correction similar to that obtained by the conventional combined anterior release and posterior approach, and also avoids the complications related to the thoracic approach. We report a case of a 25-year-old male with adult idiopathic scoliosis with a double curve, consisting of a main thoracic curve of 150 degrees and a lumbar curve of 89 degrees. The curve underwent a direct contra bending posterior approach using the rod and screw corrective manipulation technique to achieve optimal correction. After surgery, the main thoracic Cobb angle became 83 degrees and the lumbar Cobb angle 40 degrees, with a 5-day length of stay and less than 800 mL blood loss during surgery. There were no complaints at two months after surgery; the patient has already returned to normal activity with good functional status.
Weerts, B. A.; Gallaher, D.; Weaver, R.; Van Geet, O.
2012-01-01
The Green Data Center Project was a successful effort to significantly reduce the energy use of the National Snow and Ice Data Center (NSIDC). Through a full retrofit of a traditional air conditioning system, the cooling energy required to meet the data center's constant load has been reduced by over 70% for summer months and over 90% for cooler winter months. This significant change is achievable through the use of airside economization and a new indirect evaporative cooling system. One of the goals of this project was to create awareness of simple and effective energy reduction strategies for data centers. This project's geographic location allowed maximizing the positive effects of airside economization and indirect evaporative cooling, but these strategies may also be relevant for many other sites and data centers in the U.S.
Achieving Realistic Energy and Greenhouse Gas Emission Reductions in U.S. Cities
Blackhurst, Michael F.
2011-12-01
Recognizing that energy markets and greenhouse gas emissions are significantly influenced by local factors, this research examines opportunities for achieving realistic energy and greenhouse gas emission reductions in U.S. cities through the provision of more sustainable infrastructure. Greenhouse gas reduction opportunities are examined through the lens of a public program administrator charged with reducing emissions given realistic financial constraints and authority over emissions reductions and energy use. Opportunities are evaluated with respect to traditional public policy metrics, such as benefit-cost analysis, net benefit analysis, and cost-effectiveness. Section 2 summarizes current practices used to estimate greenhouse gas emissions from communities. I identify improved and alternative emissions inventory techniques, such as disaggregating the sectors reported, reporting inventory uncertainty, and aligning inventories with local organizations that could facilitate emissions mitigation. The potential advantages and challenges of supplementing inventories with comparative benchmarks are also discussed. Finally, I highlight the need to integrate growth (population and economic) and business-as-usual implications (such as changes to electricity supply grids) into climate action planning. I demonstrate how these techniques could improve decision making when planning reductions, help communities set meaningful emission reduction targets, and facilitate CAP implementation and progress monitoring. Section 3 estimates the costs and benefits of building energy efficiency as a means of reducing greenhouse gas emissions in Pittsburgh, PA and Austin, TX. Two policy objectives were evaluated: maximize GHG reductions given initial budget constraints, or maximize social savings given target GHG reductions. This approach explicitly evaluates the trade-offs between three primary and often conflicting program design parameters: initial capital constraints, social savings
The role of anthropogenic aerosol emission reduction in achieving the Paris Agreement's objective
Hienola, Anca; Pietikäinen, Joni-Pekka; O'Donnell, Declan; Partanen, Antti-Ilari; Korhonen, Hannele; Laaksonen, Ari
2017-04-01
The Paris Agreement reached in December 2015 under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC) aims at holding the global temperature increase to well below 2 °C above preindustrial levels and "to pursue efforts to limit the temperature increase to 1.5 °C above preindustrial levels". Limiting warming to any level implies that the total amount of carbon dioxide (CO2) - the dominant driver of long-term temperatures - that can ever be emitted into the atmosphere is finite. Essentially, this means that global CO2 emissions need to become net zero. CO2 is not the only pollutant causing warming, although it is the most persistent. Short-lived, non-CO2 climate forcers must also be considered. Whereas much effort has been put into defining a threshold for temperature increase and zero net carbon emissions, surprisingly little attention has been paid to the non-CO2 climate forcers, including not just the non-CO2 greenhouse gases (methane (CH4), nitrous oxide (N2O), halocarbons, etc.) but also anthropogenic aerosols like black carbon (BC), organic carbon (OC), and sulfate. This study investigates the possibility of limiting the temperature increase to 1.5 °C by the end of the century under different future scenarios of anthropogenic aerosol emissions, simulated with the very simplistic MAGICC climate carbon cycle model as well as with ECHAM6.1-HAM2.2-SALSA + UVic ESCM. The simulations include two different CO2 scenarios: RCP3PD as control, and a CO2 reduction leading to 1.5 °C (which translates into reaching net zero CO2 emissions by the mid 2040s, followed by negative emissions by the end of the century); each CO2 scenario also includes two aerosol pollution control cases, denoted CLE (current legislation) and MFR (maximum feasible reduction). The main result of the above scenarios is that the stronger the anthropogenic aerosol emission reduction is, the more significant the temperature increase by 2100 relative to pre
Li, Yichong; Zeng, Xinying; Liu, Jiangmei; Liu, Yunning; Liu, Shiwei; Yin, Peng; Qi, Jinlei; Zhao, Zhenping; Yu, Shicheng; Hu, Yuehua; He, Guangxue; Lopez, Alan D; Gao, George F; Wang, Linhong; Zhou, Maigeng
2017-07-11
The United Nations' Sustainable Development Goals for 2030 include reducing premature mortality from non-communicable diseases (NCDs) by one third. To assess the feasibility of this goal in China, we projected premature NCD mortality in 2030 under different risk factor reduction scenarios. We used China results from the Global Burden of Disease Study 2013 as empirical data for projections. Deaths between 1990 and 2013 for cardiovascular disease (CVD), diabetes, chronic respiratory disease, cancer, and other NCDs were extracted, along with population numbers. We disaggregated deaths into parts attributable and unattributable to high systolic blood pressure (SBP), smoking, high body mass index (BMI), high total cholesterol, physical inactivity, and high fasting glucose. Risk factor exposure and deaths by NCD category were projected to 2030. Eight simulated scenarios were also constructed to explore how premature mortality will be affected if the World Health Organization's targets for risk factor reduction are achieved by 2030. If current trends for each risk factor continued to 2030, the total premature deaths from NCDs would increase from 3.11 million to 3.52 million, but the premature mortality rate would decrease by 13.1%. In the combined scenario in which all risk factor reduction targets are achieved, nearly one million deaths among persons 30 to 70 years old due to NCDs would be avoided, and the one-third reduction goal would be achieved for all NCDs combined. More specifically, the goal would be achieved for CVD and chronic respiratory diseases, but not for cancer and diabetes. Reduction in the prevalence of high SBP, smoking, and high BMI played an important role in achieving the goals. Reaching the goal of a one-third reduction in premature mortality from NCDs is possible by 2030 if certain targets for risk factor intervention are reached, but more efforts are required to achieve risk factor reduction.
NONE
2006-10-15
While sustainable development in climate change was the core approach in SSN 1, SSN 2 takes a further and direct focus on poverty reduction, as a core theme. Presented by the SSN Capacity Building Team, this Module on Poverty Reduction reflects our current approach to dealing with poverty reduction. Each SSN 2 programme is discussed separately. The SSN Matrix Tool of Indicators for Appraising the Sustainable Development of Projects, from SSN 1, is applied in SSN 2 for assessing poverty reduction by placing special emphasis on a couple of social sustainability indicators. This approach of the Mitigation Programme is followed in the Adaptation Programme. The Adaptation Programme also applies the SSN Adaptation Projects Protocol for Community Based Adaptation. This SSNAPP for CBA is a way to find the hotspots where high levels of poverty and predicted increases in climate impacts coincide. The Technology Receptivity Programme examines approaches for receiving technology in poor communities, examining not only technology hardware but also the software (processes) and orgware (institutions) required. The Capacity Building Programme uses a SWOT tool for analysing a project's strengths, weaknesses, opportunities and threats as a way to determine and ensure the sustainability of projects in terms of technology, finances and social factors. The Module gives the various tools applied by the programmes, with examples from SSN projects. It is presented by the Capacity Building Programme in this format as a movement towards an alignment of approaches within SSN and is shared for use by others who are interested in the pursuit of sustainable projects. As a work in progress this Module will be updated as work goes on.
Mitchell, Douglas E.; Mitchell, Ross E.
This report presents a comprehensive preliminary analysis of how California's Class Size Reduction (CSR) initiative has impacted student achievement during the first 2 years of implementation. The analysis is based on complete student, classroom, and teacher records from 26,126 students in 1,174 classrooms from 83 schools in 8 Southern California…
Woldehanna, T.; Jones, N.; Bezuayehu, T.O.
2005-01-01
The major development objectives of the Ethiopian Government are to reduce poverty and improve primary school enrolment and educational achievement (SDPRP, 2002). However, education performance indicators show that only access-related targets have been achieved, while educational quality declined in
Høigaard, Rune; Ommundsen, Yngvar
2007-06-01
This study investigated the relationship between motivational climates, personal achievement goals, and three different aspects of social loafing in football (soccer). 170 male competitive football players completed questionnaires assessing perceived motivational climate, achievement goal, and measures of perceived social loafing (anticipation of lower effort amongst their teammates and themselves). The results indicated a marginal but significant positive relationship between an ego-oriented achievement goal and perceived social loafing. In addition, a mastery climate was negatively associated with perceived social loafing and anticipation of lower effort of team members, particularly for athletes who also strongly endorsed a task-oriented achievement goal. A performance climate, in contrast, related positively with these two aspects of social loafing. A mastery climate also related negatively to the third aspect of social loafing, i.e., players' readiness to reduce their own effort in response to their perception of social loafing among their teammates.
Salunke, Pravin; Sahoo, Sushanta K; Deepak, Arsikere N; Ghuman, Mandeep S; Khandelwal, Niranjan K
2015-09-01
The cause of irreducibility in irreducible atlantoaxial dislocation (AAD) appears to be the orientation of the C1-2 facets. The current management strategies for irreducible AAD are directed at removing the cause of irreducibility followed by fusion, rather than transoral decompression and posterior fusion. The technique described in this paper addresses C1-2 facet mobilization by facetectomies to aid intraoperative manipulation. Using this technique, reduction was achieved in 19 patients with congenital irreducible AAD treated between January 2011 and December 2013. The C1-2 joints were studied preoperatively, and particular attention was paid to the facet orientation. Intraoperatively, oblique C1-2 joints were opened widely, and extensive drilling of the facets was performed to make them close to flat and parallel to each other, converting an irreducible AAD to a reducible one. Anomalous vertebral arteries (VAs) were addressed appropriately. Further reduction was then achieved after vertical distraction and joint manipulation. Adequate facet drilling was achieved in all but 2 patients, due to VA injury in 1 patient and an acute sagittal angle operated on 2 years previously in the other patient. Complete reduction could be achieved in 17 patients and partial in the remaining 2. All patients showed clinical improvement. Two patients showed partial redislocation due to graft subsidence. The fusion rates were excellent. Comprehensive drilling of the C1-2 facets appears to be a logical and effective technique for achieving direct posterior reduction in irreducible AAD. The extensive drilling makes large surfaces raw, increasing fusion rates.
Cathodic biofilm activates electrode surface and achieves efficient autotrophic sulfate reduction
Pozo, Guillermo; Jourdin, Ludovic; Lu, Yang; Keller, Jürg; Ledezma, Pablo; Freguia, Stefano
2016-01-01
Recent evidence suggests that autotrophic sulfate reduction could be driven by direct and indirect electron transfer mechanisms in bioelectrochemical systems. However, much uncertainty still exists about the electron fluxes from the electrode to the final electron acceptor sulfate during autotrop
The role of poverty reduction strategies in achieving the millennium development goals
Bezemer, Dirk; Eggen, Andrea
2008-01-01
We provide a literature overview of the linkages between Poverty Reduction Strategy Papers (PRSPs) and the Millennium Development Goals (MDGs) and use novel data to examine their relation. We find that introduction of a PRSP is associated with progress in four of the nine MDG indicators we study. PRS
Pan, Shu-Yuan; Chiang, Pen-Chi; Chen, Yi-Hung; Chen, Chun-Da; Lin, Hsun-Yu; Chang, E-E
2013-01-01
Accelerated carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was performed in a rotating packed bed (RPB) as a promising process for both CO2 fixation and wastewater treatment. The maximum achievable capture capacity (MACC) via leaching and carbonation processes for BOFS in an RPB was systematically determined throughout this study. The leaching behavior of various metal ions from the BOFS into the CRW was investigated by a kinetic model. In addition, quantitative X-ray diffraction (QXRD) using the Rietveld method was carried out to determine the process chemistry of carbonation of BOFS with CRW in an RPB. According to the QXRD results, the major mineral phases reacting with CO2 in BOFS were Ca(OH)2, Ca2(HSiO4)(OH), CaSiO3, and Ca2Fe1.04Al0.986O5. Meanwhile, the carbonation product was identified as calcite according to the observations of SEM, XEDS, and mappings. Furthermore, the MACC of the lab-scale RPB process was determined by balancing the carbonation conversion and energy consumption. In that case, the overall energy consumption, including grinding, pumping, stirring, and rotating processes, was estimated to be 707 kWh/t-CO2. It was thus concluded that CO2 capture by accelerated carbonation of BOFS could be effectively and efficiently performed by coutilizing with CRW in an RPB.
Does Class-Size Reduction Close the Achievement Gap? Evidence from TIMSS 2011
Li, Wei; Konstantopoulos, Spyros
2017-01-01
Policies about reducing class size have been implemented in the US and Europe in the past decades. Only a few studies have discussed the effects of class size at different levels of student achievement, and their findings have been mixed. We employ quantile regression analysis, coupled with instrumental variables, to examine the causal effects of…
Eggersdorfer, Manfred; Bird, Julia K
2016-01-01
Multi-stakeholder partnerships are important facilitators of improving nutrition in developing countries to achieve the United Nations' Sustainable Development Goals. Often, the role of industry is challenged and questions are raised as to the ethics of involving for-profit companies in humanitarian projects. The Second International Conference on Nutrition placed great emphasis on the role of the private sector, including industry, in multi-stakeholder partnerships to reduce hunger and malnutrition. Governments have to establish regulatory frameworks and institutions to guarantee fair competition and invest in infrastructure that makes investments for private companies attractive, eventually leading to economic growth. Civil society organizations can contribute by delivering nutrition interventions and behavioral change-related communication to consumers, providing capacity, and holding governments and private sector organizations accountable. Industry provides technical support, innovation, and access to markets and the supply chain. The greatest progress and impact can be achieved if all stakeholders cooperate in multi-stakeholder partnerships aimed at improving nutrition, thereby strengthening local economies and reducing poverty and inequality. Successful examples of public-private partnerships exist, as well as examples in which these partnerships did not achieve mutually agreed objectives. The key requirements for productive alliances between industry and civil society organizations are the establishment of rules of engagement, transparency and mutual accountability. The Global Social Observatory performed a consultation on conflicts of interest related to the Scaling Up Nutrition movement and provided recommendations to prevent, identify, manage and monitor potential conflicts of interest. Multi-stakeholder partnerships can be successful models in improving nutrition if they meet societal demand with transparent decision-making and execution. Solutions to
Barbadoro, P; Martini, E; Gioia, M G; Stoico, R; Savini, S; Manso, E; Serafini, G; Prospero, E; D'Errico, M M
2017-02-07
The objective of this investigation was to analyze the effectiveness of a quality improvement initiative in limiting the spread of multidrug-resistant organisms (MDROs) in the hospital setting. During the period 2011-2013, a multimodal intervention was activated at a tertiary care center in Italy. The intervention included: laboratory-based surveillance, interdisciplinary training sessions, monitoring the adoption of isolation precautions and daily supervision provided by infection control nurses, and a monthly feedback. Time series analysis was used to evaluate the trends and correlations between the MDROs rate, intensity of checking rounds, and hospital-wide data (i.e., transfer of patients, patients' days, site of isolation, etc.). A total of 149,251 patients were included in the study. The proportion of patients undergoing transmission-based isolation precautions within 24 h from a positive laboratory finding increased from 83% in 2011 to 99% in 2013 (p < 0.05). The wards appropriately adopting the correct isolation precaution increased from 83% in 2011 to 97.6% in 2013 (p < 0.05). The frequency of controls was significantly reduced after the observation of compliance in the appropriate wards (p < 0.05). After three years, the incidence rate changed from 5.8/1000 days of stay [95% confidence interval (CI) 5.6-6.1] in 2011 to 4.7 (95% CI 4.4-4.9) in 2013 (p < 0.0001). Moreover, microorganisms isolated from different types of specimens showed variable potential for transmission (i.e., skin isolates the highest and urine isolates the lowest). The results demonstrate the efficacy of the multimodal intervention, with a sustained reduction of the MDRO rate alongside a reduction in checking rounds, and highlight the long-term efficacy of checking rounds in changing professionals' behaviors.
Goodall, R. G.; Painter, G. W.
1975-01-01
Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and in operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.
Sasaki, Tomohiko; Kondo, Osamu
2016-09-01
Recent theoretical progress potentially refutes past claims that paleodemographic estimations are flawed by statistical problems, including age mimicry and sample bias due to differential preservation. The life expectancy at age 15 of the Jomon period prehistoric populace in Japan was initially estimated to have been ∼16 years while a more recent analysis suggested 31.5 years. In this study, we provide alternative results based on a new methodology. The material comprises 234 mandibular canines from Jomon period skeletal remains and a reference sample of 363 mandibular canines of recent-modern Japanese. Dental pulp reduction is used as the age-indicator, which because of tooth durability is presumed to minimize the effect of differential preservation. Maximum likelihood estimation, which theoretically avoids age mimicry, was applied. Our methods also adjusted for the known pulp volume reduction rate among recent-modern Japanese to provide a better fit for observations in the Jomon period sample. Without adjustment for the known rate in pulp volume reduction, estimates of Jomon life expectancy at age 15 were dubiously long. However, when the rate was adjusted, the estimate results in a value that falls within the range of modern hunter-gatherers, with significantly better fit to the observations. The rate-adjusted result of 32.2 years more likely represents the true life expectancy of the Jomon people at age 15, than the result without adjustment. Considering ∼7% rate of antemortem loss of the mandibular canine observed in our Jomon period sample, actual life expectancy at age 15 may have been as high as ∼35.3 years. © 2016 Wiley Periodicals, Inc.
Sabitha Gauni
2014-03-01
In the field of wireless communication, there is a constant demand for reliability, improved range, and speed. Many wireless systems such as OFDM, CDMA2000, and WCDMA address this demand when combined with multiple-input multiple-output (MIMO) technology. Due to the complexity of its signal processing, MIMO is highly expensive in terms of area consumption. In this paper, a MIMO receiver design is proposed to reduce the area consumed by the processing elements involved in complex signal processing. A solution for area reduction in the MIMO maximum likelihood estimation (MLE) receiver using sorted QR decomposition (SQRD) and a unitary transformation method is analyzed. It provides a unified approach, reduces inter-symbol interference (ISI), and offers better performance at low cost. The receiver pre-processor architecture based on minimum mean square error (MMSE) is compared while using iterative SQRD and the unitary transformation method for vectoring. Unitary transformations preserve the Hermitian nature of a matrix and the multiplication and addition relationships between operators, which reduces the computational complexity significantly. The dynamic range of all variables is tightly bounded, and the algorithm is well suited to fixed-point arithmetic.
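As a rough illustration of the sorted-QR-based detection the abstract describes, here is a minimal NumPy sketch of MMSE-SQRD successive interference cancellation. The function names, the QPSK constellation, and all parameters are illustrative assumptions, not the paper's hardware architecture.

```python
import numpy as np

def sorted_qr(H):
    """Sorted QR via modified Gram-Schmidt: at each step the remaining
    column with the smallest norm is processed next, so back-substitution
    later starts with the most reliable stream."""
    m, n = H.shape
    Q = H.astype(complex).copy()
    R = np.zeros((n, n), dtype=complex)
    perm = np.arange(n)
    for i in range(n):
        k = i + int(np.argmin(np.sum(np.abs(Q[:, i:]) ** 2, axis=0)))
        Q[:, [i, k]] = Q[:, [k, i]]
        R[:, [i, k]] = R[:, [k, i]]
        perm[[i, k]] = perm[[k, i]]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        for j in range(i + 1, n):
            R[i, j] = Q[:, i].conj() @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R, perm

def mmse_sqrd_detect(H, y, sigma2, constellation):
    """MMSE-SQRD detection with successive interference cancellation,
    using the standard extended-matrix trick for the MMSE criterion."""
    m, n = H.shape
    H_ext = np.vstack([H, np.sqrt(sigma2) * np.eye(n)])
    y_ext = np.concatenate([y, np.zeros(n)])
    Q, R, perm = sorted_qr(H_ext)
    z = Q.conj().T @ y_ext
    s = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):
        u = (z[i] - R[i, i + 1:] @ s[i + 1:]) / R[i, i]
        s[i] = constellation[np.argmin(np.abs(constellation - u))]  # slice
    x_hat = np.zeros(n, dtype=complex)
    x_hat[perm] = s  # undo the column sorting
    return x_hat
```

Because the sorting is only a column permutation and the Gram-Schmidt updates are unitary, the Hermitian structure the abstract mentions is preserved; an actual fixed-point VLSI implementation is, of course, a separate exercise.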
Gronewold, A. D.; Alameddine, I.; Anderson, R.; Wolpert, R.; Reckhow, K.
2008-12-01
The United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program requires that individual states assess the condition of surface waters and identify those which fail to meet ambient water quality standards. Waters failing to meet those standards must have a TMDL assessment conducted to determine the maximum allowable pollutant load which can enter the water without violating water quality standards. While most of the nearly 30,000 TMDL assessments completed since 1995 use mechanistic or empirical water quality models to forecast water quality conditions under alternative pollutant loading reduction scenarios, few, if any, also simulate water quality conditions under alternative climate change scenarios. As a result, model-based loading reduction requirements (which serve as the cornerstone for implementing water resource management plans, and initiating environmental management infrastructure projects), believed to improve water quality in impaired waters and reinstate their designated use, may misrepresent the actual required reduction when future climate change scenarios are considered. For example, recent research indicates a potential long term future increase in both the number of days between, and the intensity of, individual precipitation events. In coastal terrestrial and aquatic ecosystems, such climate conditions could lead to an increased accumulation of pollutants on the landscape between precipitation events, followed by a washoff event with a relatively high pollutant load. On the other hand, anticipated increases in average temperature and evaporation rate might not only reduce effective rainfall rates (resulting in less energy for transporting pollutants from the landscape) but also reduce the tidal exchange ratio in shallow estuaries (many of which are valuable recreational, commercial, and aesthetic natural resources). Here, we develop and apply a comprehensive watershed-scale model for simulating water quality in
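The accumulation-and-washoff dynamic described above can be sketched with a toy buildup/washoff model. The functional forms and every parameter below are hypothetical illustrations, not the authors' watershed-scale model.

```python
import math

def simulate_washoff(dry_days_between_events, rain_depths_mm,
                     buildup_rate=1.0, max_buildup=20.0, washoff_coef=0.18):
    """Toy buildup/washoff model: pollutant mass accumulates linearly on
    the landscape during dry spells (capped at max_buildup), and each storm
    removes a first-order fraction that grows with rainfall depth.
    All parameters are hypothetical."""
    loads, stored = [], 0.0
    for dry_days, depth in zip(dry_days_between_events, rain_depths_mm):
        stored = min(max_buildup, stored + buildup_rate * dry_days)
        washed = stored * (1.0 - math.exp(-washoff_coef * depth))
        stored -= washed
        loads.append(washed)
    return loads
```

A longer antecedent dry period leaves more mass available, so the same storm delivers a larger load — the mechanism by which the climate scenarios discussed above could invalidate a loading-reduction requirement derived under historical rainfall.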
MacMahon Tone, J
2009-06-01
The aim of this research was to determine whether an intensive, nurse-led clinic could achieve recommended vascular risk reduction targets in patients with type 2 diabetes as compared to standard diabetes management.
Litman, T.
2005-12-02
This paper identified 12 transportation solutions that provide a combination of economic, social and environmental benefits. The win-win strategies are cost-effective, technically feasible policy reforms that correct market distortions which promote inefficient travel patterns. In addition to energy conservation and reducing pollution and traffic congestion, the strategies save on road and parking facilities, promote traffic safety and consumer savings, and improve mobility for non-drivers. The basic economic principles that make these benefits possible were examined. The proposed solutions create a more equitable and efficient transportation system that supports economic development and helps achieve other strategic planning objectives. The strategies include planning reforms; pay-as-you-drive pricing; parking cash-out; parking pricing; road pricing; transportation demand management programs; transit and ride-share improvements; walking and cycling improvements; smart growth; freight transport management; car sharing and revenue-neutral tax shifting. The author claims that if fully implemented, these strategies could reduce motor vehicle emissions and other costs by 30 to 50 per cent, depending on geographic, demographic and economic conditions. It was suggested that the approach could help meet Kyoto emission reduction targets while promoting economic development and increasing consumer benefits. 14 refs., 4 tabs., 2 figs.
Mudunuru, M. K.; Nakshatrala, K. B.
2016-01-01
We present a robust computational framework for advective-diffusive-reactive systems that satisfies maximum principles, the non-negative constraint, and element-wise species balance property. The proposed methodology is valid on general computational grids, can handle heterogeneous anisotropic media, and provides accurate numerical solutions even for very high Péclet numbers. The significant contribution of this paper is to incorporate advection (which makes the spatial part of the differential operator non-self-adjoint) into the non-negative computational framework, and overcome numerical challenges associated with advection. We employ low-order mixed finite element formulations based on least-squares formalism, and enforce explicit constraints on the discrete problem to meet the desired properties. The resulting constrained discrete problem belongs to convex quadratic programming for which a unique solution exists. Maximum principles and the non-negative constraint give rise to bound constraints while element-wise species balance gives rise to equality constraints. The resulting convex quadratic programming problems are solved using an interior-point algorithm. Several numerical results pertaining to advection-dominated problems are presented to illustrate the robustness, convergence, and the overall performance of the proposed computational framework.
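As a toy analogue of the constrained discrete problem described above — bound constraints from the maximum principle and non-negativity, plus an equality constraint standing in for element-wise species balance — the sketch below solves a small convex QP. SciPy's SLSQP solver is used here in place of the paper's interior-point algorithm, and all matrices and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_correction(A, c_ls, a, b, lower=0.0, upper=1.0):
    """Find the concentration vector closest to an unconstrained
    least-squares solution c_ls (in the A-weighted norm), subject to
    bounds lower <= c <= upper (non-negativity / maximum principle)
    and a linear equality a @ c = b (a stand-in for element-wise
    species balance)."""
    n = len(c_ls)
    obj = lambda c: 0.5 * (c - c_ls) @ A @ (c - c_ls)
    jac = lambda c: A @ (c - c_ls)   # gradient for symmetric A
    res = minimize(obj, np.clip(c_ls, lower, upper), jac=jac,
                   method="SLSQP",
                   bounds=[(lower, upper)] * n,
                   constraints=[{"type": "eq", "fun": lambda c: a @ c - b}])
    return res.x

# Illustrative use: a discrete solution with an undershoot and an
# overshoot is corrected back into [0, 1] while conserving the total 1.5.
c = constrained_correction(np.eye(3), np.array([-0.2, 0.5, 1.3]),
                           np.ones(3), 1.5)
```

Because the objective is convex quadratic and the feasible set is a convex polytope, the corrected solution is unique, which mirrors the existence/uniqueness claim in the abstract.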
Papaioannou, Spyros; Afnan, Masoud; Girling, Alan J; Coomarasamy, Aravinthan; Ola, Bolarinde; Olufowobi, Olufemi; McHugo, Josephine M; Hammadieh, Nahed; Sharif, Khaldoun
2002-08-01
Selective salpingography enables us to measure the Fallopian tube perfusion pressure which, when high, can be effectively reduced with the use of transcervical guide-wire tubal catheterization. Whether fertility prognosis improves as a result is currently unknown. Our objective was to clarify the issue. Infertile women undergoing selective salpingography were classified into poor, mediocre and good tubal perfusion pressure groups, based on the distribution of tubal perfusion pressures in an unselected infertile population. Of 325 women, 150 (46.1%) were classified in the poor group and underwent guide-wire tubal catheterization. Complete pregnancy and tubal perfusion pressure data were available for 104 (69.4%) subjects. Following tubal catheterization, 29 women (group A) could be classified in the good, 25 (group B) in the mediocre, while 50 women (group C) remained in the poor tubal perfusion pressure group. Survival analysis showed that the pregnancy rate in group A was significantly higher than the rates in groups B and C (P = 0.036 and 0.005 respectively). Reductions of tubal perfusion pressures achieved with transcervical guide-wire tubal catheterization resulted in an improved fertility prognosis for women. Selective salpingography and tubal catheterization might have a wider role in the management of the infertile couple than currently believed.
Kraft, Matthew A.
2013-01-01
Research has shown that "last hired, first fired" policies maximize the number of teachers subject to reductions in force by eliminating those teachers that are lowest on the pay scale first. Until now, advocates of effectiveness-based reduction-in-force (RIF) policies could only point to simulated policy exercises as evidence of the…
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and on critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Dalgaard, T., E-mail: tommy.dalgaard@agrsci.dk [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Olesen, J.E.; Petersen, S.O.; Petersen, B.M.; Jorgensen, U.; Kristensen, T.; Hutchings, N.J. [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark); Gyldenkaerne, S. [Aarhus University, National Environmental Research Institute, Frederiksborgvej 399, DK-4000 Roskilde (Denmark); Hermansen, J.E. [Aarhus University, Department of Agroecology, Blichers Alle 20, P.O. Box 50, DK-8830 Tjele (Denmark)
2011-11-15
Greenhouse gas (GHG) emissions from agriculture are a significant contributor to total Danish emissions. Consequently, much effort is currently given to the exploration of potential strategies to reduce agricultural emissions. This paper presents results from a study estimating agricultural GHG emissions in the form of methane, nitrous oxide and carbon dioxide (including carbon sources and sinks, and the impact of energy consumption/bioenergy production) from Danish agriculture in the years 1990-2010. An analysis of possible measures to reduce the GHG emissions indicated that a 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable, including mitigation measures in relation to the handling of manure and fertilisers, optimization of animal feeding, cropping practices, and land use changes with more organic farming, afforestation and energy crops. In addition, the bioenergy production may be increased significantly without reducing the food production, whereby Danish agriculture could achieve a positive energy balance. - Highlights: > GHG emissions from Danish agriculture 1990-2010 are calculated, including carbon sequestration. > Effects of measures to further reduce GHG emissions are listed. > Land use scenarios for a substantially reduced GHG emission by 2050 are presented. > A 50-70% reduction of agricultural emissions by 2050 relative to 1990 is achievable. > Via bioenergy production Danish agriculture could achieve a positive energy balance. - Scenario studies of greenhouse gas mitigation measures illustrate the possible realization of CO{sub 2} reductions for Danish agriculture by 2050, sustaining current food production.
Kim, Hyoungjun; Han, Mooyoung; Lee, Ju Young
2012-05-01
Rainwater harvesting systems can not only supplement on-site water needs but also reduce water runoff and lessen downstream flooding. In this study, an existing analytic model for estimating runoff in urban areas is modified to provide a more economical and effective model for describing rainwater harvesting. This model calculates the rainfall-runoff reduction by taking into account the catchment, storage tank, and infiltration facility of a water harvesting system; the calculation is based on the water balance equation and on the cumulative distribution, probability density, and average rainfall-runoff functions. The model was applied to a water harvesting system at Seoul National University to verify its practicality. The derived model was useful for evaluating runoff reduction and for designing the storage tank capacity.
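A minimal daily water-balance sketch in the spirit of (but much simpler than) the analytic model above; the runoff coefficient, tank sizes, and demand are made-up illustrative values.

```python
def tank_runoff_reduction(rain_mm, catchment_m2, tank_m3, demand_m3_per_day,
                          runoff_coef=0.9):
    """Daily water balance of a rainwater tank: captured runoff fills the
    tank, on-site demand draws it down, and any excess spills as overflow.
    Returns the fraction of total runoff kept out of the drainage system.
    All parameter values are illustrative."""
    storage = total_runoff = total_overflow = 0.0
    for rain in rain_mm:
        inflow = runoff_coef * rain / 1000.0 * catchment_m2   # m3 captured
        total_runoff += inflow
        storage = max(0.0, storage - demand_m3_per_day)       # daily use
        storage += inflow
        if storage > tank_m3:                                 # spill excess
            total_overflow += storage - tank_m3
            storage = tank_m3
    return (1.0 - total_overflow / total_runoff) if total_runoff else 0.0
```

Running the balance for a larger tank never yields a smaller runoff reduction, which is the qualitative trade-off the model above is used to quantify when sizing storage capacity.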
Grözinger, Gerd, E-mail: gerd.groezinger@med.uni-tuebingen.de [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany); Wiesinger, Benjamin; Schmehl, Jörg; Kramer, Ulrich [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany); Mehra, Tarun [Department of Dermatology, University of Tübingen (Germany); Grosse, Ulrich; König, Claudius [Department of Diagnostic Radiology, Department of Radiology, University of Tübingen (Germany)
2013-12-01
Purpose: The portosystemic pressure gradient is an important factor defining prognosis in hepatic disease. However, noninvasive prediction of the gradient and the possible reduction by establishment of a TIPSS is challenging. A cohort of patients receiving TIPSS was evaluated with regard to imaging features of collaterals in cross-sectional imaging and the achievable reduction of the pressure gradient by establishment of a TIPSS. Methods: In this study 70 consecutive patients with cirrhotic liver disease were retrospectively evaluated. Patients received either CT or MR imaging before invasive pressure measurement during TIPSS procedure. Images were evaluated with regard to esophageal and fundus varices, splenorenal collaterals, short gastric vein and paraumbilical vein. Results were correlated with Child stage, portosystemic pressure gradient and post-TIPSS reduction of the pressure gradient. Results: In 55 of the 70 patients TIPSS reduced the pressure gradient to less than 12 mmHg. The pre-interventional pressure and the pressure reduction were not significantly different between Child stages. Imaging features of varices and portosystemic collaterals did not show significant differences. The only parameter with a significant predictive value for the reduction of the pressure gradient was the pre-TIPSS pressure gradient (r = 0.8, p < 0.001). Conclusions: TIPSS allows a reliable reduction of the pressure gradient even at high pre-interventional pressure levels and a high collateral presence. In patients receiving TIPSS the presence and the characteristics of the collateral vessels seem to be too variable to draw reliable conclusions concerning the portosystemic pressure gradient.
Zhang, Shaojun; Wu, Ye; Zhao, Bin; Wu, Xiaomeng; Shu, Jiawei; Hao, Jiming
2017-01-01
The Yangtze River Delta (YRD) region is one of the most prosperous and densely populated regions in China and is facing tremendous pressure to mitigate vehicle emissions and improve air quality. Our assessment reveals that mitigating vehicle emissions of NOx will be more difficult than reducing the emissions of other major vehicular pollutants (e.g., CO, HC and PM2.5) in the YRD region. Even in Shanghai, where the emission controls implemented are more stringent than in Jiangsu and Zhejiang, we observed little to no reduction in NOx emissions from 2000 to 2010. Emission-reduction targets for HC, NOx and PM2.5 are determined using a response surface modeling tool for better air quality. We design city-specific emission control strategies for three vehicle-populated cities in the YRD region: Shanghai, and Nanjing and Wuxi in Jiangsu. Our results indicate that even if stringent emission controls consisting of the Euro 6/VI standards, limits on vehicle population and usage, and the scrappage of older vehicles are applied, Nanjing and Wuxi will not be able to meet the NOx emissions target by 2020. Therefore, additional control measures are proposed for Nanjing and Wuxi to further mitigate NOx emissions from heavy-duty diesel vehicles.
Eva Janousova
2016-08-01
We examined how penalized linear discriminant analysis with resampling, a supervised, multivariate, whole-brain reduction technique, can help schizophrenia diagnostics and research. In an experiment with magnetic resonance brain images of 52 first-episode schizophrenia patients and 52 healthy controls, this method allowed us to select brain areas relevant to schizophrenia, such as the left prefrontal cortex, the anterior cingulum, the right anterior insula, the thalamus, and the hippocampus. Nevertheless, the classification performance based on such reduced data was not significantly better than classification of data reduced by mass-univariate selection using a t-test or by unsupervised multivariate reduction using principal component analysis. Moreover, we found no important influence of the type of imaging features (local deformations or grey matter volumes) or of the classification method (linear discriminant analysis or linear support vector machines) on the classification results. However, we ascertained a significant effect of the cross-validation setting on classification performance: classification results were overestimated even though the resampling was performed during the selection of brain imaging features. It is therefore critically important to perform cross-validation in all steps of the analysis (not only during classification) when there is no external validation set, to avoid optimistically biased results in classification studies.
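The study's caution about cross-validation can be illustrated with a small NumPy sketch in which the univariate feature selection is refitted inside every training fold. The nearest-centroid classifier here is a simple stand-in for the penalized discriminant, and the data are synthetic noise.

```python
import numpy as np

def cv_accuracy_with_inner_selection(X, y, k=20, n_folds=5, seed=0):
    """Cross-validation in which the univariate (t-test-like) feature
    selection is recomputed on each training fold, so the held-out fold
    never influences which features are kept."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    accs = []
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        Xtr, ytr, Xte, yte = X[train], y[train], X[fold], y[fold]
        # selection scores computed from the TRAINING fold only
        m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
        score = np.abs(m1 - m0) / (Xtr.std(0) + 1e-12)
        keep = np.argsort(-score)[:k]
        # nearest-centroid rule on the selected features
        c0 = Xtr[ytr == 0][:, keep].mean(0)
        c1 = Xtr[ytr == 1][:, keep].mean(0)
        pred = (np.linalg.norm(Xte[:, keep] - c1, axis=1)
                < np.linalg.norm(Xte[:, keep] - c0, axis=1)).astype(int)
        accs.append(float((pred == yte).mean()))
    return float(np.mean(accs))
```

On pure-noise data this estimate stays near chance; selecting features on all subjects first and cross-validating only the classifier afterwards would instead report inflated accuracy, which is exactly the bias the study quantifies.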
Wang, Mingyue; Gao, Yu; Peng, Yang; Zhao, Junyu; Chen, Xixue; Zhu, Xuejun
2016-03-01
Glucocorticoids are the first-line treatment for pemphigus vulgaris. Among 140 patients receiving systemic glucocorticoids, 124 patients achieved complete remission off prednisone, or on a prednisone dose of ≤10 mg/day, for 6 months or more. The mean average steroid-controlling doses were 0.65, 0.62, 0.80, 1.08 and 1.38 mg/kg per day for the mucosal-dominant patients and the mild, moderate, severe and extensive cutaneous-involved patients, respectively (P pemphigus vulgaris within 3-6 years.
Odenberger, M. [Energy Conversion, Department of Energy and Environment, Chalmers University of Technology, SE 412 96 Goeteborg (Sweden); Johnsson, F. [Energy Conversion, Department of Energy and Environment, Chalmers University of Technology, SE 412 96 Goeteborg (Sweden)]. E-mail: filip.johnsson@me.chalmers.se
2007-04-15
This paper explores how investment in the UK electricity generation sector can contribute to the UK goal of reducing CO{sub 2} emissions by 60% by the year 2050 relative to the 1990 emissions. Considering likely development of the transportation sector and industry over the period, i.e. a continued demand growth and dependency on fossil fuels and electricity, the analysis shows that this implies CO{sub 2} emission reductions of up to 90% by 2050 for the electricity sector. Emphasis is put on limitations imposed by the present system, described by a detailed database of existing power plants, together with meeting targets on renewable electricity generation (RES), including assumptions on gas acting as backup technology for intermittent RES. In particular, it is investigated to what extent new fossil fuelled and nuclear power is required to meet the year 2050 demand as specified by the Royal Commission on Environmental Pollution (RCEP). In addition, the number of sites required for centralized electricity generation (large power plants) is compared with the present number of sites. A simulation model was developed for the analysis. The model applies the UK national targets on RES, taken from the Renewables Obligation (RO) for 2010 and 2020 and potentials given by the RCEP for 2050, together with assumed technical lifetimes of the power plants of the existing system, and thus links this system with targets for the years 2010, 2020 and 2050. The results illustrate the problem of lock-in effects due to long capital stock turnover times, which can lead either to political difficulty in meeting targets in established policy or to costly early retirement of power plants (stranded assets) to comply with emission goals prescribed in the Kyoto targets or the 60% emission reduction goal. Assuming typical technical lifetimes of the power plants, it can be concluded that the present electricity generation system will continue to play a significant role for several decades, generating about 50% of projected
Cui, Zhentao; Wang, Shuguang; Zhang, Yihe; Cao, Minhua
2014-12-01
Porous NiO/NiCo2O4 nanotubes were prepared via a coaxial electrospinning technique followed by an annealing treatment. The resultant NiO/NiCo2O4 hybrid was developed as a highly efficient electrocatalyst, which exhibits significantly enhanced electrocatalytic activity, long-term operational stability, and tolerance to the crossover effect compared with NiO nanofibers, NiCo2O4 nanofibers, and commercial Pt(20%)/C for the oxygen reduction reaction (ORR) in an alkaline environment. The excellent electrocatalytic performance may be attributed to the unique microstructure of the porous NiO/NiCo2O4 nanotubes, including the heterogeneous hybrid structure, the open porous tubular structure, and the good dispersion of the two components. Moreover, coaxial electrospinning proves to be a straightforward and efficient route to nanomaterials with tubular architectures and can be used for large-scale production of fuel cell catalysts.
Scogin, J. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-24
Thermogravimetric analysis with mass spectroscopy of the evolved gas (TGA-MS) is used to quantify the moisture content of materials in the 3013 destructive examination (3013 DE) surveillance program. Salts frequently present in the 3013 DE materials volatilize in the TGA and condense in the gas lines just outside the TGA furnace. The buildup of condensate can restrict the flow of purge gas and affect both the TGA operations and the mass spectrometer calibration. Removal of the condensed salts requires frequent maintenance and subsequent calibration runs to keep the moisture measurements by mass spectroscopy within acceptable limits, creating delays in processing samples. In this report, the feasibility of determining the total moisture from TGA-MS measurements at a lower temperature is investigated, and an analysis temperature that reduces the complications caused by the condensation of volatile materials is determined. The analysis shows that an excellent prediction of the presently measured total moisture value can be made using only the data generated up to 700 °C, and there is a sound physical basis for this estimate. It is recommended that the maximum temperature of the TGA-MS determination of total moisture for the 3013 DE program be reduced from 1000 °C to 700 °C. It is also suggested that cumulative moisture measurements at 550 °C and 700 °C be substituted for the measured value of total moisture in the 3013 DE database. Using these raw values, any of the predictions of the total moisture discussed in this report can be made.
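A hypothetical illustration of the report's recommendation: if the cumulative moisture released by 700 °C tracks the 1000 °C total linearly, a least-squares calibration lets a lower-temperature run predict the historically reported total. All numbers below are invented for the sketch.

```python
import numpy as np

# Invented calibration pairs: cumulative moisture (wt%) released by
# 700 deg C versus the full 1000 deg C total for the same runs.
cum_700 = np.array([0.41, 0.88, 1.30, 2.05, 2.96])
total_1000 = np.array([0.45, 0.95, 1.41, 2.20, 3.15])

# Least-squares line: total = slope * cum_700 + intercept
slope, intercept = np.polyfit(cum_700, total_1000, 1)

# Predict the 1000 deg C total for a new run stopped at 700 deg C
predicted_total = slope * 1.60 + intercept
```

A calibration like this would let the lower-temperature values stand in for the legacy total-moisture column in the database while remaining comparable to historical entries.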
Grob, Koni
2005-01-01
The most important initiatives taken in Switzerland to reduce exposure of consumers to acrylamide are the separate sale of potatoes low in reducing sugars for roasting and frying, the optimization of the raw material and preparation of french fries, and campaigns to implement suitable preparation methods in the gastronomy and homes. Industry works on improving a range of other products. Although these measures can reduce high exposures by some 80%, they have little effect on the background exposure resulting from coffee, bread, and numerous other products for which no substantial improvement is in sight. At this stage, improvements should be achieved by supporting voluntary activity rather than legal limits. Committed and consistent risk communication is key, and the support of improvements presupposes innovative approaches.
Huang, Pei; Li, Liang; Kotay, Shireen Meher; Goel, Ramesh
2014-04-15
Solids reduction in activated sludge processes (ASP) at source using process manipulation has been researched widely over the last two decades. However, the absence of a nutrient removal component, a limited understanding of the organic carbon, and limited information on the key microbial communities in solids-minimizing ASP preclude the widespread acceptance of sludge-minimizing processes. In this manuscript, we report simultaneous solids reduction through anaerobiosis along with nitrogen and phosphorus removal. The manuscript also reports a carbon mass balance using a stable isotope of carbon and the microbial ecology of nitrifiers and polyphosphate-accumulating organisms (PAOs). Two laboratory-scale reactors were operated in anaerobic-aerobic-anoxic (A(2)O) mode. One reactor was run in the standard mode (hereafter called the control-SBR), simulating a conventional A(2)O type of activated sludge process, and the second reactor was run in the sludge-minimizing mode (called the modified-SBR). Unlike other research efforts, in which the sludge-minimizing reactor was maintained at nearly infinite solids retention time (SRT), the modified-SBR in this research was operated at a very small solids yield rather than at infinite SRT in order to sustain efficient nutrient removal. Both reactors showed consistent NH3-N, phosphorus and COD removals over a period of 263 days. Both reactors also showed active denitrification during the anoxic phase even though no organic carbon source was available during this phase, suggesting the presence of denitrifying PAOs (DNPAOs). The observed solids yield in the modified-SBR was 60% less than that in the control-SBR. The specific oxygen uptake rate (SOUR) of the modified-SBR was almost 44% higher than that of the control-SBR under identical feeding conditions, but was nearly the same for both reactors under fasting conditions. The modified-SBR showed greater diversity of ammonia-oxidizing bacteria and PAOs compared to the control-SBR. The diversity of PAOs
Mitchell, N. A.; Gran, K. B.; Cho, S. J.; Dalzell, B. J.; Kumarasamy, K.
2015-12-01
A combination of factors, including climate change, land clearing, and artificial drainage, has increased stream flows in many agricultural regions and the rates at which channel banks and bluffs are eroded. Increasing erosion rates within the Minnesota River Basin have contributed to higher sediment-loading rates, excess turbidity levels, and increased sedimentation rates in Lake Pepin further downstream. Water storage sites (e.g., wetlands) have been discussed as a means to address these issues. This study uses the Soil and Water Assessment Tool (SWAT) to assess a range of water retention site (WRS) implementation scenarios in the Le Sueur watershed in south-central Minnesota, a subwatershed of the Minnesota River Basin. Sediment loading from bluffs was assessed through an empirical relationship developed from gauging data. Sites were delineated as topographic depressions with specific land uses, minimum areas (3000 m2), and high compound topographic index values. Contributing areas for the WRS were manually measured and used with different site characteristics to create 210 initial WRS scenarios. A generalized relationship between WRS area and contributing area was identified from these measurements and used with different site characteristics (e.g., depth, hydraulic conductivity (K), and placement) to create 225 generalized WRS scenarios. Reductions in peak flow volumes and sediment-loading rates are generally maximized by placing sites with high K values in the upper half of the watershed. High K values allow sites to lose more water through seepage, emptying their storage between precipitation events and preventing frequent overflowing. Reductions in peak flow volumes and sediment-loading rates also level off at high WRS extents due to the decreasing frequency of high-magnitude events. The generalized WRS scenarios were also used to create a simplified empirical model capable of generating peak flows and sediment-loading rates from near
Angeliki Brouzgou
2016-10-01
Low temperature fuel cells (LTFCs) are considered clean energy conversion systems and are expected to help address society's energy and environmental problems. To date, the oxygen reduction reaction (ORR) is one of the main factors hindering the commercialization of LTFCs because of its slow kinetics and high overpotential, which cause major voltage loss and poor short-term stability. To provide enhanced activity and minimize losses, precious metal catalysts (containing expensive and scarcely available platinum) are used in abundance as cathode materials. Moreover, research is devoted to reducing the cost associated with Pt-based cathode catalysts by identifying and developing Pt-free alternatives. However, so far none of them has provided acceptable performance and durability with respect to Pt electrocatalysts. By adopting new preparation strategies and by enhancing and exploiting synergetic and multifunctional effects, some elements such as transition metals supported on highly porous carbons have exhibited reasonable electrocatalytic activity. This review mainly focuses on very recent progress in novel carbon-based materials for the ORR, including: (i) development of three-dimensional structures; (ii) synthesis of novel hybrid (metal oxide-nitrogen-carbon) electrocatalysts; (iii) use of alternative raw precursors characterized by three-dimensional structure; and (iv) adoption of co-doping methods for novel metal-nitrogen-doped-carbon electrocatalysts. Among the examined materials, reduced graphene oxide-based hybrid electrocatalysts exhibit both excellent activity and long-term stability.
Huang, Hongwen; Li, Kan; Chen, Zhao; Luo, Laihao; Gu, Yuqian; Zhang, Dongyan; Ma, Chao; Si, Rui; Yang, Jinlong; Peng, Zhenmeng; Zeng, Jie
2017-06-21
The search for active and sustainable electrocatalysts for the oxygen reduction reaction (ORR) is of great importance for the industrial application of fuel cells. Here, we report a remarkable ORR catalyst with both excellent mass activity and durability based on sub-2 nm thick Rh-doped Pt nanowires, which combine the merits of high utilization efficiency of Pt atoms, an anisotropic one-dimensional nanostructure, and doping of Rh atoms. Compared with a commercial Pt/C catalyst, the Rh-doped Pt nanowires/C catalyst shows 7.8- and 5.4-fold enhancements in mass activity and specific activity, respectively. The combination of extended X-ray absorption fine structure analysis and density functional theory calculations reveals that the compressive strain and ligand effect in Rh-doped Pt nanowires optimize the adsorption energy of hydroxyl and in turn enhance the specific activity. Moreover, even after 10,000 cycles of an accelerated durability test under O2, the Rh-doped Pt nanowires/C catalyst exhibits a drop of only 9.2% in mass activity, against a decrease of 72.3% for commercial Pt/C. The improved durability can be rationalized by the increased vacancy formation energy of Pt atoms in Rh-doped Pt nanowires.
张玖霞; 方杰
2011-01-01
Taking the notable results of Meihekou City's intensive, large-scale management of arable land as its starting point, this paper analyzes how those results were achieved: government guidance to promote land transfer, preferential support policies to create conditions for scale operation, and accelerated transfer of rural labor to expand the space for large-scale operation. It then addresses the non-standard practices that remain in Meihekou's land transfer process and, from the perspective of maximizing the efficiency of land use, proposes measures for how Meihekou can carry out large-scale land operation well.
Jarman, Del W.; Boyland, Lori G.
2011-01-01
In recent years, economic downturn and changes to Indiana's school funding have resulted in significant financial reductions in General Fund allocations for many of Indiana's public school corporations. The main purpose of this statewide study is to examine the possible impacts of these budget reductions on class size and student achievement. This…
Leighty, Wayne Waterman
California's "80in50" target for reducing greenhouse gas emissions to 80 percent below 1990 levels by the year 2050 is based on climate science rather than technical feasibility of mitigation. As such, it raises four fundamental questions: is this magnitude of reduction in greenhouse gas emissions possible, what energy system transitions over the next 40 years are necessary, can intermediate policy goals be met on the pathway toward 2050, and does the path of transition matter for the objective of climate change mitigation? Scenarios for meeting the 80in50 goal in the transportation sector are modelled. Specifically, earlier work defining low carbon transport scenarios for the year 2050 is refined by incorporating new information about biofuel supply. Then transition paths for meeting 80in50 scenarios are modelled for the light-duty vehicle sub-sector, with important implications for the timing of action, rate of change, and cumulative greenhouse gas emissions. One aspect of these transitions -- development in the California wind industry to supply low-carbon electricity for plug-in electric vehicles -- is examined in detail. In general, the range of feasible scenarios for meeting the 80in50 target is narrow enough that several common themes are apparent: electrification of light-duty vehicles must occur; continued improvements in vehicle efficiency must be applied to improving fuel economy; and energy carriers must de-carbonize to less than half of the carbon intensity of gasoline and diesel. Reaching the 80in50 goal will require broad success in travel demand reduction, fuel economy improvements and low-carbon fuel supply, since there is little opportunity to increase emission reductions in one area if we experience failure in another. Although six scenarios for meeting the 80in50 target are defined, only one also meets the intermediate target of reducing greenhouse gas emissions to 1990 levels by the year 2020. Furthermore, the transition path taken to reach any
Campos, Pedro T.; Teixeira, Marcos A.; Kissel, Johannes [Gesellschaft Fuer Technische Zusammenarbeit (GTZ) (Germany)
2010-07-01
In the current context of encouraging sustainable development, wind energy plays an important role in the spread of renewable energy sources. In this paper, the possibilities and difficulties of large-scale wind power integration are evaluated, specifically in the northeastern region of Brazil. Based on the seasonal complementarity between wind and hydropower, scenarios are set out in which the maximum participation of only these two sources in the region's energy supply is sought. Aiming to evaluate the possibilities of a completely sustainable regional energy supply, the northeast subsystem is treated as isolated, excluding, in principle, imports and exports. The energy storage capacity of the region's reservoirs is therefore used as a key factor, combined with data on the seasonal availability of the sources and the region's annual energy consumption. (author)
M Lehnert
2017-04-01
The purpose of the study was to analyse the changes in muscle strength, power, and somatic parameters in elite volleyball players after a specific pre-season training programme aimed at improving jumping and strength performance and injury prevention. Twelve junior female volleyball players participated in an 8-week training programme. Anthropometric characteristics, isokinetic peak torque (PT) in single-joint knee flexion (H) and extension (Q) at 60°/s and 180°/s, counter movement jump (CMJ), squat jump (SJ), and reactive strength index (RSI) were measured before and after the intervention. Significant moderate effects were found in flexor concentric PT at 60°/s and at 180°/s in the dominant leg (DL) (18.3±15.1%, likely; 17.8±11.2%, very likely) and in extensor concentric PT at 180°/s (7.4±7.8%, very likely) in the DL. In the non-dominant leg (NL), significant moderate effects were found in flexor concentric PT at 60°/s and at 180°/s (13.7±11.3%, likely; 13.4±8.0%, very likely) and in extensor concentric PT at 180°/s (10.7±11.5%, very likely). Small to moderate changes were observed for H/QCONV in the DL at 60°/s and 180°/s (15.9±14.1%; 9.6±10.4%, both likely) and in the NL at 60°/s (moderate change, 9.6±11.8%, likely), and small to moderate decreases were detected for H/QFUNC at 180°/s in both the DL and NL (-7.0±8.3%, likely; -9.5±10.0%, likely). Training-induced changes in jumping performance were trivial (for RSI) to small (for CMJ and SJ). The applied pre-season training programme induced a number of positive changes in physical performance and risk of injury, despite a lack of changes in body mass and composition. CITATION: Lehnert M, Sigmund M, Lipinska P et al. Training-induced changes in physical performance can be achieved without body mass reduction after eight weeks of a strength and injury prevention oriented programme in female volleyball players. Biol Sport. 2017;34(2):205-213.
Lobo-Ferreira, João-Paulo
2017-04-01
region. It included the evaluation of farmers' willingness to collaborate and pay for the use of Managed Aquifer Recharge (MAR) as a nature-based solution to minimize drought impacts and to manage flood risk in the area. Close cooperation has been established between the EIP Water Action Group MARsolutions and FP7 MARSOL INNO_DEMO (http://www.eip-water.eu/close-cooperation-between-eip-marsolutions-and-fp7-marsol-inno-demo-project). A LNEC report is available at http://www.eip-water.eu/sites/default/files/Rel%20101_15.pdf, presenting a descriptive analysis of the responses to a survey about protection and preservation of groundwater conducted with a sample of Portuguese farmers in the Algarve region. It is possible that the Direção Regional de Agricultura e Pescas do Algarve is willing to participate in the implementation of the nature-based solutions, as they will decrease the risk of agricultural losses. The Portuguese Water Agency has precipitation and flow bulletins for the Algarve, e.g. for the Faro and Albufeira areas, at http://snirh.pt/index.php?idMain=1&idItem=1.1. Concerning the climate change impact on the Querença-Silves (QS) aquifer, the LNEC/University of Algarve MARSOL project teams presented descriptions of groundwater recharge and of flow simulations for future scenarios. For example, Stigter et al. (2009, 2014) summarized the conclusions achieved: "(1) (2020-2050) changes in recharge, particularly due to a reduction in autumn rainfall, result in a longer dry period, and more frequent droughts are predicted at the QS aquifer; (2) toward the end of the century (2069-2099), results indicate a significant decrease (mean 25%) in recharge at the QS aquifer, with a high decrease in absolute terms (mean 134 mm/year); and (3) scenario modelling of groundwater flow shows its response to the predicted decreases in recharge and increases in pumping rates, with strongly reduced outflow into the coastal wetlands, whereas changes due to sea level rise are negligible". These
Kim, Y B; Kim, H W; Song, M K; Rhee, M S
2015-05-18
We developed a novel decontamination method to inactivate Escherichia coli O157:H7 on radish seeds without adversely affecting seed germination or product quality. The use of heat (55, 60, and 65 °C) combined with relative humidity (RH; 25, 45, 65, 85, and 100%) for 24 h was evaluated for effective microbial reduction and preservation of seed germination rates. A significant two-way interaction of heat and RH was observed for both microbial reduction and germination rate (P < 0.05). The selected heat/RH combination completely inactivated E. coli O157:H7 on seeds (7.0 log CFU/g reduction) and had no significant effect on the germination rate (85.4%; P > 0.05) or product quality. The method uses only heat and relative humidity without chemicals, and is thus applicable as a general decontamination procedure in sprout-producing plants where the use of growth chambers is the norm.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
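The margin criterion described above can be illustrated for the simplest Bayesian network classifier, naive Bayes. The sketch below is illustrative only (plain NumPy, random toy parameters, not the authors' conjugate-gradient implementation): it keeps the parameters normalized as probabilities and computes the multiclass log-likelihood margin, i.e. log P(y|x) minus the best competing class score.

```python
import numpy as np

# Hypothetical toy setup: a naive Bayes classifier over discrete features,
# with class priors P(c) and class-conditional tables P(x_j = v | c).
rng = np.random.default_rng(0)
n_classes, n_features, n_values = 3, 4, 2
log_prior = np.log(np.full(n_classes, 1.0 / n_classes))
theta = rng.random((n_classes, n_features, n_values))
theta /= theta.sum(axis=2, keepdims=True)   # normalization constraints kept intact
log_theta = np.log(theta)

def class_log_posteriors(x):
    """Unnormalized log P(c | x) for a discrete feature vector x."""
    return log_prior + sum(log_theta[:, j, x[j]] for j in range(n_features))

def multiclass_margin(x, y):
    """Log-likelihood margin: log P(y|x) - max over c != y of log P(c|x).
    A positive margin means the sample is classified correctly."""
    scores = class_log_posteriors(x)
    rival = np.max(np.delete(scores, y))
    return float(scores[y] - rival)

x, y = np.array([0, 1, 1, 0]), 1
print(multiclass_margin(x, y))
```

Maximum margin training would then adjust `theta` (under the normalization constraints) to push these margins positive and large, rather than maximizing likelihood.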
Iyer, Gokul C.; Clarke, Leon E.; Edmonds, James A.; Kyle, Gordon P.; Ledna, Catherine M.; McJeon, Haewon C.; Wise, M. A.
2017-05-01
The United States has articulated a deep decarbonization strategy for achieving a reduction in economy-wide greenhouse gas (GHG) emissions of 80% below 2005 levels by 2050. Achieving such deep emissions reductions will entail a major transformation of the energy system and of the electric power sector in particular. This study uses a detailed state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA) to demonstrate pathways for the evolution of the U.S. electric power sector that achieve 80% economy-wide reductions in GHG emissions by 2050. The pathways presented in this report are based on feedback received during a workshop of experts organized by the U.S. Department of Energy's Office of Energy Policy and Systems Analysis. Our analysis demonstrates that achieving deep decarbonization by 2050 will require substantial decarbonization of the electric power sector, resulting in increased deployment of zero-carbon and low-carbon technologies such as renewables and carbon capture, utilization, and storage. The results also show that the degree to which the electric power sector will need to decarbonize, and low-carbon technologies will need to deploy, depends on the nature of technological advances in the energy sector, the ability of end-use sectors to electrify, and the level of electricity demand.
Wang Run, E-mail: rwang@iue.ac.c [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Liu Wenjuan [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen 361021 (China); Xiao Lishan [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Liu Jian; Kao, William [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, Xiamen 361021 (China)
2011-05-15
Following the announcement of China's 2020 national target for the reduction of the intensity of carbon dioxide emissions per unit of GDP by 40-45% compared with 2005 levels, Chinese provincial governments prepared to restructure provincial energy policy and plan their contribution to realizing the State reduction target. Focusing on Fujian and Anhui provinces as case studies, this paper reviews two contrasting policies as means for meeting the national reduction target. The coastal province of Fujian proposes to do so largely through the development of nuclear power, whilst the coal-rich province of Anhui proposes to do so through its energy consumption rising at a lower rate than its GDP. In both cases renewable energy makes up a small proportion of the proposed 2020 energy structures. The conclusion discusses in depth concerns about nuclear power policy, energy efficiency, energy consumption strategy, and problems in developing renewable energy. - Research Highlights: → We review two contrasting policies as means for meeting the national carbon emission reduction target in two provinces. → Scenario review of energy structure in Fujian and Anhui Provinces to 2020. → We discuss concerns about nuclear power policy, energy efficiency, energy consumption strategy and problems in developing renewable energy.
Natsui, Masanori; Tamakoshi, Akira; Endoh, Tetsuo; Ohno, Hideo; Hanyu, Takahiro
2017-04-01
A magnetic-tunnel-junction (MTJ)-based video coding hardware with an MTJ-write-error-rate relaxation scheme as well as a nonvolatile storage capacity reduction technique is designed and fabricated in a 90 nm MOS and 75 nm perpendicular MTJ process. The proposed MTJ-oriented dynamic error masking scheme suppresses the effect of write operation errors on the operation result of the LSI, which increases the acceptable MTJ write error rate by up to 7.8 times with less than 6% area overhead, while achieving a 79% power reduction compared with that of a static-random-access-memory-based implementation.
Fischer, Andreas; Aagaard Madsen, Helge
2014-01-01
The maximum fatigue load reduction potential when using trailing edge flaps on megawatt wind turbines was explored. For this purpose an ideal feed-forward control algorithm using the relative velocity and angle of attack at the blade to control the loads was implemented. The algorithm was applied ... the blade load location was investigated. When the algorithm was applied to measured time series, a load reduction of 23% was achieved, which is still promising but significantly lower than the value achieved in computations.
Schumacher, Katja; Graichen, Jakob; Healy, Sean [Oeko-Institut, Inst. fuer Angewandte Oekologie e.V., Freiburg im Breisgau (Germany); Schleich, Joachim; Duscha, Vicki [Fraunhofer-Institut fuer Systemtechnik und Innovationsforschung (ISI), Karlsruhe (Germany)
2011-08-15
This report explores the environmental and economic effects of the pledges submitted by industrialized and major developing countries for 2020 under the Copenhagen Accord and provides an in-depth comparison with results arrived at in other model analyses. Two scenarios reflect the lower ("weak") and upper ("ambitious") bounds of the Copenhagen pledges. In addition, two scenarios in accordance with the IPCC range for reaching a 2 °C target are analyzed, with industrialized countries in aggregate reducing their CO2 emissions by 30% in 2020 compared to 1990 levels. For all four policy scenarios the effects of emission paths leading to a global reduction target of 50% below 1990 levels in 2050 are also simulated for 2030. In addition, a separate scenario is carried out which estimates the costs of an unconditioned EU 30% emission reduction target, i.e. where the EU adopts a 30% emission reduction target in 2020 (rather than a 20% reduction target), while all other countries stick with their "weak" pledges. Not included in the calculations is possible financial support for developing countries from industrialized countries, as currently discussed in the climate change negotiations and laid out in the Copenhagen Accord. (orig.)
Vandyke, Barbara Adrienne
2009-01-01
For too long, educators have been left to their own devices when implementing educational policies, initiatives, strategies, and interventions, and they have longed to see the full benefits of these programs, especially in reading achievement. However, instead of determining whether a policy/initiative is working, educators have been asked to…
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
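The core of MAF analysis can be sketched numerically. The toy below (plain NumPy; all names and the synthetic transect data are illustrative, and it uses a regular unit shift rather than the paper's irregular sampling or the kriging step) solves the generalized eigenproblem S_d w = rho S w, where S is the covariance of the multivariate data and S_d the covariance of spatial differences; the smallest rho yields the most spatially autocorrelated factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3                                        # samples along a transect, variables
smooth = np.cumsum(rng.normal(size=(n, k)), axis=0)  # spatially smooth (random walk) signal
data = smooth + rng.normal(scale=0.5, size=(n, k))   # plus sampling noise

x = data - data.mean(axis=0)
S = np.cov(x, rowvar=False)                # covariance of the data
Sd = np.cov(x[1:] - x[:-1], rowvar=False)  # covariance of unit-shift spatial differences

# Generalized eigenproblem Sd w = rho S w, solved via S^{-1} Sd.
rho, W = np.linalg.eig(np.linalg.solve(S, Sd))
order = np.argsort(rho.real)               # ascending rho = decreasing spatial autocorrelation
mafs = (x @ W.real)[:, order]              # MAF 1 first

def lag1_autocorr(v):
    v = v - v.mean()
    return float((v[1:] * v[:-1]).sum() / (v * v).sum())

print(lag1_autocorr(mafs[:, 0]), lag1_autocorr(mafs[:, -1]))
```

The first MAF concentrates the spatially coherent signal, and the last concentrates the noise; this ordering by autocorrelation rather than variance is what distinguishes MAF from ordinary factor analysis.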
Kim, Hyeyeon
2017-06-30
A paradigm shift towards noninvasive body contouring has occurred over the past few years. Radiofrequency (RF) is one popular treatment method. Noncontact-type RF systems with frequencies in the tens of megahertz represent a novel approach. The current pilot study investigated the efficacy of an interesting combination of extracorporeal shock wave therapy (ESWT) and an apoptosis-inducing RF (AiRF) system for circumferential reduction. Twenty-seven females, ages ranging from 13 to 69 years (mean age 37.96 years), participated in the study. They were assigned to two treatment-based groups: Group A (n=19) and Group B (n=8). A voluntary daily dietary restriction plan of 500 kcal was put in place for all subjects. A combination of two different devices was used: an extracorporeal shock wave therapy (ESWT) system and a 27.12 MHz AiRF system. Either 4 (n=28) or 6 sessions (n=19) were given, one week apart. In Group A, the ESWT was applied before the RF, with the reverse order of application in Group B. Weight and waist circumference were noted at baseline, then one week after the 4th and the 6th treatment sessions, at which points clinical photography was also obtained. All patients showed statistically significant waist circumferential loss in both the 4- and 6-week treated groups: Group A, 6.3 cm and 8.8 cm; Group B, 5.9 cm and 6.4 cm, respectively. Greater circumference loss tended to be seen in Group A in both groups, but without statistical significance. No patient complained of pain during or after the treatment sessions, and there were no adverse events. This pilot study showed that the combination of ESWT and AiRF was safe and effective for significant waist circumferential reduction. The results tended to be better when ESWT was applied before AiRF, although the difference was not significant.
Lucas, C P; Patton, S; Stepke, T; Kinhal, V; Darga, L L; Carroll-Michals, L; Spafford, T R; Kasim, S
1987-09-18
Non-insulin-dependent diabetes mellitus (NIDDM) is the most common form of diabetes in the civilized world. Its consequences include microvascular and macrovascular disease, both of which appear to evolve from a common background of obesity and physical inactivity. The current study was undertaken in obese patients with NIDDM to see whether improvements could be made in glycemic control as well as in many cardiovascular risk factors (obesity, hypertension, lipid abnormalities, and physical inactivity) that are typical of this condition. Fifteen obese insulin-using patients with NIDDM (average body mass index, 34.0) were treated with a 500-calorie formula diet for eight to 12 weeks. Administration of insulin and diuretics was discontinued at the onset of the study. A eucaloric diet was begun at eight to 12 weeks and maintained until Week 24. A behaviorally oriented nutrition-exercise program was instituted at the beginning of the study. Glipizide or placebo was added (randomized) at Week 15 if the fasting plasma glucose level in patients exceeded 115 mg/dl. Patients lost an average of 22 pounds over the course of 24 weeks. Frequency and duration of physical activity increased significantly from baseline, as did the maximal oxygen consumption rate. Glycemic control by 15 weeks (without insulin) was similar to baseline (with insulin). With the addition of glipizide at Week 15, both fasting plasma glucose and glucose tolerance improved significantly. This improvement was not observed with placebo. In addition, both systolic and diastolic blood pressure decreased by about 10 mm Hg. There were no significant changes in the levels of serum lipids or glycosylated hemoglobin. In conclusion, a multifaceted intervention program, employing weight reduction, exercise, diet, and glipizide therapy, can be instituted in insulin-using patients with NIDDM, with improvement in glycemic control and in certain risk factors (hypertension, obesity, physical inactivity) for cardiovascular
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Cavender, Matthew A; Bhatt, Deepak L; Stone, Gregg W; White, Harvey D; Steg, Ph Gabriel; Gibson, C Michael; Hamm, Christian W; Price, Matthew J; Leonardi, Sergio; Prats, Jayne; Deliargyris, Efthymios N; Mahaffey, Kenneth W; Harrington, Robert A
2016-09-06
Cangrelor is an intravenous P2Y12 inhibitor approved to reduce periprocedural ischemic events in patients undergoing percutaneous coronary intervention not pretreated with a P2Y12 inhibitor. A total of 11 145 patients were randomized to cangrelor or clopidogrel in the CHAMPION PHOENIX trial (Cangrelor versus Standard Therapy to Achieve Optimal Management of Platelet Inhibition). We explored the effects of cangrelor on myocardial infarction (MI) using different definitions and performed sensitivity analyses on the primary end point of the trial. A total of 462 patients (4.2%) undergoing percutaneous coronary intervention had an MI as defined by the second universal definition. The majority of these MIs (n=433, 93.7%) were type 4a. Treatment with cangrelor reduced the incidence of MI at 48 hours (3.8% versus 4.7%; odds ratio [OR], 0.80; 95% confidence interval [CI], 0.67-0.97; P=0.02). When the Society of Coronary Angiography and Intervention definition of periprocedural MI was applied to potential ischemic events, there were fewer total MIs (n=134); however, the effects of cangrelor on MI remained significant (OR, 0.65; 95% CI, 0.46-0.92; P=0.01). Similar effects were seen in the evaluation of the effects of cangrelor on MIs with peak creatine kinase-MB ≥10 times the upper limit of normal (OR, 0.64; 95% CI, 0.45-0.91) and those with peak creatine kinase-MB ≥10 times the upper limit of normal, ischemic symptoms, or ECG changes (OR, 0.63; 95% CI, 0.48-0.84). MIs defined by any of these definitions were associated with increased risk of death at 30 days. Treatment with cangrelor reduced the composite end point of death, MI (Society of Coronary Angiography and Intervention definition), ischemia-driven revascularization, or Academic Research Consortium definite stent thrombosis (1.4% versus 2.1%; OR, 0.69; 95% CI, 0.51-0.92). MI in patients undergoing percutaneous coronary intervention, regardless of definition, remains associated with increased risk of death
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
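A toy illustration of retrieval on the maximum likelihood principle (a simplified sketch, not the specific constructions analyzed in the work): with patterns drawn from a uniform binary source and a query whose missing coordinates are erased, maximizing the likelihood under an assumed i.i.d. bit-flip noise model (flip probability below 1/2) reduces to returning the stored pattern that agrees with the most observed coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.integers(0, 2, size=(8, 16))   # 8 stored 16-bit patterns, uniform source

def ml_retrieve(query):
    """query: list of 0/1/None, where None marks an erased coordinate.
    Agreement on observed coordinates is a monotone proxy for likelihood
    under an i.i.d. bit-flip noise model with flip probability < 1/2."""
    observed = [i for i, v in enumerate(query) if v is not None]
    obs_vals = np.array([query[i] for i in observed])
    agreements = (patterns[:, observed] == obs_vals).sum(axis=1)
    return patterns[int(np.argmax(agreements))]

# Erase every other coordinate of a stored pattern and retrieve from the rest.
target = patterns[3]
query = [int(v) if i % 2 == 0 else None for i, v in enumerate(target)]
recovered = ml_retrieve(query)
print(recovered)
```

With enough observed coordinates relative to the number of stored patterns, the retrieved pattern matches the target, which is the error/erasure-resilience property the abstract refers to.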
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
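The central quantity of the regularizer, the mutual information between a (discrete) classification response and the true label, can be estimated from the joint histogram. A minimal plug-in estimate (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def mutual_information(y_pred, y_true, n_classes):
    """Plug-in estimate of I(Y_hat; Y) in nats from the joint histogram."""
    joint = np.zeros((n_classes, n_classes))
    for p, t in zip(y_pred, y_true):
        joint[p, t] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of the responses
    py = joint.sum(axis=0, keepdims=True)   # marginal of the labels
    nz = joint > 0                          # skip empty cells (0 log 0 = 0)
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# A perfect classifier on a balanced 2-class problem attains I = H(Y) = ln 2.
y = [0, 1, 0, 1, 0, 1, 0, 1]
print(round(mutual_information(y, y, 2), 4))   # -> 0.6931 (= ln 2)
```

The paper maximizes this quantity (via a differentiable entropy model rather than a histogram) alongside the usual loss and complexity terms; the histogram version above only conveys what is being measured.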
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
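The robustness claim above comes from the shape of the correntropy criterion: with a Gaussian kernel, a sample whose prediction is far from its (possibly mislabeled) target contributes almost nothing, unlike with a squared loss. A minimal sketch of the empirical correntropy itself (illustrative only; the paper's regularized predictor-learning algorithm is not reproduced):

```python
from math import exp

def correntropy(predictions, targets, sigma=1.0):
    """Empirical correntropy: mean Gaussian kernel of the residuals.
    Large residuals (outliers, flipped labels) are smoothly gated to
    near zero, which is the source of the robustness to label noise."""
    n = len(predictions)
    return sum(exp(-(p - t) ** 2 / (2 * sigma ** 2))
               for p, t in zip(predictions, targets)) / n

clean = correntropy([1.0, -1.0, 1.0], [1.0, -1.0, 1.0])   # kernel = 1 everywhere
noisy = correntropy([1.0, -1.0, 1.0], [1.0, -1.0, -1.0])  # one flipped label
print(clean, noisy)
```

Maximizing this criterion over predictor parameters, together with a complexity penalty on those parameters, is the MCC learning problem the abstract formulates.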
Are Reductions in Population Sodium Intake Achievable?
Levings, Jessica L.; Cogswell, Mary E.; Janelle Peralez Gunn
2014-01-01
The vast majority of Americans consume too much sodium, primarily from packaged and restaurant foods. The evidence linking sodium intake with direct health outcomes indicates a positive relationship between higher levels of sodium intake and cardiovascular disease risk, consistent with the relationship between sodium intake and blood pressure. Despite communication and educational efforts focused on lowering sodium intake over the last three decades, data suggest average US sodium intake has r...
Achieving Cost Reduction Through Data Analytics.
Rocchio, Betty Jo
2016-10-01
The reimbursement structure of the US health care system is shifting from a volume-based system to a value-based system. Adopting a comprehensive data analytics platform has become important to health care facilities, in part to navigate this shift. Hospitals generate plenty of data, but actionable analytics are necessary to help personnel interpret and apply data to improve practice. Perioperative services is an important revenue-generating department for hospitals, and each perioperative service line requires a tailored approach to be successful in managing outcomes and controlling costs. Perioperative leaders need to prepare to use data analytics to reduce variation in supplies, labor, and overhead. Mercy, based in Chesterfield, Missouri, adopted a perioperative dashboard that helped perioperative leaders collaborate with surgeons and perioperative staff members to organize and analyze health care data, which ultimately resulted in significant cost savings. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
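The hard-core-style chain mentioned above can be illustrated on matchings directly. The toy sketch below (plain Python; it is NOT the paper's algorithm, and none of its $O(m \log^2 n)$ analysis is captured here) repeatedly picks a random edge, adds it with probability $\lambda/(1+\lambda)$ if both endpoints are free, and drops it with probability $1/(1+\lambda)$ if it is currently matched, tracking the largest matching seen:

```python
import random

def glauber_matching(edges, n_steps, lam=2.0, seed=0):
    """Toy Glauber-style chain on the matchings of a graph (a sketch
    only). Fugacity lam > 1 biases the stationary distribution toward
    larger matchings. Returns the largest matching encountered."""
    rng = random.Random(seed)
    matching = set()
    matched = set()          # vertices currently covered by the matching
    best = set()
    for _ in range(n_steps):
        u, v = rng.choice(edges)
        if (u, v) in matching:
            if rng.random() < 1.0 / (1.0 + lam):
                matching.discard((u, v))      # drop a matched edge
                matched -= {u, v}
        elif u not in matched and v not in matched:
            if rng.random() < lam / (1.0 + lam):
                matching.add((u, v))          # add a free edge
                matched |= {u, v}
        if len(matching) > len(best):
            best = set(matching)
    return best

# A 6-cycle has a perfect matching of size 3.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(len(glauber_matching(cycle, 5000)))
```

On small graphs the chain quickly visits large matchings; turning this heuristic into a guaranteed fast maximum-matching algorithm is precisely the paper's contribution.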
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Recursive support vector machines for dimensionality reduction.
Tao, Qing; Chu, Dejun; Wang, Jue
2008-01-01
The usual dimensionality reduction technique in supervised learning is mainly based on linear discriminant analysis (LDA), but it suffers from singularity or undersampled problems. On the other hand, a regular support vector machine (SVM) separates the data only in terms of one single direction of maximum margin, and the classification accuracy may be not good enough. In this letter, a recursive SVM (RSVM) is presented, in which several orthogonal directions that best separate the data with the maximum margin are obtained. Theoretical analysis shows that a completely orthogonal basis can be derived in feature subspace spanned by the training samples and the margin is decreasing along the recursive components in linearly separable cases. As a result, a new dimensionality reduction technique based on multilevel maximum margin components and then a classifier with high accuracy are achieved. Experiments in synthetic and several real data sets show that RSVM using multilevel maximum margin features can do efficient dimensionality reduction and outperform regular SVM in binary classification problems.
Henningsson, Stefan
2014-01-01
competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...
Henningsson, Stefan
2016-01-01
competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...
R.A.E. Thompson
1983-09-01
Full Text Available In approaching the subject of professionalism the author has chosen to focus on the practical aspects rather than the philosophical issues. In so doing an attempt is made to identify criteria which demonstrate the achievement of the essence of professionalism.
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 (2010-04-01) Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES, Railroad Retirement Family Maximum § 226.52...
Ionization and maximum energy of nuclei in shock acceleration theory
Morlino, Giovanni
2011-01-01
We study the acceleration of heavy nuclei at SNR shocks when the process of ionization is taken into account. Heavy atoms ($Z_N >$ few) in the interstellar medium which start the diffusive shock acceleration (DSA) are never fully ionized at the moment of injection. The ionization occurs during the acceleration process, when atoms already move relativistically. For typical environments around SNRs the photo-ionization due to the background galactic radiation dominates over Coulomb collisions. The main consequence of ionization is the reduction of the maximum energy which ions can achieve with respect to the standard result of the DSA. In fact, the photo-ionization has a timescale comparable to the beginning of the Sedov-Taylor phase, hence the maximum energy is no longer proportional to the nuclear charge, as predicted by standard DSA, but rather to the effective ions' charge during the acceleration process, which is smaller than the total nuclear charge $Z_N$. This result can have a direct consequence in the pred...
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transforms (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure by Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, sampling only 75 out of 512 complex points (15% sampling), would lack (512-75)×2=874 points per ν(2) slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ∼450 times faster in this case, since it requires only two FTs. This allows reduction of the computational time from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where the original algorithm required days of CPU time on high-performance computing clusters, whereas the new algorithm needs only a few minutes of calculation on a regular laptop computer.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
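Tracking the MPP through the intermediate converter is commonly done by hill-climbing on the duty ratio. The sketch below is a generic perturb-and-observe loop against a made-up concave power curve (both the loop and the `pv_power` model are illustrative assumptions, not the paper's converter analysis):

```python
def mppt_perturb_observe(power, d0=0.1, step=0.01, iters=200):
    """Perturb-and-observe hill climbing on the duty ratio d.
    power(d) plays the role of the measured array output power; keep
    stepping in the direction that increased power, reverse otherwise."""
    d, direction = d0, +1
    p_prev = power(d)
    for _ in range(iters):
        d = min(max(d + direction * step, 0.0), 1.0)   # clamp to [0, 1]
        p = power(d)
        if p < p_prev:
            direction = -direction   # overshot the peak: reverse
        p_prev = p
    return d

# Hypothetical concave P(d) curve with its maximum at d = 0.45.
pv_power = lambda d: 100.0 - 400.0 * (d - 0.45) ** 2
d_opt = mppt_perturb_observe(pv_power)
print(round(d_opt, 2))
```

The steady-state oscillation of d around the peak (within one `step`) mirrors the paper's point that operating conditions outside the optimal range can pull the operating point away from the true MPP.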
New Reductive Desulfurization Technology
Anonymous
2007-01-01
The project for the research of the pulse plasma reductive desulfurization technology undertaken by Huazhong University of Science and Technology recently passed the research achievement appraisal in Wuhan, Hubei province.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
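The maximum entropy principle invoked above has a compact computational core: given a moment constraint, the least-biased distribution takes the Gibbs form p_i ∝ exp(λ x_i), with λ fixed by the constraint. A small self-contained sketch (standard textbook construction, not taken from this article) solving Jaynes' classic loaded-die example:

```python
from math import exp

def maxent_mean(values, target_mean, tol=1e-12):
    """Maximum-entropy distribution on `values` subject to a fixed mean.
    The solution has the Gibbs form p_i ∝ exp(lam * x_i); the multiplier
    lam is found by bisection, since the mean is monotone in lam."""
    def mean(lam):
        w = [exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' loaded-die example: faces 1..6 constrained to average 4.5;
# the weights increase monotonically toward the higher faces.
p = maxent_mean(range(1, 7), 4.5)
print([round(pi, 4) for pi in p])
```

The same Lagrange-multiplier machinery underlies the drug-discovery applications the article surveys, with richer constraints replacing the single mean.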
Low-Power Maximum a Posteriori (MAP) Algorithm for WiMAX Convolutional Turbo Decoder
Chitralekha Ngangbam
2013-05-01
Full Text Available We propose the design of a low-power, memory-reduced traceback MAP iterative decoder for convolutional turbo code (CTC), which has large data access and large memory consumption, and verify its functionality using a simulation tool. The traceback maximum a posteriori (MAP) decoding provides the best performance in terms of bit error rate (BER) and reduces the power consumption of the state metric cache (SMC) without losing correction performance. The computation and accessing of different metrics reduce the size of the SMC and require no complicated reversion checker, path selection, or reversion flag cache. Radix-2*2 and radix-4 traceback structures provide a tradeoff between power consumption and operating frequency for double-binary (DB) MAP decoding. These two traceback structures achieve around a 25% power reduction of the SMC, and around a 12% power reduction of the DB MAP decoders, for the WiMAX standard.
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation.
Meyer, Karin
2016-08-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty, derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated, rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find the series of example theories for which the relativistic limit of maximum tension Fmax = c^4/4G represented by the entropic force can be abolished. Among them the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons as well as some generalized uncertainty principle models.
Nielsen, Søren R. K.; Köyüoglu, H. U.; Cakmak, A. S.
The maximum softening concept is based on the variation of the vibrational periods of a structure during a seismic event. Maximum softening damage indicators, which measure the maximum relative stiffness reduction caused by stiffness and strength deterioration of the actual structure, are calcula...
Experimental study on Cr(Ⅵ) reduction by Pseudomonas aeruginosa
LIU Yun-guo; XU Wei-hua; ZENG Guang-ming; TANG Chun-fang; LI Cheng-feng
2004-01-01
Investigation on Cr(Ⅵ) reduction was conducted using Pseudomonas aeruginosa. The study demonstrated that the Cr(Ⅵ) can be effectively reduced to Cr(Ⅲ) by Pseudomonas aeruginosa. The effects of the factors affecting Cr(Ⅵ) reduction rate including carbon source type, pH, initial Cr(Ⅵ) concentration and amount of cells inoculum were thoroughly studied. Malate was found to yield maximum biotransformation, followed by succinate and glucose, with the reduction rate of 60.86%, 43.76% and 28.86% respectively. The optimum pH for Cr(Ⅵ) reduction was 7.0, with reduction efficiency of 61.71% being achieved. With the increase of initial Cr(Ⅵ) concentration, the rate of Cr(Ⅵ) reduction decreased. The reduction was inhibited strongly when the initial Cr(Ⅵ) concentration increased to 157 mg/L. As the amount of cells inoculum increased, the rate of Cr(Ⅵ) reduction also increased. The mechanism of Cr(Ⅵ) reduction and final products were also analysed. The results suggested that the soluble enzymes appear to be responsible for Cr(Ⅵ) reduction by Pseudomonas aeruginosa, and the reduced Cr(Ⅲ) was not precipitated in the form of Cr(OH)3.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-01-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes. PMID:22308461
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
V. Niranjan
2014-09-01
This paper introduces a new approach for enhancing the bandwidth of a low-voltage CMOS current mirror. The proposed approach is based on utilizing the body effect in a MOS transistor by connecting its gate and bulk terminals together for signal input. This boosts the effective transconductance of the MOS transistor and reduces the threshold voltage. The proposed approach does not affect the DC gain of the current mirror. We demonstrate that the proposed approach is compatible with the widely used series-resistor technique for enhancing current mirror bandwidth, and both techniques have been employed simultaneously for maximum bandwidth enhancement. An important consequence of using both techniques simultaneously is a reduction in the series-resistor value needed to achieve the same bandwidth. This reduction is attractive because a smaller resistor occupies less chip area and contributes less noise. PSpice simulation results using 180 nm CMOS technology from TSMC are included to confirm these results. The proposed current mirror operates at 1 V, consumes only 102 µW, and achieves a maximum bandwidth extension ratio of 1.85. Simulation results are in good agreement with analytical predictions.
Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels
Gabriel N. Maggio
2015-01-01
The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We show that the space compression of the nonlinear channel is an instrumental property of the ST-WMF-MLSD which results in a major reduction of the implementation complexity in intensity modulation and direct detection (IM/DD) fiber optic systems. Moreover, we assess the performance of ST-WMF-MLSD in IM/DD optical systems with chromatic dispersion (CD) and polarization mode dispersion (PMD). Numerical results for a 10 Gb/s, 700 km, IM/DD fiber-optic link with 50 ps differential group delay (DGD) show that the number of states of the VD in ST-WMF-MLSD can be reduced ~4 times compared to an oversampled MLSD. Finally, we analyze the impact of imperfect channel estimation on the performance of the ST-WMF-MLSD. Our results show that the performance degradation caused by channel estimation inaccuracies is low and similar to that of existing MLSD schemes (~0.2 dB).
Noise reduction in supersonic jets by nozzle fluidic inserts
Morris, Philip J.; McLaughlin, Dennis K.; Kuo, Ching-Wen
2013-08-01
Professor Philip Doak spent a very productive time as a consultant to the Lockheed-Georgia Company in the early 1970s. The focus of the overall research project was the prediction and reduction of noise from supersonic jets. Now, 40 years on, the present paper describes an innovative methodology and device for the reduction of supersonic jet noise. The goal is the development of a practical active noise reduction technique for low bypass ratio turbofan engines. This method introduces fluidic inserts installed in the divergent wall of a convergent-divergent (CD) nozzle to replace hard-wall corrugation seals, which have been demonstrated to be effective by Seiner (2005) [1]. By altering the configuration and operating conditions of the fluidic inserts, active noise reduction for both mixing and shock noise has been obtained. Substantial noise reductions have been achieved for mixing noise in the maximum noise emission direction and in the forward arc for broadband shock-associated noise. To achieve these reductions (greater than 4 and 2 dB for the two main components, respectively), practically achievable levels of injection mass flow rates have been used. The total injected mass flow rates are less than 4% of the core mass flow rate, and the effective operating injection pressure ratio has been maintained at or below the same level as the nozzle pressure ratio of the core flow.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED: whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a trade-off between them. We give a detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without the equal-margin-posterior constraint from the two views; the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Optimal deployment of emissions reduction technologies for construction equipment.
Bari, Muhammad Ehsanul; Zietsman, Josias; Quadrifoglio, Luca; Farzaneh, Mohamadreza
2011-06-01
The objective of this research was to develop a multiobjective optimization model to deploy emissions reduction technologies for nonroad construction equipment to reduce emissions in a cost-effective and optimal manner. Given a fleet of construction equipment emitting different pollutants in the nonattainment (NA) and near-nonattainment (NNA) counties of a state and a set of emissions reduction technologies available for installation on equipment to control pollution/emissions, the model assists in determining the mix of technologies to be deployed so that maximum emissions reduction and fuel savings are achieved within a given budget. Three technologies considered for emissions reduction were designated as X, Y, and Z to keep the model formulation general so that it can be applied to any other set of technologies. Two alternative methods of deploying these technologies on a fleet of equipment were investigated, with the methods differing in the technology deployment preference in the NA and NNA counties. The model, having a weighted objective function containing emissions reduction benefits and fuel-saving benefits, was programmed with C++ and ILOG-CPLEX. For demonstration purposes, the model was applied to a selected construction equipment fleet owned by the Texas Department of Transportation, located in NA and NNA counties of Texas, assuming the three emissions reduction technologies X, Y, and Z to represent, respectively, hydrogen enrichment, selective catalytic reduction, and fuel additive technologies. Model solutions were obtained for varying budget amounts to test the sensitivity of emissions reductions and fuel-savings benefits to increasing the budget. Different mixes of technologies producing maximum oxides of nitrogen (NO(x)) reductions and total combined benefits (emissions reductions plus fuel savings) were indicated at different budget ranges. The initial steep portion of the plots for NO(x) reductions and total combined benefits against budgets
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
Supersonic Jet Noise Reduction Using Microjets
Gutmark, Ephraim; Cuppoletti, Dan; Malla, Bhupatindra
2013-11-01
Fluidic injection for jet noise reduction involves injecting secondary jets into a primary jet to alter the noise characteristics of the primary jet. A major challenge has been determining what mechanisms are responsible for noise reduction due to varying injector designs, injection parameters, and primary jets. The current study provides conclusive results on the effect of injector angle and momentum flux ratio on the acoustics and shock structure of a supersonic Md = 1.56 jet. It is shown that the turbulent mixing noise scales primarily with the injector momentum flux ratio. Increasing the injector momentum flux ratio increases streamwise vorticity generation and reduces peak turbulence levels. It is found that the shock-related noise components are most affected by the interaction of the shocks from the injectors with the primary shock structure of the jet. Increasing momentum flux ratio causes shock noise reduction until a limit where shock noise increases again. It is shown that the shock noise components and mixing noise components are reduced through fundamentally different mechanisms and maximum overall noise reduction is achieved by balancing the reduction of both components.
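The injector momentum flux ratio that the mixing noise is said to scale with has a standard definition; a minimal sketch (the variable names and numerical values here are illustrative assumptions, not from the paper):

```python
def momentum_flux_ratio(rho_inj, u_inj, rho_jet, u_jet):
    """J = (rho_inj * u_inj**2) / (rho_jet * u_jet**2): the ratio of
    injector momentum flux to primary-jet momentum flux."""
    return (rho_inj * u_inj ** 2) / (rho_jet * u_jet ** 2)

# Illustrative values only (densities in kg/m^3, velocities in m/s):
j = momentum_flux_ratio(1.2, 300.0, 0.6, 600.0)  # = 0.5
```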
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
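The Kirchhoff index can be computed directly from the graph Laplacian, since Kf(G) equals n times the sum of reciprocals of the nonzero Laplacian eigenvalues — a standard identity, not specific to this paper. A small sketch:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu,
    equivalent to summing resistance distances over all vertex pairs."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.linalg.eigvalsh(laplacian)
    return n * np.sum(1.0 / eigvals[eigvals > 1e-9])

# Path on 3 vertices: pairwise resistance distances 1, 1 and 2, so Kf = 4.
p3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

For the 4-cycle (a one-block cactus), the same routine gives Kf = 5, matching the hand calculation from its Laplacian spectrum {0, 2, 2, 4}.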
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus … on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Performance of penalized maximum likelihood in estimation of genetic covariances matrices
Meyer Karin
2011-11-01
Background: Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods: An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation of estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results: It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well as if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions: Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should
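One of the penalties found advantageous — shrinking the genetic toward the phenotypic correlation matrix — can be pictured as a simple linear shrinkage (the paper's actual penalty acts on the likelihood; the linear form and numbers below are illustrative assumptions):

```python
import numpy as np

def shrink(r_g, r_p, theta):
    """Shrink the genetic correlation matrix r_g toward the phenotypic
    matrix r_p; theta = 0 leaves r_g unchanged, theta = 1 returns r_p."""
    return (1.0 - theta) * np.asarray(r_g) + theta * np.asarray(r_p)

# A sampling-inflated genetic correlation pulled halfway toward the
# (more precisely estimated) phenotypic correlation.
r_g = np.array([[1.0, 0.9], [0.9, 1.0]])
r_p = np.array([[1.0, 0.4], [0.4, 1.0]])
r_pen = shrink(r_g, r_p, 0.5)  # off-diagonal becomes 0.65
```

The tuning factor theta plays the role of the penalty strength the abstract says can be chosen by cross-validation or by a bound on the likelihood deviation.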
Tetens, Inge
claims in relation to intense sweeteners and contribution to the maintenance or achievement of a normal body weight, reduction of post-prandial glycaemic responses, maintenance of normal blood glucose concentrations, and maintenance of tooth mineralisation by decreasing tooth demineralisation … sweeteners, which should replace sugars in foods and beverages in order to obtain the claimed effects. The Panel considers that intense sweeteners are sufficiently characterised in relation to the claimed effects.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
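The quantity being bounded is K = 683 lm/W × ∫V(λ)S(λ)dλ / ∫S(λ)dλ. The sketch below evaluates it for a flat bandpass spectrum, using a Gaussian stand-in for the photopic curve V(λ) (an approximation of mine; real calculations use the tabulated CIE function):

```python
import math

def luminous_efficacy(wavelengths_nm, spectrum, v):
    """K = 683 * integral(V * S) / integral(S) in lm/W, trapezoid rule."""
    num = 0.0
    den = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        num += 0.5 * (v[i] * spectrum[i] + v[i + 1] * spectrum[i + 1]) * dw
        den += 0.5 * (spectrum[i] + spectrum[i + 1]) * dw
    return 683.0 * num / den

# Gaussian stand-in for V(lambda): peak at 555 nm, width ~45 nm.
wl = [380 + i for i in range(0, 401, 5)]          # 380..780 nm
v = [math.exp(-0.5 * ((w - 555) / 45.0) ** 2) for w in wl]
flat = [1.0] * len(wl)                             # flat full-visible spectrum
k = luminous_efficacy(wl, flat, v)
```

A flat spectrum over the whole visible band lands well below the 250-370 lm/W ceiling quoted above, illustrating why narrowing the bandpass raises the spectral limit.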
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Ortiz, Isabel
2007-01-01
The paper reviews poverty trends and measurements, poverty reduction in historical perspective, the poverty-inequality-growth debate, national poverty reduction strategies, criticisms of the agenda and the need for redistribution, international policies for poverty reduction, and ultimately understanding poverty at a global scale. It belongs to a series of backgrounders developed at Joseph Stiglitz's Initiative for Policy Dialogue.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Sonoassisted microbial reduction of chromium.
Kathiravan, Mathur Nadarajan; Karthick, Ramalingam; Muthu, Naggapan; Muthukumar, Karuppan; Velan, Manickam
2010-04-01
This study presents sonoassisted microbial reduction of hexavalent chromium (Cr(VI)) using Bacillus sp. isolated from a tannery effluent contaminated site. The experiments were carried out with free cells in the presence and absence of ultrasound. The optimum pH and temperature for the reduction of Cr(VI) by Bacillus sp. were found to be 7.0 and 37 degrees C, respectively. The Cr(VI) reduction was significantly influenced by the electron donors, and among the various electron donors studied, glucose offered maximum reduction. The ultrasound-irradiated reduction of Cr(VI) with Bacillus sp. was efficient. The percent reduction was found to increase with an increase in biomass concentration and decrease with an increase in initial Cr(VI) concentration. Changes in the functional groups of Bacillus sp. before and after chromium reduction were observed with FTIR spectra. Microbial growth was described with the Monod and Andrews models, and the best fit was obtained with the Andrews model.
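The two growth models compared are standard forms: Monod, μ(S) = μmax·S/(Ks + S), and Andrews (Haldane), which adds a substrate-inhibition term S²/Ki to the denominator. A minimal sketch with illustrative parameter values (not the paper's fitted constants):

```python
def monod(s, mu_max, ks):
    """Monod growth kinetics: rate saturates with substrate concentration s."""
    return mu_max * s / (ks + s)

def andrews(s, mu_max, ks, ki):
    """Andrews (Haldane) kinetics: Monod plus substrate inhibition s**2/ki,
    so the growth rate declines again at high substrate concentrations."""
    return mu_max * s / (ks + s + s ** 2 / ki)

# The models agree at low s and diverge at high s, where inhibition bites.
```

The better fit of the Andrews model is consistent with Cr(VI) reduction slowing at high initial concentrations, as the abstract reports.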
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The maximum agreement subtree problem
Martin, Daniel M
2012-01-01
Given two binary phylogenetic trees on $n$ leaves, we show that they have a common subtree on at least $O((\log{n})^{1/2-\epsilon})$ leaves, thus improving on the previously known bound of $O(\log\log n)$. To achieve this bound, we combine different special cases: when one of the trees is balanced or when one of the trees is a caterpillar, we show a lower bound of $O(\log n)$. Another ingredient is the proof that every binary tree contains a large balanced subtree or a large caterpillar, a result that is interesting in its own right. Finally, we also show that there is an $\alpha > 0$ such that when both trees are balanced, they have a common subtree on at least $O(n^\alpha)$ leaves.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
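A mortality plateau makes survival past the plateau age a simple geometric decay: with a constant annual death risk q, the chance of surviving k further years is (1 − q)^k. A sketch under that plateau assumption (the cohort size is illustrative):

```python
def expected_survivors(cohort, annual_death_risk, years):
    """Expected survivors after `years` under a constant annual death risk."""
    return cohort * (1.0 - annual_death_risk) ** years

# At the ~50% plateau the abstract reports for women from age 107:
# of 1,000 women alive at 107, about 31 would be expected to reach 112.
n = expected_survivors(1000, 0.5, 5)  # 31.25
```

This halving-per-year pattern is why observed maximum ages barely move even as the number of centenarians grows: each extra year of record requires roughly a doubling of the at-risk cohort.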
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Inaugural Maximum Values for Sodium in Processed Food Products in the Americas.
Campbell, Norm; Legowski, Barbara; Legetic, Branka; Nilson, Eduardo; L'Abbé, Mary
2015-08-01
Reducing dietary salt/sodium is one of the most cost-effective interventions to improve population health. There are five initiatives in the Americas that independently developed targets for reformulating foods to reduce salt/sodium content. Applying selection criteria, recommended by the Pan American Health Organization (PAHO)/World Health Organization (WHO) Technical Advisory Group on Dietary Salt/Sodium Reduction, a consortium of governments, civil society, and food companies (the Salt Smart Consortium) agreed to an inaugural set of regional maximum targets (upper limits) for salt/sodium levels for 11 food categories, to be achieved by December 2016. Ultimately, to substantively reduce dietary salt across whole populations, targets will be needed for the majority of processed and pre-prepared foods. Cardiovascular and hypertension organizations are encouraged to utilize the regional targets in advocacy and in monitoring and evaluation of progress by the food industry.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
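The core of such a fit is the Poisson log-likelihood of the binned counts given a model spectrum (written here in the Cash-statistic form that drops the data-only log n! term); a minimal sketch, not CORA's actual implementation:

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood of observed counts given model expectations,
    omitting the model-independent log(n!) term (Cash-statistic form)."""
    return sum(n * math.log(m) - m for n, m in zip(counts, model))

# The likelihood is highest when the model tracks the data, which is what
# makes it usable as an optimization criterion for line parameters.
data = [3, 7, 12, 7, 3]
good = poisson_loglike(data, [3.0, 7.0, 12.0, 7.0, 3.0])
flat = poisson_loglike(data, [6.4, 6.4, 6.4, 6.4, 6.4])
```

Maximizing this quantity over line position, width and flux is what replaces the Gaussian chi-square fit, which is biased at the low count numbers the abstract targets.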
Economic total maximum daily load for watershed-based pollutant trading.
Zaidi, A Z; deMonsabert, S M
2015-04-01
Water quality trading (WQT) is supported by the US Environmental Protection Agency (USEPA) under the framework of its total maximum daily load (TMDL) program. An innovative approach is presented in this paper that proposes post-TMDL trading by calculating pollutant rights for each pollutant source within a watershed. Several water quality trading programs currently operate in the USA with the objective of achieving overall pollutant reduction impacts that are equivalent to or better than TMDL scenarios. These programs use trading ratios to establish water quality equivalence among pollutant reductions. The inherent uncertainty in modeling the effects of both point and nonpoint source pollutants on receiving waterbodies makes WQT very difficult. A higher trading ratio carries increased mitigation costs, yet cannot ensure with certainty that the required water quality is attained. The selection of an applicable trading ratio, therefore, is not a simple process. The proposed approach uses an Economic TMDL optimization model that determines an economic pollutant reduction scenario, which can be compared with actual TMDL allocations to calculate selling/purchasing rights for each contributing source. The methodology is presented using the established TMDLs for the bacteria (fecal coliform) impaired Muddy Creek subwatershed WAR1 in Rockingham County, Virginia, USA. Case study results show that an environmentally and economically superior trading scenario can be realized by using the Economic TMDL model or any similar model that considers the cost of TMDL allocations.
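To illustrate the flavor of an economic TMDL optimization (the paper's actual model and the Muddy Creek data are not reproduced here), a least-cost allocation of pollutant reductions can be posed as a small linear program; every number below is hypothetical:

```python
from scipy.optimize import linprog

# Hypothetical three-source watershed: marginal abatement costs, delivery
# ratios to the impaired segment, and feasible maximum reductions.
cost = [100.0, 40.0, 70.0]      # $ per unit of pollutant reduced at source
delivery = [1.0, 0.6, 0.8]      # fraction of each reduction reaching the stream
max_red = [50.0, 80.0, 60.0]    # upper bound on each source's reduction
required = 90.0                 # delivered reduction the TMDL demands

# Minimize total cost subject to the delivered reduction meeting the target.
res = linprog(c=cost,
              A_ub=[[-d for d in delivery]], b_ub=[-required],
              bounds=list(zip([0.0] * 3, max_red)))
print(res.x, res.fun)
```

The optimizer exhausts the cheapest delivered-unit source first, which is exactly the kind of least-cost scenario the paper compares against the actual TMDL allocations.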
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach, combined with a rich set of features, produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.
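The Information Gain criterion used here for dimension reduction can be sketched generically (this is not the PNNL system's code, and the toy word-sense data are invented):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_v P(X=v) * H(Y | X=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy sense-tagged data: the context feature perfectly predicts the sense,
# so its information gain equals the full label entropy (1 bit here).
senses  = ["bank/river", "bank/river", "bank/money", "bank/money"]
feature = ["water", "water", "loan", "loan"]
print(information_gain(feature, senses))
```

Features are ranked by this score and only the top-scoring ones are kept, shrinking the dimensionality before Maximum Entropy training.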
Target weight achievement and ultrafiltration rate thresholds: potential patient implications.
Flythe, Jennifer E; Assimon, Magdalene M; Overman, Robert A
2017-06-02
have unfavorable facility target weight measure scores. Without TT extension or IDWG reduction, UF rate threshold (13 mL/h/kg) implementation led to an average theoretical 1-month, fluid-related weight gain of 1.4 ± 3.0 kg. Target weight achievement patterns vary across clinical subgroups. Implementation of a maximum UF rate threshold without adequate attention to extracellular volume status may lead to fluid-related weight gain.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented as a method to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
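The Levinson recursion at the heart of this approach can be sketched as follows (a generic implementation for a scalar autocorrelation sequence, not the authors' code); note the reflection coefficients staying below 1 in magnitude, which is the stability property mentioned above:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson recursion for the prediction-error (error-predicting) filter
    from autocorrelation lags r[0..order]. Returns the filter coefficients a,
    the reflection coefficients k, and the final prediction-error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = r[m] + a[1:m] @ r[m - 1:0:-1]
        k = -acc / err
        ks.append(k)
        # Order-update of the filter: a_new[i] = a[i] + k * a[m - i].
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1][:m]
        err *= (1.0 - k * k)
    return a, np.array(ks), err

# AR(1) process x_t = 0.8 x_{t-1} + e_t has autocorrelation r[m] = 0.8**m;
# the recursion recovers the single AR coefficient.
r = 0.8 ** np.arange(4)
a, ks, err = levinson_durbin(r, 3)
print(a)
```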
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
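The procedure can be sketched numerically: for an assumed single-diode panel model (hypothetical parameters, not the project's measured data), compute P = VI over a voltage sweep and locate the point where dP/dV = 0:

```python
import numpy as np

# Illustrative single-diode panel model with invented parameters:
# photocurrent [A], diode saturation current [A], lumped thermal voltage [V].
I_ph, I_0, V_t = 5.0, 1e-9, 0.7

def current(v):
    """I-V characteristic: I(V) = I_ph - I_0 * (exp(V / V_t) - 1)."""
    return I_ph - I_0 * np.expm1(v / V_t)

v = np.linspace(0.0, 16.0, 100000)
p = v * current(v)
# dP/dV = 0 at the maximum power point; locate it numerically.
i_mpp = np.argmax(p)
print(f"V_mpp = {v[i_mpp]:.2f} V, I_mpp = {current(v[i_mpp]):.2f} A, "
      f"P_max = {p[i_mpp]:.2f} W")
```

Repeating this for irradiance and temperature values at each time of day yields the plotted curves described in the abstract.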
Farrell, A P; Steffensen, J F
1987-01-01
The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced a statistically significant 35.5% reduction in maximum swimming speed. We conclude that the coronary circulation is important for maximum aerobic swimming, and implicit in this conclusion is that maximum cardiac performance is probably necessary for maximum aerobic swimming performance.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of wood dust clouds. The measurements were carried out according to STN EN 14034-1+A1:2011, Determination of explosion characteristics of dust clouds, Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and STN EN 14034-2+A1:2012, Determination of explosion characteristics of dust clouds, Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The explosion tests showed that the maximum pressure, 7.95 bar, was reached at a concentration of 450 g/m3. The fastest rise of pressure, 68 bar/s, was also observed at a concentration of 450 g/m3.
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP, and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential. (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR; under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage. (3) Without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high. (4) We observe that the DC-EPR maximum is achieved within a time Te, the evolution time, which scales as a power-law function of the applied voltage. (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved; yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a cryptographic criterion proposed to resist algebraic attacks. In order to withstand such attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results on finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits, and then generalizing the result to swap some pairs of bits of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < ⌈n/2⌉, we give a method to construct functions of the form p(x) + q(x) which achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than ⌈n/2⌉.
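For very small n, algebraic immunity can be computed by brute force, which makes the notion concrete (an illustrative sketch, unrelated to the paper's constructions):

```python
def anf_degree(tt, n):
    """Algebraic degree of a Boolean function given as a truth table of
    length 2**n, via the binary Mobius transform to its ANF."""
    a = list(tt)
    for i in range(n):
        for m in range(1 << n):
            if m & (1 << i):
                a[m] ^= a[m ^ (1 << i)]
    return max((bin(m).count("1") for m in range(1 << n) if a[m]), default=-1)

def algebraic_immunity(f, n):
    """Brute-force AI: minimum degree of a nonzero annihilator g of f or of
    f + 1 (i.e. g * f = 0 or g * (f + 1) = 0). Feasible only for tiny n."""
    best = n
    for bits in range(1, 1 << (1 << n)):       # all nonzero truth tables g
        g = [(bits >> i) & 1 for i in range(1 << n)]
        if all(gx * fx == 0 for gx, fx in zip(g, f)) or \
           all(gx * (1 - fx) == 0 for gx, fx in zip(g, f)):
            best = min(best, anf_degree(g, n))
    return best

# Majority on 3 variables attains the maximum AI of ceil(3/2) = 2.
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
print(algebraic_immunity(maj3, 3))
```

The constructions in the paper achieve this maximum ⌈n/2⌉ for general n, where exhaustive search is hopeless.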
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
Radiation engineering of optical antennas for maximum field enhancement.
Seok, Tae Joon; Jamshidi, Arash; Kim, Myungki; Dhuey, Scott; Lakhani, Amit; Choo, Hyuck; Schuck, Peter James; Cabrini, Stefano; Schwartzberg, Adam M; Bokor, Jeffrey; Yablonovitch, Eli; Wu, Ming C
2011-07-13
Optical antennas have generated much interest in recent years due to their ability to focus optical energy beyond the diffraction limit, benefiting a broad range of applications such as sensitive photodetection, magnetic storage, and surface-enhanced Raman spectroscopy. To achieve the maximum field enhancement for an optical antenna, parameters such as the antenna dimensions, loading conditions, and coupling efficiency have been previously studied. Here, we present a framework, based on coupled-mode theory, to achieve maximum field enhancement in optical antennas through optimization of the antennas' radiation characteristics. We demonstrate that the optimum condition is achieved when the radiation quality factor (Q_rad) of optical antennas is matched to their absorption quality factor (Q_abs). We achieve this condition experimentally by fabricating the optical antennas on a dielectric (SiO2)-coated ground plane (metal substrate) and controlling the antenna radiation through optimizing the dielectric thickness. The dielectric thickness at which the matching condition occurs is approximately half of the quarter-wavelength thickness typically used to achieve constructive interference, and leads to ~20% higher field enhancement relative to a quarter-wavelength thick dielectric layer.
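The matching condition can be illustrated with a one-line coupled-mode-theory model (a standard textbook form assumed here, not necessarily the paper's exact expression): for fixed Q_abs, the resonantly stored energy scales as Q^2 / Q_rad, which peaks exactly at Q_rad = Q_abs:

```python
import numpy as np

# Assumed coupled-mode scaling: loaded 1/Q = 1/Q_rad + 1/Q_abs, and the
# on-resonance stored energy (hence |field|^2 enhancement) ~ Q^2 / Q_rad.
Q_abs = 50.0
Q_rad = np.linspace(1.0, 500.0, 50000)
Q = 1.0 / (1.0 / Q_rad + 1.0 / Q_abs)
enhancement = Q ** 2 / Q_rad

# The maximum occurs at the matching condition Q_rad = Q_abs.
print(Q_rad[np.argmax(enhancement)])
```

Algebraically, Q^2 / Q_rad = Q_rad * Q_abs^2 / (Q_rad + Q_abs)^2, whose derivative in Q_rad vanishes at Q_rad = Q_abs, the condition quoted in the abstract.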
Dienemann, Jacqueline
2002-01-01
This article examines one outcome of leadership: productive achievement. Without achievement one is judged to not truly be a leader. Thus, the ideal leader must be a visionary, a critical thinker, an expert, a communicator, a mentor, and an achiever of organizational goals. This article explores the organizational context that supports achievement, measures of quality nursing care, fiscal accountability, leadership development, rewards and punishments, and the educational content and teaching strategies to prepare graduates to be achievers.
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.
2016-01-01
… ranges to combine long-term single-stock targets with flexible, short-term, mixed-fisheries management requirements applied to the main North Sea demersal stocks. It is shown that sustained fishing at the upper bound of the range may lead to unacceptable risks when technical interactions occur. … An objective method is suggested that provides an optimal set of fishing mortality within the range, minimizing the risk of total allowable catch mismatches among stocks captured within mixed fisheries, and addressing explicitly the trade-offs between the most and least productive stocks.
A 21st Century Navy Vision: Motivating Sailors to Achieve Maximum Warfighting Readiness
2001-06-01
Drag Reduction of Bacterial Cellulose Suspensions
Satoshi Ogata
2011-01-01
Drag reduction due to bacterial cellulose suspensions, which impose only a small environmental load, was investigated. Experiments were carried out by measuring the pressure drop in pipe flow. It was found that bacterial cellulose suspensions give rise to drag reduction in the turbulent flow range. We observed a maximum drag reduction ratio of 11% and found that it increased with the concentration of the bacterial cellulose suspension. However, the drag reduction effect decreased in the presence of mechanical shear.
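The drag reduction ratio quoted above is conventionally computed from pressure drops measured at the same flow rate; a minimal sketch with hypothetical readings:

```python
def drag_reduction_ratio(dp_solvent, dp_suspension):
    """Percent drag reduction from pipe-flow pressure drops at matched flow
    rate: DR% = 100 * (dp_solvent - dp_suspension) / dp_solvent."""
    return 100.0 * (dp_solvent - dp_suspension) / dp_solvent

# Hypothetical pressure drops [Pa] over the test section at equal flow rate,
# chosen so the result matches the paper's reported 11% maximum.
print(drag_reduction_ratio(890.0, 792.1))
```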
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem, which can be described as follows: how should the capacity vector of a dynamic network be changed as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow? After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm that uses two maximum dynamic flow algorithms is then proposed to solve the problem.
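The maximum-flow subroutine that such an algorithm invokes can be sketched with a standard Edmonds-Karp implementation on a static network (the paper itself works with dynamic, time-expanded networks):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dense capacity matrix: repeatedly
    find a shortest augmenting path by BFS in the residual network."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximum
            return total
        # Recover the path and augment by its bottleneck residual capacity.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
        total += bottleneck

# Small example network: source 0, sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))
```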
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for the samples, the total length of CC needed in the design of an SFCL can be determined.
Managing Air Quality - Control Strategies to Achieve Air Pollution Reduction
Considerations in designing an effective control strategy related to air quality, controlling pollution sources, need for regional or national controls, steps to developing a control strategy, and additional EPA resources.
Achieving cost reductions in EOSDIS operations through technology evolution
Newsome, Penny; Moe, Karen; Harberts, Robert
1996-01-01
The earth observing system (EOS) data information system (EOSDIS) mission includes the cost-effective management and distribution of large amounts of data to the earth science community. The effect of the introduction of new information system technologies on the evolution of EOSDIS is considered. One of the steps taken by NASA to enable the introduction of new information system technologies into the EOSDIS is the funding of technology development through prototyping. Recent and ongoing prototyping efforts and their potential impact on the performance and cost-effectiveness of the EOSDIS are discussed. The technology evolution process as it related to the effective operation of EOSDIS is described, and methods are identified for the support of the transfer of relevant technology to EOSDIS components.
Applications of Should Cost to Achieve Cost Reductions
2014-04-01
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high organic loading rates (OLR) from 3.7 to 12.9 kg-VS m^-3 day^-1 for 225 days. Periods without organic loading were arranged between the loading periods. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m^-3 day^-1 was achieved with a high VS reduction (91.8%) and a high methane yield (455 mL g-VS^-1). The cell density increased in the periods without organic loading, reaching 10.9×10^10 cells mL^-1 on day 187, around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e^(0.1679x), r^2 = 0.996, P < 0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m^-3 day^-1 was obtained for mesophilic single-stage wet anaerobic digestion that is able to maintain stable operation with high methane yield and VS reduction.
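The reported OLR-TSS correlation can be evaluated directly (this simply restates the fitted curve from the abstract, with units as given there, not the authors' raw data):

```python
import math

def saturated_tss(olr):
    """Reported fit between OLR (kg-VS m^-3 day^-1) and saturated TSS in the
    sludge: y = 17.3 * e^(0.1679 * x), with r^2 = 0.996."""
    return 17.3 * math.exp(0.1679 * olr)

# Predicted saturated TSS at the stable OLR of 9.2 and at the theoretical
# maximum OLR of 10.5 reported in the abstract.
print(round(saturated_tss(9.2), 1), round(saturated_tss(10.5), 1))
```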
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N-1) times faster (a 3D experiment in one-ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time-consuming are now practical.
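The quoted speedup is simple arithmetic worth making explicit:

```python
def nus_speedup(ndim):
    """Rule of thumb quoted in the abstract: an N-dimensional nonuniformly
    sampled experiment completes 3**(N - 1) times faster than its uniformly
    sampled counterpart (so a 3D experiment takes one ninth of the time)."""
    return 3 ** (ndim - 1)

for n in (2, 3, 4):
    print(f"{n}D experiment: {nus_speedup(n)}x faster")
```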
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
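The separation of a divergence into cross entropy minus diagonal entropy can be checked numerically in the familiar Boltzmann-Gibbs-Shannon (Kullback-Leibler) case, the classical member of the class discussed above:

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i log q_i; H(p, p) is the (diagonal) entropy."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    """D(p || q) = H(p, q) - H(p, p): cross entropy minus diagonal entropy."""
    return cross_entropy(p, q) - cross_entropy(p, p)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d = kl_divergence(p, q)
print(f"D(p||q) = {d:.4f}")   # nonnegative, and zero iff p == q
```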
NONE
2006-07-01
Achieving a fourfold reduction in greenhouse gas emissions by 2050 is the ambitious and voluntary objective for France; it touches on a combination of many different aspects (technical, technological, economic, social) against a backdrop of important issues and choices for public policy-makers. This document is the bilingual version of the Factor 4 group report. It discusses the Factor 4 objectives, the different proposed scenarios and the main lessons learned, the strategies to support the Factor 4 objectives (fostering changes in behavior and defining the role of public policies), the Factor 4 objective in international and European contexts (experience abroad, strategic behavior, constraints and opportunities, particularly in Europe) and recommendations. (A.L.B.)
Comparing Science Achievement Constructs: Targeted and Achieved
Ferrara, Steve; Duncan, Teresa
2011-01-01
This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…
Designing microstructures for sodium reduction
Chiu, N. X. N.
2016-01-01
The aim of this project was to develop the tools and knowledge to reduce dietary sodium by mitigating restrictions to flavour delivery and enhancing saltiness perception through sodium contrast effects in the mouth. This is achieved by restructuring semi-solid and liquid model food systems to achieve maximum flavour delivery for enhanced perception. The project considered two model systems: stable foams and double emulsions. Stable foams were developed to evaluate air inclusions as a p...
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
2010-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1... means generally available for achieving compliance with the Maximum Contaminant Level for fluoride. (1...
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002-2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently (low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses), models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), the TOM can also be observed at very high latitudes, not only at the beginning and end of winter but also in mid-winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
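The maximum-power efficiency of the Curzon-Ahlborn engine has the well-known closed form eta = 1 - sqrt(Tc/Th), which sits below the Carnot limit; a minimal numerical comparison (the temperatures are illustrative):

```python
import math

def eta_carnot(t_cold, t_hot):
    """Reversible (maximum-efficiency) Carnot limit: 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def eta_curzon_ahlborn(t_cold, t_hot):
    """Efficiency of the Curzon-Ahlborn endoreversible engine at maximum
    power output: 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

tc, th = 300.0, 600.0   # illustrative reservoir temperatures [K]
print(f"Carnot: {eta_carnot(tc, th):.3f}, "
      f"Curzon-Ahlborn (max power): {eta_curzon_ahlborn(tc, th):.3f}")
```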
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions. It can be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and non-equilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
§ 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
2014-01-01
This research is devoted to analyzing the dynamic loads generated in the glove machine during reciprocating motion of the knitting and intermediate carriages. A method is proposed for determining the maximum dynamic loads in the drive of the glove machine carriages. It is noted that a reduction of the dynamic loads can be achieved by equipping the drive with energy accumulation and compensation units, for which purpose it is expedient to use cylindrical compression springs. The obtained dependence allows one to ...
Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles
Paulo H. Egydio
2008-01-01
Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should be based on giving a functional penis, that is, straightening the penis with enough rigidity for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. The tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
... statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop ... and probabilistic forecasts result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m s−1. We show the results of their predictive accuracy for different lead times and different training methodologies.
Efficiency of the Ultrasound Guided Hydrostatic Reduction of Intussusception with Normal Saline
Azim Motamed Far
2010-05-01
Background/Objective: Conventional hydrostatic barium reduction or pneumatic reduction of intussusception is associated with considerable ionizing radiation. The aim of this study was to evaluate ultrasound guided hydrostatic reduction of childhood intussusceptions using water enema. Patients and Methods: The reduction was attempted using a warm water reservoir at a height of 80-120 cm above the table. The vital signs and the abrupt accumulation of liquid in the abdomen were controlled during the procedure. Reduction of the intussuscepted loops was monitored by sonography; the observed fluid surrounding the intussusception was gradually reduced through the ileocecal valve. The procedure was continued until the intussusceptum disappeared completely and the distal ileum was filled with water. The reduction was stopped for 10-15 minutes in those patients in whom the procedure failed after a 10-minute effort. The procedure was then applied for a maximum of three times, and manual abdominal palpation was applied in some patients to ease the reduction. After successful reduction, the abdomen was re-examined to detect any lead point or recurrence of intussusception. The patients were then transferred to the surgery ward and after 24 hours, the regular oral diet was started and the patients were discharged. Partially reduced cases of intussusception underwent surgical treatment. Fisher's exact test was used for the assessment of the relation between intussusception sites and the hydrostatic reduction outcome and/or presence of gangrenous bowel. Results: Complete reduction was achieved in 21 of 24 patients (87.5%). Hydrostatic reduction was impossible in three patients; bowel resection was performed in 2 patients and the intussusceptions were surgically reduced in one patient. Conclusion: Intussusception is the most common abdominal emergency of early childhood for which non-operative reduction is currently the treatment of choice. Keywords: Intussusception
Potential effect of salt reduction in processed foods on health.
Hendriksen, Marieke A H; Hoogenveen, Rudolf T; Hoekstra, Jeljer; Geleijnse, Johanna M; Boshuizen, Hendriek C; van Raaij, Joop M A
2014-03-01
Excessive salt intake has been associated with hypertension and increased cardiovascular disease morbidity and mortality. Reducing salt intake is considered an important public health strategy in the Netherlands. The objective was to evaluate the health benefits of salt-reduction strategies related to processed foods for the Dutch population. Three salt-reduction scenarios were developed: 1) substitution of high-salt foods with low-salt foods, 2) a reduction in the sodium content of processed foods, and 3) adherence to the recommended maximum salt intake of 6 g/d. Health outcomes were obtained in 2 steps: after salt intake was modeled into blood pressure levels, the Chronic Disease Model was used to translate modeled blood pressures into incidences of cardiovascular diseases, disability-adjusted life years (DALYs), and life expectancies. Health outcomes of the scenarios were compared with health outcomes obtained with current salt intake. In total, 4.8% of acute myocardial infarction cases, 1.7% of congestive heart failure cases, and 5.8% of stroke cases might be prevented if salt intake meets the recommended maximum intake. The burden of disease might be reduced by 56,400 DALYs, and life expectancy might increase by 0.15 y for a 40-y-old individual. Substitution of foods with comparable low-salt alternatives would lead to slightly higher salt intake reductions and thus to more health gain. The estimates for sodium reduction in processed foods would be slightly lower. Substantial health benefits might be achieved when added salt is removed from processed foods and when consumers choose more low-salt food alternatives.
Griffing, Bill; /Fermilab
2005-06-01
In a recent DOE Program Review, Fermilab's director presented results of the laboratory's effort to reduce the injury rate over the last decade. The results, shown in the figure below, reveal a consistent and dramatic downward trend in OSHA recordable injuries at Fermilab. The High Energy Physics Program Office has asked Fermilab to report in detail on how the laboratory has achieved the reduction. In fact, the reduction in the injury rate reflects a change in safety culture at Fermilab, which has evolved slowly over this period, due to a series of events, both planned and unplanned. This paper attempts to describe those significant events and analyze how each of them has shaped the safety culture that, in turn, has reduced the rate of injury at Fermilab to its current value.
2007-05-01
The latest version of the NHS Institute for Innovation and Improvement's 'no delays achiever', a web based tool created to help NHS organisations achieve the 18-week target for GP referrals to first treatment, is available at www.nodelaysachiever.nhs.uk.
Maximum modulation of plasmon-guided modes by graphene gating
Radko, Ilya; Bozhevolnyi, Sergey I.; Grigorenko, Alexander N.
2016-01-01
The potential of graphene in plasmonic electro-optical waveguide modulators has been investigated in detail by finite-element method modelling of various widely used plasmonic waveguiding configurations. We estimated the maximum possible modulation depth values one can achieve with plasmonic devices operating at telecom wavelengths and exploiting the optical Pauli blocking effect in graphene. Conclusions and guidelines for optimization of the modulation/intrinsic loss trade-off have been provided and generalized for any graphene-based plasmonic waveguide modulators, which should help...
Mapping the MPM maximum flow algorithm on GPUs
Solomon, Steven; Thulasiraman, Parimala
2010-11-01
The GPU offers a high degree of parallelism and computational power that developers can exploit for general purpose parallel applications. As a result, a significant level of interest has been directed towards GPUs in recent years. Regular applications, however, have traditionally been the focus of work on the GPU. Only very recently has there been a growing number of works exploring the potential of irregular applications on the GPU. We present a work that investigates the feasibility of Malhotra, Pramodh Kumar and Maheshwari's "MPM" maximum flow algorithm on the GPU that achieves an average speedup of 8 when compared to a sequential CPU implementation.
Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion
Zongze Wu
2015-10-01
The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply MCC to develop a robust Hammerstein adaptive filter. Compared with traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance, especially in the presence of impulsive non-Gaussian (e.g., α-stable) noises. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
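The core MCC idea can be sketched as follows. The paper's filter is a Hammerstein structure (a memoryless nonlinearity followed by an FIR filter); this simplified sketch applies the MCC-style Gaussian-kernel-weighted update to a purely linear FIR model only, and the step size and kernel width are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown FIR system to identify
w_true = np.array([0.5, -0.3, 0.2])
N = 5000
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

# Impulsive (non-Gaussian) measurement noise: rare large outliers
spikes = rng.random(N) < 0.05
d[spikes] += 10.0 * rng.standard_normal(spikes.sum())

# MCC-based LMS: the Gaussian kernel exp(-e^2 / 2*sigma^2) scales each
# update, so huge errors (outliers) contribute almost nothing, while
# small errors give a near-standard LMS step.
w = np.zeros(3)
mu, sigma = 0.05, 1.0
for n in range(2, N):
    u = x[n - 2:n + 1][::-1]          # regressor [x[n], x[n-1], x[n-2]]
    e = d[n] - w @ u
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
```

With an MSE-based LMS update (no kernel weight), the same outliers would repeatedly kick the weights away from `w_true`; the correntropy kernel is what buys the robustness the abstract describes.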
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. A general relation has therefore been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) could reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
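The bound described in this abstract, maximum seismic moment equal to injected volume times the modulus of rigidity, can be sketched numerically. The 30 GPa rigidity default and the injected volume below are illustrative assumptions, not values from the paper, and the moment-to-magnitude conversion uses the standard Hanks–Kanamori relation.

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper-bound moment magnitude for injection-induced seismicity.
    Per the abstract's bound, the maximum seismic moment (N*m) is the
    injected fluid volume times the modulus of rigidity. The 30 GPa
    rigidity is an assumed typical crustal value."""
    m0_max = rigidity_pa * injected_volume_m3   # seismic moment, N*m
    # Hanks-Kanamori moment magnitude for M0 in N*m
    return (2.0 / 3.0) * math.log10(m0_max) - 6.07

# e.g. a hypothetical 100,000 m^3 wastewater disposal project
mw = mcgarr_max_magnitude(1.0e5)
```

Because the bound is linear in volume while magnitude is logarithmic in moment, injecting ten times the volume raises the bound by only 2/3 of a magnitude unit, consistent with the abstract's observation that only the largest-volume operations approach magnitude 5.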
Efficiency at maximum power of a discrete feedback ratchet
Jarillo, Javier; Tangarife, Tomás; Cao, Francisco J.
2016-01-01
Efficiency at maximum power is found to be of the same order for a feedback ratchet and for its open-loop counterpart. However, feedback increases the output power up to a factor of five. This increase in output power is due to the increase in energy input and the effective entropy reduction obtained as a consequence of feedback. Optimal efficiency at maximum power is reached for time intervals between feedback actions two orders of magnitude smaller than the characteristic time of diffusion over a ratchet period length. The efficiency is computed consistently, taking into account the correlation between the control actions. We consider a feedback control protocol for a discrete feedback flashing ratchet, which works against an external load. We maximize the power output by optimizing the parameters of the ratchet, the controller, and the external load. The maximum power output is found to be upper bounded, so the attainable extracted power is limited. We then compute an upper bound for the efficiency of this isothermal feedback ratchet at maximum power output. We make this computation by applying recent developments of the thermodynamics of feedback-controlled systems, which give an equation to compute the entropy reduction due to information. However, this equation requires the computation of the probability of each of the possible sequences of the controller's actions. This computation becomes involved when the sequence of the controller's actions is non-Markovian, as is the case in most feedback ratchets. We here introduce an alternative procedure to set strong bounds to the entropy reduction in order to compute its value. In this procedure the bounds are evaluated in a quasi-Markovian limit, which emerges when there are big differences between the stationary probabilities of the system states. These big differences are an effect of the potential strength, which minimizes the departures from the Markovianity of the sequence of control actions, allowing also to
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of an MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Marrani, Alessio; Riccioni, Fabio
2011-01-01
Starting from basic identities of the group E8, we perform progressive reductions, namely decompositions with respect to the maximal and symmetric embeddings of E7xSU(2) and then of E6xU(1). This procedure provides a systematic approach to the basic identities involving invariant primitive tensor structures of various irreps. of finite-dimensional exceptional Lie groups. We derive novel identities for E7 and E6, highlighting the E8 origin of some well known ones. In order to elucidate the connections of this formalism to four-dimensional Maxwell-Einstein supergravity theories based on symmetric scalar manifolds (and related to irreducible Euclidean Jordan algebras, the unique exception being the triality-symmetric N = 2 stu model), we then derive a fundamental identity involving the unique rank-4 symmetric invariant tensor of the 0-brane charge symplectic irrep. of U-duality groups, with potential applications in the quantization of the charge orbits of supergravity theories, as well as in the study of mult...
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
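The Wiener index referenced in the abstract is the sum of shortest-path distances over all unordered vertex pairs of a graph. As a minimal illustration (a sketch of the quantity being maximized, not the paper's algorithm), it can be computed for small trees by breadth-first search from each vertex:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all unordered
    vertex pairs. `adj` maps each vertex to its neighbour list."""
    total = 0
    for src in adj:
        # BFS distances from src (edges are unweighted)
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Path on four vertices 1-2-3-4: pair distances 1+2+3+1+2+1 = 10
path4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Among trees on a fixed vertex set, paths maximize the Wiener index and stars minimize it, which is why degree-sequence constraints make the maximization problem in the abstract non-trivial.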
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
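The envelope-curve idea in this abstract can be sketched as follows, using synthetic flood records (the site counts, exponents, and noise levels below are illustrative, not the study's data): fit a line to peak flow versus drainage area in log-log space, then raise its intercept until the curve bounds every observed peak.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "outstanding flood" records: peak flow vs drainage area
# (hypothetical values; the real study used 883 gaged sites)
area = 10 ** rng.uniform(0.0, 4.0, 500)                 # square miles
peak = 100.0 * area ** 0.7 * 10 ** rng.normal(0.0, 0.3, 500)

# Least-squares line through log(peak) vs log(area), then shift the
# intercept up by the largest residual so the line envelopes the data.
log_a, log_q = np.log10(area), np.log10(peak)
slope, intercept = np.polyfit(log_a, log_q, 1)
intercept += np.max(log_q - (slope * log_a + intercept))

def envelope(drainage_area):
    """Upper-limit peak-flow estimate from the fitted envelope curve."""
    return 10 ** (slope * np.log10(drainage_area) + intercept)
```

By construction the envelope touches the most extreme record and lies above all others, mirroring how the study's curves "offer reasonable limits for estimates of maximum floods" within each region.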
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
[Achievement of therapeutic objectives].
Mantilla, Teresa
2014-07-01
Therapeutic objectives for patients with atherogenic dyslipidemia are achieved by improving patient compliance and adherence. Clinical practice guidelines address the importance of treatment compliance for achieving objectives. The combination of a fixed dose of pravastatin and fenofibrate increases the adherence by simplifying the drug regimen and reducing the number of daily doses. The good tolerance, the cost of the combination and the possibility of adjusting the administration to the patient's lifestyle helps achieve the objectives for these patients with high cardiovascular risk. Copyright © 2014 Sociedad Española de Arteriosclerosis y Elsevier España, S.L. All rights reserved.
Adoptees' Educational Achievements
Olsen, Rikke Fuglsang
This study analyses educational achievement at age 20 for 3,180 non-kin adoptees and at age 25 for 1,559 non-kin adoptees in Denmark by comparing them to non-adoptees. The study also analyses whether there are within-group differences in the educational achievement of non-kin adoptees according to country of origin. The results suggest that the relatively small gap between non-kin adoptees' and non-adoptees' educational achievements widens between ages 20 and 25. Moreover, the results show some differences in educational outcomes among non-kin adoptees with different countries of origin.
Stellar chemical abundances: in pursuit of the highest achievable precision
Bedell, Megan; Bean, Jacob L. [Department of Astronomy and Astrophysics, University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637 (United States); Meléndez, Jorge; Leite, Paulo [Departamento de Astronomia do IAG/USP, Universidade de São Paulo, Rua do Matão 1226, Cidade Universitária, São Paulo, SP 05508-900 (Brazil); Ramírez, Ivan [McDonald Observatory and Department of Astronomy, University of Texas at Austin, Austin, TX 78712-1206 (United States); Asplund, Martin, E-mail: mbedell@oddjob.uchicago.edu [Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston, ACT 2611 (Australia)
2014-11-01
The achievable level of precision on photospheric abundances of stars is a major limiting factor on investigations of exoplanet host star characteristics, the chemical histories of star clusters, and the evolution of the Milky Way and other galaxies. While model-induced errors can be minimized through the differential analysis of spectrally similar stars, the maximum achievable precision of this technique has been debated. As a test, we derive differential abundances of 19 elements from high-quality asteroid-reflected solar spectra taken using a variety of instruments and conditions. We treat the solar spectra as being from unknown stars and use the resulting differential abundances, which are expected to be zero, as a diagnostic of the error in our measurements. Our results indicate that the relative resolution of the target and reference spectra is a major consideration, with use of different instruments to obtain the two spectra leading to errors up to 0.04 dex. Use of the same instrument at different epochs for the two spectra has a much smaller effect (∼0.007 dex). The asteroid used to obtain the solar standard also has a negligible effect (∼0.006 dex). Assuming that systematic errors from the stellar model atmospheres have been minimized, as in the case of solar twins, we confirm that differential chemical abundances can be obtained at sub-0.01 dex precision with due care in the observations, data reduction, and abundance analysis.
Harm Reduction as "Continuum Care" in Alcohol Abuse Disorder.
Maremmani, Icro; Cibin, Mauro; Pani, Pier Paolo; Rossi, Alessandro; Turchetti, Giuseppe
2015-11-19
Alcohol abuse is one of the most important risk factors for health and is a major cause of death and morbidity. Despite this, only about one-tenth of individuals with alcohol abuse disorders receive therapeutic intervention and specific rehabilitation. Among the various dichotomies that limit an effective approach to the problem of alcohol use disorder treatment, one of the most prominent is integrated treatment versus harm reduction. For years, these two divergent strategies have been considered to be opposite poles of different philosophies of intervention. One is bound to the search for methods that aim to lead the subject to complete abstinence; the other prioritizes a progressive decline in substance use, with maximum reduction in the damage that is correlated with curtailing that use. Reduction of alcohol intake does not require any particular setting, but does require close collaboration between the general practitioner, specialized services for addiction, alcohology services and psychiatry. In patients who reach that target, significant savings in terms of health and social costs can be achieved. Harm reduction is a desirable target, even from an economic point of view. At the present state of neuroscientific knowledge, it is possible to go one step further in the logic that led to the integration of psychosocial and pharmacological approaches, by attempting to remove the shadow of social judgment that, at present, steers treatment towards absolute abstention.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}\,y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\text{ GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}\,y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five maximum phonation time trials per subject. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent interest.
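For contrast with the specialized planar-graph bounds above, the classical general-purpose baseline is the Edmonds-Karp algorithm, which augments along shortest residual paths. The sketch below is illustrative only (helper names are ours; undirected edges are modeled in the standard way as symmetric residual capacities):

```python
from collections import deque

def add_undirected_edge(cap, u, v, c):
    # an undirected edge of capacity c becomes symmetric residual capacities
    cap.setdefault(u, {})
    cap.setdefault(v, {})
    cap[u][v] = cap[u].get(v, 0) + c
    cap[v][u] = cap[v].get(u, 0) + c

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest residual s-t path
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:       # BFS in the residual graph
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                    # no augmenting path left
            return flow
        push, v = float('inf'), t              # bottleneck along the path
        while parent[v] is not None:
            push = min(push, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:           # update residual capacities
            cap[parent[v]][v] -= push
            cap[v][parent[v]] += push
            v = parent[v]
        flow += push
```

On a small undirected graph with edges s-a (3), s-b (2), a-b (1), a-t (2), b-t (3), the maximum flow equals the minimum cut value of 5.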
2011-01-01
Single-stage grid-connected Photovoltaic (PV) systems have advantages such as simple topology, high efficiency, etc. However, since all the control objectives, such as the maximum power point tracking, the synchronization with the utility voltage, and harmonics reduction for the output current, need to be considered simultaneously, the complexity of the control scheme is much increased. In this paper a new type of grid-connected photovoltaic (PV) system with Maximum Power Point Tracking (MPPT) and reactive power simul...
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU ' Gregorio Maranon' , E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit for the maximum resolution achievable with a high-resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus cannot be overcome with any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a limiting factor on yielding high-resolution images in tomographs with small crystal sizes. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.
Eric R. Eide; Mark H. Showalter
2012-01-01
We explore the relationship between sleep and student performance on standardized tests. We model test scores as a nonlinear function of sleep, which allows us to compute the hours of sleep associated with maximum test scores. We refer to this as “optimal” hours of sleep. We also evaluate how the sleep and student performance relationship changes with age. We use the Panel Study of Income Dynamics-Child Development Supplement, which includes excellent control variables that are not usually available...
Reduction of turbomachinery noise
Waitz, Ian A. (Inventor); Brookfield, John M. (Inventor); Sell, Julian (Inventor); Hayden, Belva J. (Inventor); Ingard, K. Uno (Inventor)
1999-01-01
In the invention, propagating broad band and tonal acoustic components of noise characteristic of interaction of a turbomachine blade wake, produced by a turbomachine blade as the blade rotates, with a turbomachine component downstream of the rotating blade, are reduced. This is accomplished by injection of fluid into the blade wake through a port in the rotor blade. The mass flow rate of the fluid injected into the blade wake is selected to reduce the momentum deficit of the wake to correspondingly increase the time-mean velocity of the wake and decrease the turbulent velocity fluctuations of the wake. With this fluid injection, reduction of both propagating broad band and tonal acoustic components of noise produced by interaction of the blade wake with a turbomachine component downstream of the rotating blade is achieved. In a further noise reduction technique, boundary layer fluid is suctioned into the turbomachine blade through a suction port on the side of the blade that is characterized as the relatively low-pressure blade side. As with the fluid injection technique, the mass flow rate of the fluid suctioned into the blade is here selected to reduce the momentum deficit of the wake to correspondingly increase the time-mean velocity of the wake and decrease the turbulent velocity fluctuations of the wake; reduction of both propagating broad band and tonal acoustic components of noise produced by interaction of the blade wake with a turbomachine component downstream of the rotating blade is achieved with this suction technique. Blowing and suction techniques are also provided in the invention for reducing noise associated with the wake produced by fluid flow around a stationary blade upstream of a rotating turbomachine.
Karatsuba-Ofman Multiplier with Integrated Modular Reduction for GF(2^m)
CUEVAS-FARFAN, E.
2013-05-01
In this paper a novel GF(2^m) multiplier based on the Karatsuba-Ofman Algorithm is presented. A binary field multiplication in polynomial basis is typically viewed as a two-step process: a polynomial multiplication followed by a modular reduction step. This research proposes a modification to the original Karatsuba-Ofman Algorithm in order to integrate the modular reduction inside the polynomial multiplication step. Modular reduction is achieved by using parallel linear feedback registers. The new algorithm is described in detail and results from a hardware implementation on FPGA technology are discussed. The hardware architecture is described in VHDL and synthesized for a Virtex-6 device. Although the proposed field multiplier can be implemented for arbitrary finite fields, the targeted finite fields are those recommended for Elliptic Curve Cryptography. Compared with other KOA multipliers, the proposed multiplier uses 36% fewer area resources and improves the maximum delay by 10%.
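The abstract describes the classical two-step flow that the paper's hardware architecture fuses: a Karatsuba-Ofman polynomial multiplication followed by a separate modular reduction. A minimal software sketch of that unfused two-step flow, over GF(2^8) with the AES polynomial 0x11B chosen purely for illustration (the paper targets ECC-sized fields in hardware), might look like:

```python
def clmul(a, b):
    # schoolbook carry-less (GF(2) polynomial) multiply, used as base case
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba(a, b, n):
    # Karatsuba-Ofman multiply of two GF(2) polynomials of < n bits (n a power of 2)
    if n <= 4:
        return clmul(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = karatsuba(a0, b0, h)
    hi = karatsuba(a1, b1, h)
    mid = karatsuba(a0 ^ a1, b0 ^ b1, h) ^ lo ^ hi   # cross term; + is XOR in GF(2)
    return (hi << n) ^ (mid << h) ^ lo

def reduce_mod(p, mod, m):
    # step 2: reduce the 2m-bit product modulo an irreducible polynomial of degree m
    for i in range(p.bit_length() - 1, m - 1, -1):
        if (p >> i) & 1:
            p ^= mod << (i - m)
    return p

def gf_mul(a, b, mod=0x11B, m=8):
    # two-step field multiply: Karatsuba product, then modular reduction
    return reduce_mod(karatsuba(a, b, 8), mod, m)
```

With the AES field this reproduces the well-known example {57}·{83} = {C1}; the paper's contribution is performing the reduction step concurrently with the multiplication via parallel linear feedback registers, which this sketch does not attempt.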
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
Guillemot, Sylvain
2008-01-01
Given a set of leaf-labeled trees with identical leaf sets, the well-known "Maximum Agreement SubTree" problem (MAST) consists of finding a subtree homeomorphically included in all input trees and with the largest number of leaves. Its variant called "Maximum Compatible Tree" (MCT) is less stringent, as it allows the input trees to be refined. Both problems are of particular interest in computational biology, where the trees encountered often have small degrees. In this paper, we study the parameterized complexity of MAST and MCT with respect to the maximum degree, denoted by D, of the input trees. It is known that MAST is polynomial for bounded D. As a counterpart, we show that the problem is W[1]-hard with respect to parameter D. Moreover, relying on recent advances in parameterized complexity we obtain a tight lower bound: while MAST can be solved in O(N^{O(D)}) time where N denotes the input length, we show that an O(N^{o(D)}) bound is not achievable, unless SNP is contained in SE. We also show that MCT is W[1...
Impact of Maximum Allowable Cost on CO2 Storage Capacity in Saline Formations.
Mathias, Simon A; Gluyas, Jon G; Goldthorpe, Ward H; Mackay, Eric J
2015-11-17
Injecting CO2 into deep saline formations represents an important component of many greenhouse-gas-reduction strategies for the future. A number of authors have posed concern over the thousands of injection wells likely to be needed. However, a more important criterion than the number of wells is whether the total cost of storing the CO2 is market-bearable. Previous studies have sought to determine the number of injection wells required to achieve a specified storage target. Here an alternative methodology is presented whereby we specify a maximum allowable cost (MAC) per ton of CO2 stored, a priori, and determine the corresponding potential operational storage capacity. The methodology takes advantage of an analytical solution for pressure build-up during CO2 injection into a cylindrical saline formation, accounting for two-phase flow, brine evaporation, and salt precipitation around the injection well. The methodology is applied to 375 saline formations from the U.K. Continental Shelf. Parameter uncertainty is propagated using Monte Carlo simulation with 10 000 realizations for each formation. The results show that MAC affects both the magnitude and spatial distribution of potential operational storage capacity on a national scale. Different storage prospects can appear more or less attractive depending on the MAC scenario considered. It is also shown that, under high well-injection rate scenarios with relatively low cost, there is adequate operational storage capacity for the equivalent of 40 years of U.K. CO2 emissions.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
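As an illustration of the simplest stationary member of this class, the sketch below simulates an Ornstein-Uhlenbeck position process using its exact discrete-time update (parameter names are ours, not the paper's; stationary variance is sigma^2 / (2*theta)):

```python
import math
import random

def simulate_ou(theta, sigma, mu, dt, n, x0=0.0, seed=1):
    # exact discrete-time update of the Ornstein-Uhlenbeck process
    # dx = -theta * (x - mu) dt + sigma dW
    rng = random.Random(seed)
    a = math.exp(-theta * dt)                       # decay per step
    sd = sigma * math.sqrt((1 - a * a) / (2 * theta))  # conditional std dev
    x, path = x0, [x0]
    for _ in range(n):
        x = mu + (x - mu) * a + sd * rng.gauss(0, 1)
        path.append(x)
    return path
```

A long simulated path should have sample mean near mu and sample variance near sigma^2 / (2*theta), the maximum-entropy stationary values for this process.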
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
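A minimal sketch of the two off-line heuristics the abstract names (function names are ours); note how, under the maximum-resource objective, processing items in increasing order tends to open more bins than decreasing order:

```python
def first_fit(items, capacity=1.0):
    # place each item in the first bin with room; open a new bin otherwise
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

def first_fit_decreasing(items, capacity=1.0):
    # largest items first: tends to pack tightly, using few bins
    return first_fit(sorted(items, reverse=True), capacity)

def first_fit_increasing(items, capacity=1.0):
    # smallest items first: tends to leave large items stranded in new bins
    return first_fit(sorted(items), capacity)
```

For example, with items [0.6, 0.5, 0.5, 0.4] and unit bins, First-Fit-Decreasing packs into 2 bins while First-Fit-Increasing uses 3.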
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
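The single-constraint argument summarized above can be sketched as a standard maximum-entropy calculation (our notation, not necessarily the paper's): maximizing the Shannon entropy subject only to normalization and a fixed mean of the logarithm of the observable yields a pure power law.

```latex
\begin{align}
  &\text{maximize } S = -\sum_x p(x)\ln p(x)
  \quad\text{s.t.}\quad \sum_x p(x) = 1, \qquad \sum_x p(x)\ln x = \chi,\\
  &\mathcal{L} = -\sum_x p(x)\ln p(x)
     - \alpha\Big(\sum_x p(x) - 1\Big)
     - \lambda\Big(\sum_x p(x)\ln x - \chi\Big),\\
  &\frac{\partial \mathcal{L}}{\partial p(x)}
   = -\ln p(x) - 1 - \alpha - \lambda\ln x = 0
  \;\Longrightarrow\;
  p(x) = e^{-(1+\alpha)}\,x^{-\lambda} \propto x^{-\lambda}.
\end{align}
```

The Lagrange multiplier $\lambda$ for the $\langle\ln x\rangle$ constraint plays the role of the power-law exponent, with $\alpha$ fixed by normalization.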
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
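The Estrada index above is straightforward to evaluate numerically from the adjacency spectrum. A minimal sketch (the graph and function names are illustrative, not from the paper):

```python
import numpy as np

def estrada_index(adj):
    """Estrada index EE(G) = sum_i exp(lambda_i), where lambda_i are the
    eigenvalues of the (symmetric) adjacency matrix of a simple graph."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.exp(eigenvalues)))

# Example: the path graph P3, whose adjacency eigenvalues are
# -sqrt(2), 0, sqrt(2), so EE(P3) = 2*cosh(sqrt(2)) + 1 ~= 5.356
p3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
ee = estrada_index(p3)
```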
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
[Breast dose reduction in female CT screening for lung cancer using various metallic shields].
Takada, Kenta; Kaneko, Junichi; Aoki, Kiyoshi
2009-12-20
We evaluated the effectiveness of metallic shields used to reduce the breast dose in thoracic computed tomography (CT). For the evaluation, we measured the breast surface dose and the image standard deviation (SD) in the lung area. The metallic shields were made from bismuth, zinc, copper, and iron. The bismuth shield has been marketed and used for dose reduction. The other three metallic shields were chosen because they have lower atomic numbers and a lower yield of characteristic X-rays. As a result, use of the metallic shields gave a lower breast dose than reducing the tube current at the same image SD. The insertion of a thin aluminum sheet between the shield and a phantom was also effective in reducing the breast surface dose. We calculated the dose reduction rate to evaluate the effectiveness of these metallic shields, defined as the ratio of the decrease in breast surface dose achieved by the metallic shields to the breast surface dose measured with the tube current reduced to give the same image SD. The maximum dose reduction rate was 6.4% for the bismuth shield and 12.0-13.3% for the other shields. These results indicate that the shields made from zinc, copper, and iron are more effective for dose reduction than the shield made from bismuth. The best dose reduction rate, 13.3%, was achieved when the zinc shield was placed 20 mm from the phantom with 0.2 mm of aluminum.
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
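The fitted relation reported above, Xmax − X0 = (0.59 ± 0.02)·YX/P·C, is a one-line calculation. A minimal sketch with purely illustrative input values (the numbers below are not from the paper):

```python
def predict_max_biomass(x0, yield_per_lactate, mic_lactate, k=0.59):
    """Predict maximum biomass X_max from the fitted relation
    X_max - X_0 = k * Y_X/P * C, with k = 0.59 +/- 0.02 in the paper.

    x0: inoculum biomass (g/L)
    yield_per_lactate: biomass yield per unit lactate produced, Y_X/P (g/g)
    mic_lactate: MIC of lactate for the strain, C (g/L)
    """
    return x0 + k * yield_per_lactate * mic_lactate

# Illustrative numbers only:
x_max = predict_max_biomass(x0=0.1, yield_per_lactate=0.12, mic_lactate=60.0)
# 0.1 + 0.59 * 0.12 * 60 = 4.348 g/L
```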
2010-01-01
A drill-string is a slender structure that drills rock to search for oil. The nonlinear interaction between the bit and the rock is of great importance for the drill-string dynamics. The interaction model has uncertainties, which are modeled using the nonparametric probabilistic approach. This paper deals with a procedure to perform the identification of the dispersion parameter of the probabilistic model of uncertainties of a bit-rock interaction model. The bit-rock i...
Disparity in academic achievement
User
The study examined the disparity in academic achievement between female and male students in colleges of teachers' education in Oromia, and sought to identify the variables contributing to it, among them social services (Dereje, Dawit & ...), student attitudes (motivation), community commitment to children's education, involvement in social and academic activities, and gender patterns in achievement.
Ohrn, Deborah Gore, Ed.
1993-01-01
This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…
Assessing Handwriting Achievement.
Ediger, Marlow
Teachers in the school setting need to emphasize quality handwriting across the curriculum. Quality handwriting means that the written content is easy to read in either manuscript or cursive form. Handwriting achievement can be assessed, but not compared to the precision of assessing basic addition, subtraction, multiplication, and division facts.…
Bracey, Gerald W.
2008-01-01
In his "Wall Street Journal" op-ed on the 25th of anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the…
Cognitive Processes and Achievement.
Hunt, Dennis; Randhawa, Bikkar S.
For a group of 165 fourth- and fifth-grade students, four achievement test scores were correlated with success on nine tests designed to measure three cognitive functions: sustained attention, successive processing, and simultaneous processing. This experiment was designed in accordance with Luria's model of the three functional units of the…
Test images for the maximum entropy image restoration method
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
Higuita Cano, Mauricio; Mousli, Mohamed Islam Aniss; Kelouwani, Sousso; Agbossou, Kodjo; Hammoudi, Mhamed; Dubé, Yves
2017-03-01
This work investigates the design and validation of a fuel cell management system (FCMS) that can operate when the fuel cell is at water-freezing temperature. The FCMS is based on a new tracking technique with intelligent prediction, which combines Maximum Efficiency Point Tracking with a variable perturbation-current step and fuzzy logic (MEPT-FL). Unlike conventional fuel cell control systems, the proposed FCMS accounts for cold-weather conditions and reduces fuel cell set-point oscillations. In addition, the FCMS is built to respond quickly and effectively to variations of the electric load. A temperature controller stage is designed in conjunction with the MEPT-FL in order to operate the FC at low temperatures while tracking the maximum efficiency point. The simulation results, together with the experimental validation, suggest that the proposed approach is effective and can achieve an average efficiency improvement of up to 8%. The MEPT-FL is validated using a 500 W Proton Exchange Membrane Fuel Cell (PEMFC).
Maćkowiak, Mariusz; Kątowski, Piotr
1996-06-01
Two-dimensional zero-field nutation NQR spectroscopy has been used to determine the full quadrupolar tensor of spin-3/2 nuclei in several molecular crystals containing 35Cl and 75As nuclei. The problems of reconstructing 2D-nutation NQR spectra using conventional methods and the advantages of implementing the maximum entropy method (MEM) are analyzed. It is shown that replacing the conventional Fourier transform with MEM data processing in 2D NQR spectroscopy leads to sensitivity improvement, reduction of instrumental artefacts and truncation errors, shortened data acquisition times and suppression of noise, while at the same time increasing the resolution. The effects of off-resonance irradiation in nutation experiments are demonstrated both experimentally and theoretically. It is shown that off-resonance nutation spectroscopy is a useful extension of the conventional on-resonance experiments, facilitating the determination of asymmetry parameters in multiple spectra. The theoretical description of the off-resonance effects in 2D nutation NQR spectroscopy is given, and general exact formulas for the asymmetry parameter are obtained. In off-resonance conditions, the resolution of the nutation NQR spectrum decreases with the spectrometer offset. However, enhanced resolution can be achieved by using the maximum entropy method in the 2D data reconstruction.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in that feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
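The piecewise-linearization trick described above can be sketched compactly. Because the entropy term H(x) = −x·ln x is concave, it is bounded above by its tangent lines, so introducing an auxiliary variable t_i ≤ (every tangent of H at x_i) and maximizing Σ t_i gives a linear program whose optimum tracks the entropy objective. This is an illustrative reconstruction, not the authors' implementation; it maximizes entropy subject to simple equality constraints rather than performing a full signal restoration:

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(n_bins, a_eq, b_eq, n_segments=40):
    """Maximize sum_i H(x_i), H(x) = -x*ln(x), subject to A_eq @ x = b_eq,
    via a piecewise-linear LP: since H is concave, the tangent constraints
    t_i <= H(s_k) + H'(s_k) * (x_i - s_k) form an upper envelope of H, and
    maximizing sum_i t_i pushes each t_i onto that envelope at x_i."""
    s = np.linspace(0.02, 1.0, n_segments)       # tangent points
    slope = -(np.log(s) + 1.0)                   # H'(s)
    intercept = (-s * np.log(s)) - slope * s     # H(s) - H'(s)*s

    n_var = 2 * n_bins                           # variables: [x, t]
    c = np.concatenate([np.zeros(n_bins), -np.ones(n_bins)])  # maximize sum t

    rows, rhs = [], []
    for i in range(n_bins):
        for k in range(len(s)):
            row = np.zeros(n_var)
            row[n_bins + i] = 1.0                # +t_i
            row[i] = -slope[k]                   # -H'(s_k) * x_i
            rows.append(row)
            rhs.append(intercept[k])
    a_eq_full = np.hstack([np.asarray(a_eq, float),
                           np.zeros((len(b_eq), n_bins))])
    bounds = [(1e-6, 1.0)] * n_bins + [(None, None)] * n_bins
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=a_eq_full, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:n_bins]

# With only a normalization constraint, MaxEnt recovers the uniform solution:
x = maxent_lp(3, a_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0])
```

The number of tangent segments controls the accuracy of the approximation, mirroring the paper's observation that variable bounds depend on the number of segments used.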
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of the increase in a ship's draft and trim due to its motion in restricted waters. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
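As a point of reference for the formulas being compared, a widely quoted simplified form attributed to Barrass can be sketched as follows. The divisors 100 (open water) and 50 (confined channel) are the commonly cited shortcut values, taken here as an assumption; the full Barrass formula also involves the blockage factor and is not reproduced from the paper:

```python
def barrass_max_squat(block_coefficient, speed_knots, confined=True):
    """Simplified Barrass-style estimate of maximum squat, in meters.

    Shortcut form often quoted as:
        open water:        d_max ~= Cb * Vk**2 / 100
        confined channel:  d_max ~= Cb * Vk**2 / 50
    Treat the divisors as the commonly cited approximation, not the
    exact formulas compared in the paper.
    """
    divisor = 50.0 if confined else 100.0
    return block_coefficient * speed_knots ** 2 / divisor

# A full-form cargo ship (Cb = 0.80) at 8 knots in a canal:
squat = barrass_max_squat(0.80, 8.0, confined=True)
# 0.80 * 64 / 50 = 1.024 m
```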
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, such as different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
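The idea of a shared fundamental frequency with per-channel amplitudes and phases can be illustrated with a grid search. This is a simplified sketch, not the paper's estimator: assuming equal-variance white Gaussian noise in every channel, maximum likelihood reduces to maximizing, over candidate f0, the summed energy of each channel's least-squares projection onto the harmonic sinusoids:

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harmonics=5):
    """Grid-search pitch estimate for channels sharing one fundamental f0.

    For each candidate f0, project every channel onto cos/sin pairs at the
    harmonics of f0 (each channel keeps its own amplitudes and phases) and
    score the candidate by the total projection energy across channels."""
    n = len(channels[0])
    t = np.arange(n) / fs
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        basis = np.column_stack(
            [fn(2 * np.pi * f0 * h * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.cos, np.sin)])
        score = 0.0
        for y in channels:
            coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
            score += float(np.sum((basis @ coef) ** 2))
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# Two noisy channels sharing a 150 Hz fundamental, different gains/phases:
rng = np.random.default_rng(0)
fs = 8000.0
t = np.arange(1024) / fs
ch1 = (np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
       + 0.1 * rng.standard_normal(1024))
ch2 = (0.7 * np.sin(2 * np.pi * 150 * t + 1.0)
       + 0.1 * rng.standard_normal(1024))
f0_hat = multichannel_pitch([ch1, ch2], fs, np.arange(100.0, 200.0, 2.0))
```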
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
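The Poisson-likelihood fit underlying CORA can be illustrated on a single line with a known profile. This is a sketch, not CORA's implementation: CORA derives a fixed-point equation for the line flux, whereas here the same Poisson log-likelihood is simply maximized numerically over a scalar flux (all names and numbers below are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_line_flux(counts, profile, background):
    """Poisson maximum-likelihood fit of a single emission-line flux.

    Model per bin: mu_i = flux * profile_i + background_i, with
    counts_i ~ Poisson(mu_i). Maximizes sum_i (c_i * log mu_i - mu_i),
    i.e. minimizes its negative, over the scalar flux."""
    counts = np.asarray(counts, float)
    profile = np.asarray(profile, float)

    def neg_loglike(flux):
        mu = flux * profile + background
        if np.any(mu <= 0):
            return np.inf
        return float(np.sum(mu - counts * np.log(mu)))

    res = minimize_scalar(neg_loglike, bounds=(0.0, counts.sum() + 1.0),
                          method="bounded")
    return res.x

# Gaussian line profile on a 21-bin window over a flat background:
x = np.arange(21)
profile = np.exp(-0.5 * ((x - 10) / 2.0) ** 2)
profile /= profile.sum()                 # normalize to unit total flux
rng = np.random.default_rng(1)
true_flux, background = 50.0, 2.0
counts = rng.poisson(true_flux * profile + background)
flux_hat = fit_line_flux(counts, profile, background)
```

With low count numbers this Poisson treatment avoids the bias a Gaussian least-squares fit would introduce, which is the motivation the abstract gives for rigorous Poisson statistics.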
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
School Segregation and Racial Academic Achievement Gaps
Sean F. Reardon
2016-09-01
Although it is clear that racial segregation is linked to academic achievement gaps, the mechanisms underlying this link have been debated since James Coleman published his eponymous 1966 report. In this paper, I examine sixteen distinct measures of segregation to determine which is most strongly associated with academic achievement gaps. I find clear evidence that one aspect of segregation in particular—the disparity in average school poverty rates between white and black students’ schools—is consistently the single most powerful correlate of achievement gaps, a pattern that holds in both bivariate and multivariate analyses. This implies that high-poverty schools are, on average, much less effective than lower-poverty schools and suggests that strategies that reduce the differential exposure of black, Hispanic, and white students to poor schoolmates may lead to meaningful reductions in academic achievement gaps.
Tang, Haolin; Cai, Shichang; Xie, Shilei; Wang, Zhengbang; Tong, Yexiang; Pan, Mu; Lu, Xihong
2016-02-01
A new class of dual metal and N doped carbon catalysts with well-defined porous structure derived from metal-organic frameworks (MOFs) has been developed as a high-performance electrocatalyst for oxygen reduction reaction (ORR). Furthermore, the microbial fuel cell (MFC) device based on the as-prepared Ni/Co and N codoped carbon as air cathode catalyst achieves a maximum power density of 4335.6 mW m(-2) and excellent durability.
Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading
Pierluigi Guerriero
2015-01-01
A maximum power tracking algorithm exploiting operating-point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is demonstrated by means of circuit-level simulation and experimental results. Experiments showed that, in comparison with a standard perturb-and-observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and we accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
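The core problem the abstract describes, several local maxima in a shaded string's P-V curve, of which only one is the true maximum power point, can be sketched on a sampled curve. This is an illustrative toy, not the paper's panel-level algorithm:

```python
def global_mpp(voltages, powers):
    """Locate local maxima of a sampled P-V curve and return the global one.

    Under partial shading, the P-V curve of a series string has several
    local maxima; a plain perturb-and-observe tracker can lock onto any
    of them. This sketch scans the sampled curve, lists every local peak,
    and picks the absolute maximum power point."""
    peaks = []
    for i in range(1, len(powers) - 1):
        if powers[i] >= powers[i - 1] and powers[i] > powers[i + 1]:
            peaks.append((voltages[i], powers[i]))
    # endpoints can also be maxima
    if powers[0] > powers[1]:
        peaks.insert(0, (voltages[0], powers[0]))
    if powers[-1] > powers[-2]:
        peaks.append((voltages[-1], powers[-1]))
    v_mpp, p_mpp = max(peaks, key=lambda vp: vp[1])
    return v_mpp, p_mpp, peaks

# Two-peak curve typical of one shaded submodule (illustrative numbers):
v = [0, 5, 10, 15, 20, 25, 30, 35]
p = [0, 40, 80, 60, 55, 90, 70, 0]
v_mpp, p_mpp, peaks = global_mpp(v, p)
# local peaks at (10, 80) and (25, 90); global MPP is (25, 90)
```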
Boolean functions of an odd number of variables with maximum algebraic immunity
LI Na; QI WenFeng
2007-01-01
In this paper, we study Boolean functions of an odd number of variables with maximum algebraic immunity. We identify three classes of such functions and give some necessary conditions on such functions, which help to examine whether a Boolean function of an odd number of variables has maximum algebraic immunity. Further, some necessary conditions for such functions to also have higher nonlinearity are proposed, and a class of these functions is obtained. Finally, we present a sufficient and necessary condition for Boolean functions of an odd number of variables to achieve maximum algebraic immunity and to be also 1-resilient.
Achieving Organizational Excellence Through
Mehdi Abzari
2009-04-01
Today, in order to create motivation and desirable behavior in employees, to attain organizational goals, to increase human resources productivity and finally to achieve organizational excellence, top managers of organizations apply new and effective strategies. One of these strategies is creating a desirable corporate culture. This research was conducted to identify the path to organizational excellence through corporate culture, according to the standards and criteria of organizational excellence. In this paper, the researchers identify twenty models and components of corporate culture and, based on the industry, organizational goals and the EFQM model, develop a model called "The Eskimo model of Culture-Excellence". The research method is a survey and field study, with questionnaires distributed among 116 managers and employees. To assess the reliability of the questionnaires, Cronbach's alpha was measured at 0.95 for the ideal situation and 0.97 for the current situation. Systematic sampling was used, and in the pre-test stage 45 questionnaires were distributed. A comparison between the current and the ideal corporate culture, based on the views of managers and employees, was made, and it is concluded that corporate culture is the main factor facilitating corporate excellence and success in achieving organizational effectiveness. The contribution of this paper is a localized, applicable model of corporate excellence through reinforcing corporate culture.
2012-09-13
... Paperwork Reduction Act (44 U.S.C. 3501 et seq.); Is certified as not having a significant economic impact... into the new Missouri rule include: --10 CSR 10-2.040, Maximum Allowable Emission of Particulate Matter from Fuel Burning Equipment Used for Indirect Heating, for the Kansas City Metropolitan Area; --10 CSR...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
An approximate, maximum-terminal-velocity descent to a point
Eisler, G. Richard; Hull, David G.
A neighboring extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target in a vertical plane. The resulting two-part, feedback control scheme initially solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Secondly, quadrature about the nominal provides the lift perturbation necessary to achieve the target downrange. On-line feedback simulations are run for the proposed scheme and a form of proportional navigation and compared with an off-line parameter optimization method. The neighboring extremal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. However, the update rate is degraded, though the proposed method can be executed in real time.
Maximum Power Point Tracking of Photovoltaic System Using Intelligent Controller
Swathy C.S
2013-04-01
Photovoltaic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load as temperature and solar irradiation change. This overcomes the problem of mismatch between the given load and the solar array. The energy conservation principle is used to obtain the small-signal model and transfer function. A simulation of an MPPT controller with a DC/DC boost converter feeding a load is carried out. A PI controller and a fuzzy logic controller were used as the MPPT controller driving the DC/DC converter. Simulation and experimental results showed excellent performance and were used to compare the PI and fuzzy logic controllers.
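As a concrete illustration of the MPPT idea, here is the classic perturb-and-observe hill-climbing rule. This is a generic MPPT technique, not the abstract's PI or fuzzy controllers, and the power curve below is a toy unimodal stand-in for a real PV characteristic:

```python
def perturb_and_observe(measure, v0=20.0, dv=0.5, steps=100):
    """Minimal perturb-and-observe MPPT sketch: nudge the operating
    voltage and keep moving in whichever direction increased power.
    `measure(v)` returns the PV power delivered at operating voltage v."""
    v, p = v0, measure(v0)
    step = dv
    for _ in range(steps):
        v_new = v + step
        p_new = measure(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            step = -step
        v, p = v_new, p_new
    return v

def power(v):
    """Toy unimodal power curve with its maximum power point at 26 V."""
    return -(v - 26.0) ** 2

v_mpp = perturb_and_observe(power)
```

The tracker climbs to the maximum power point and then oscillates within one voltage step of it, which is the characteristic steady-state behavior (and known drawback) of perturb-and-observe.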
Achievable Precision for Optical Ranging Systems
Moision, Bruce; Erkmen, Baris I.
2012-01-01
Achievable RMS errors in estimating the phase, frequency, and intensity of a direct-detected intensity-modulated optical pulse train are presented. For each parameter, the Cramer-Rao-Bound (CRB) is derived and the performance of the Maximum Likelihood estimator is illustrated. Approximations to the CRBs are provided, enabling an intuitive understanding of estimator behavior as a function of the signaling parameters. The results are compared to achievable RMS errors in estimating the same parameters from a sinusoidal waveform in additive white Gaussian noise. This establishes a framework for a performance comparison of radio frequency (RF) and optical science. Comparisons are made using parameters for state-of-the-art deep-space RF and optical links. Degradations to the achievable errors due to clock phase noise and detector jitter are illustrated.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Maximum Velocities in Flexion and Extension Actions for Sport
Jessop David M.
2016-04-01
Speed of movement is fundamental to the outcome of many human actions. A variety of techniques can be implemented in order to maximise movement speed depending on the goal of the movement, constraints, and the time available. Knowing maximum movement velocities is therefore useful for developing movement strategies but also as input into muscle models. The aim of this study was to determine maximum flexion and extension velocities about the major joints in upper and lower limbs. Seven university to international level male competitors performed flexion/extension at each of the major joints in the upper and lower limbs under three conditions: isolated; isolated with a countermovement; involvement of proximal segments. 500 Hz planar high speed video was used to calculate velocities. The highest angular velocities in the upper and lower limb were 50.0 rad·s⁻¹ and 28.4 rad·s⁻¹, at the wrist and knee, respectively. As was true for most joints, these were achieved with the involvement of proximal segments; however, ANOVA showed few significant differences (p < 0.05) between conditions. Different segment masses, structures and locations produced differing results in the upper and lower limbs, highlighting the requirement of segment-specific strategies for maximal movements.
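The core measurement above, peak joint angular velocity from angle samples digitized at 500 Hz, reduces to numerical differentiation. A hedged sketch with a synthetic motion profile (not the study's data); the function name is illustrative:

```python
import numpy as np

def peak_angular_velocity(theta, fs=500.0):
    """Peak angular speed (rad/s) from joint-angle samples theta (rad)
    sampled at fs Hz, using central finite differences."""
    omega = np.gradient(theta, 1.0 / fs)   # d(theta)/dt at each sample
    return np.max(np.abs(omega))

# synthetic flexion: a smooth 2-rad swing completed in 0.1 s
t = np.arange(0, 0.1, 1.0 / 500.0)
theta = 1.0 - np.cos(np.pi * t / 0.1)
peak = peak_angular_velocity(theta)
# analytic peak velocity for this profile is pi/0.1, about 31.4 rad/s
```

At 500 Hz the central-difference estimate sits within a fraction of a percent of the analytic peak for a motion this smooth; real video-derived angles would normally be filtered before differentiation.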
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (ṀO2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10 fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent (aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
Public health impact of salt reduction
Hendriksen, M.A.H.
2015-01-01
The health and economic burden related to cardiovascular diseases is substantial and prevention of these diseases remains a challenge. There is convincing evidence that high salt intake affects blood pressure and the risk of cardiovascular diseases. As salt intake is far above the recommended maximum level of intake, salt reduction may help to reduce cardiovascular disease incidence. However, the effect of salt reduction initiatives on intake levels and long-term health is largely unknown. Th...
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
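The stochastic decision model behind difference scaling can be sketched outside R as well. Below is a minimal Python maximum likelihood fit, a simplified stand-in for the MLDS package (random quadruples instead of a real experimental design; every name and parameter value is illustrative): the observer reports that pair (a,b) differs more than (c,d) whenever (ψb − ψa) − (ψd − ψc) plus Gaussian noise is positive, and the scale ψ is recovered by minimizing the negative log-likelihood with its endpoints pinned at 0 and 1.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, quads, resp):
    """Negative log-likelihood of difference-scaling responses.
    params: interior scale values plus log(sigma); endpoints pinned."""
    psi = np.concatenate([[0.0], params[:-1], [1.0]])
    sigma = np.exp(params[-1])                  # log-parameterized: sigma > 0
    a, b, c, d = quads.T
    delta = (psi[b] - psi[a]) - (psi[d] - psi[c])
    p = norm.cdf(delta / sigma).clip(1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

rng = np.random.default_rng(1)
true_psi = np.array([0.0, 0.1, 0.3, 0.6, 1.0])   # "true" perceptual scale
quads = rng.integers(0, 5, size=(2000, 4))       # random (a,b,c,d) trials
signal = (true_psi[quads[:, 1]] - true_psi[quads[:, 0]]
          - true_psi[quads[:, 3]] + true_psi[quads[:, 2]])
resp = (signal + rng.normal(0.0, 0.2, size=2000) > 0).astype(float)

x0 = np.concatenate([np.linspace(0.25, 0.75, 3), [np.log(0.3)]])
fit = minimize(neg_log_lik, x0, args=(quads, resp), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-4})
est_psi = np.concatenate([[0.0], fit.x[:-1], [1.0]])
```

With a couple of thousand trials the estimated scale lands close to the generating one, mirroring how MLDS recovers a perceptual scale from binary comparisons.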
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different transfer laws affect the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
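The elegant linear-time solution the abstract alludes to is commonly known as Kadane's algorithm; a minimal imperative sketch (allowing the empty segment, so the result is never negative):

```python
def max_segment_sum(xs):
    """Largest sum of a contiguous segment of xs, in one linear pass,
    tracking the best segment ending at the current position."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)   # extend the segment or restart
        best = max(best, ending_here)
    return best

# max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]) == 187
```

The 187 comes from the segment 59 + 26 − 53 + 58 + 97. The monadic development in the tutorial derives an equivalent program calculationally rather than positing it up front.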
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (Bt)t≥0 and the equation of motion dXt = vt dt + √2 dBt, we set St = max0≤s≤t Xs and consider the optimal control problem supv E(Sτ − Cτ), where C > 0 and the supremum is taken over all admissible controls v satisfying vt ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | Xt ∉ (ℓ0, ℓ1)} with μ0 < 0 < μ1 and ℓ0 < 0 < ℓ1. The optimal control equals μ0 when Xt < g∗(St) and μ1 when Xt > g∗(St), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Turner, Eve; Hawkins, Peter
2016-01-01
.... This article presents the results and implications of an international study which explored its use in executive and business coaching, with the aim of sharing best practice and achieving maximum...
Radaydeh, Redha Mahmoud
2014-02-01
This paper studies generalized single-stream transmit beamforming employing receive array co-channel interference reduction algorithms under slow and flat fading multiuser wireless systems. The impact of imperfect prediction of channel state information for the desired user spatially uncorrelated transmit channels on the effectiveness of transmit beamforming for different interference reduction techniques is investigated. The case of over-loaded receive array with closely-spaced elements is considered, wherein it can be configured to specified interfering sources. Both dominant interference reduction and adaptive interference reduction techniques for statistically ordered and unordered interferers powers, respectively, are thoroughly studied. The effect of outdated statistical ordering of the interferers powers on the efficiency of dominant interference reduction is studied and then compared against the adaptive interference reduction. For the system models described above, new analytical formulations for the statistics of combined signal-to-interference-plus-noise ratio are presented, from which results for conventional maximum ratio transmission and single-antenna best transmit selection can be directly deduced as limiting cases. These results are then utilized to obtain quantitative measures for various performance metrics. They are also used to compare the achieved performance of various configuration models under consideration. © 1972-2012 IEEE.
In Situ Rates of Sulfate Reduction in Response to Geochemical Perturbations
Kneeshaw, T.A.; McGuire, J.T.; Cozzarelli, I.M.; Smith, E.W.
2011-01-01
Rates of in situ microbial sulfate reduction in response to geochemical perturbations were determined using Native Organism Geochemical Experimentation Enclosures (NOGEEs), a new in situ technique developed to facilitate evaluation of controls on microbial reaction rates. NOGEEs function by first trapping a native microbial community in situ and then subjecting it to geochemical perturbations through the introduction of various test solutions. On three occasions, NOGEEs were used at the Norman Landfill research site in Norman, Oklahoma, to evaluate sulfate-reduction rates in wetland sediments impacted by landfill leachate. The initial experiment, in May 2007, consisted of five introductions of a sulfate test solution over 11 d. Each test stimulated sulfate reduction with rates increasing until an apparent maximum was achieved. Two subsequent experiments, conducted in October 2007 and February 2008, evaluated the effects of concentration on sulfate-reduction rates. Results from these experiments showed that faster sulfate-reduction rates were associated with increased sulfate concentrations. Understanding variability in sulfate-reduction rates in response to perturbations may be an important factor in predicting rates of natural attenuation and bioremediation of contaminants in systems not at biogeochemical equilibrium. Copyright © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
Bilingualism and academic achievement.
Han, Wen-Jui
2012-01-01
Using the Early Childhood Longitudinal Study, Kindergarten Cohort, this study examines the role that bilingualism plays in children's academic developmental trajectories during their early school years, with particular attention on the school environment (N = 16,380). Growth-curve results showed that despite starting with lower math scores in kindergarten, Mixed Bilingual children fully closed the math gap with their White English Monolingual peers by fifth grade. However, because non-English-Dominant Bilinguals and non-English Monolinguals started kindergarten with significantly lower reading and math scores compared to their English Monolingual peers, by fifth grade the former groups still had significantly lower scores. School-level factors explained about one third of the reductions in the differences in children's academic performance.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained at grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, the 50-mm grip span might be recommended for pliers, providing maximum exertion in gripping tasks as well as lower cutting-force-to-maximum-grip ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Achieving form in autobiography
Nicholas (Nick) Meihuizen
2014-06-01
This article argues that, unlike biographies, which tend to follow patterns based on conventional expectations, salient autobiographies achieve forms unique to themselves. The article draws on ideas from contemporary formalists such as Peter McDonald and Angela Leighton but also considers ideas on significant form stemming from earlier writers and critics such as P.N. Furbank and Willa Cather. In extracting from these writers the elements of what they consider comprises achieved form, the article does not seek to provide a rigid means of objectively testing the formal attributes of a piece of writing. It rather offers qualitative reminders of the need to be alert to the importance of form, even if the precise nature of this importance is not possible to define. Form is involved in meaning, and this continuously opens up possibilities regarding the reader’s relationship with the work in question. French genetic critic Debray Genette distinguishes between ‘semantic effect’ (the direct telling involved in writing) and ‘semiological effect’ (the indirect signification involved). It is the latter, the article argues in summation, which gives a work its singular nature, producing a form that is not predictable but suggestive, imaginative.
Y. Haseli
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent at the condition of fixed heat input.
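For a concrete point of comparison among these engines: the Curzon-Ahlborn efficiency at maximum power is the well-known 1 − √(Tc/Th), which always falls below the Carnot limit 1 − Tc/Th. A quick numeric check (the reservoir temperatures are illustrative):

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible (Carnot) efficiency limit between two reservoirs."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Endoreversible efficiency at the maximum power operating point."""
    return 1.0 - math.sqrt(t_cold / t_hot)

eta_carnot = carnot_efficiency(600.0, 300.0)       # 0.5
eta_ca = curzon_ahlborn_efficiency(600.0, 300.0)   # 1 - sqrt(0.5), about 0.293
```

The gap between the two numbers is the efficiency given up by running the endoreversible engine at maximum power rather than reversibly.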
Probable Maximum Earthquake Magnitudes for the Cascadia Subduction Zone
Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.
2013-12-01
The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
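One way to operationalize mp(T) from the ingredients the abstract names, a tapered Gutenberg-Richter survivor function for seismic moment plus Poisson occurrence, is to solve for the magnitude whose probability of being exceeded within T years equals a chosen level. This is a hedged sketch; the event rate, β-value, and corner magnitude below are illustrative, not the paper's Cascadia estimates.

```python
import numpy as np
from scipy.optimize import brentq

def moment(m):
    """Scalar seismic moment (N*m) from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * m + 9.05)

def probable_max_magnitude(T, rate, beta, m_corner, m_min=5.0, p=0.5):
    """Magnitude mp(T) whose exceedance probability within T years is p,
    under a tapered Gutenberg-Richter (TGR) survivor function
    G(M) = (M_min/M)**beta * exp((M_min - M)/M_c) with Poisson occurrence.
    rate: events per year with magnitude >= m_min."""
    M_min, M_c = moment(m_min), moment(m_corner)

    def exceedance_gap(m):
        G = (M_min / moment(m)) ** beta * np.exp((M_min - moment(m)) / M_c)
        return 1.0 - np.exp(-rate * T * G) - p

    return brentq(exceedance_gap, m_min, 10.5)   # root-find on magnitude

# illustrative numbers: 3 events/yr above M5, beta = 0.65, corner at M8.8
mp = probable_max_magnitude(T=10000.0, rate=3.0, beta=0.65, m_corner=8.8)
```

With these inputs mp sits a little above the corner magnitude over a 10,000-year window, illustrating how a long paleoseismic record pushes the probable maximum toward, and past, the corner of the distribution.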
Mould, Richard F.; Asselain, Bernard; DeRycke, Yann
2004-03-01
For breast cancer where the prognosis of early stage disease is very good and even when local recurrences do occur they can present several years after treatment, the hospital resources required for annual follow-up examinations of what can be several hundreds of patients are financially significant. If, therefore, there is some method to estimate a maximum length of follow-up Tmax necessary, then cost savings of physicians' time as well as outpatient workload reductions can be achieved. In modern oncology where expenses continue to increase exponentially due to staff salaries and the expense of chemotherapy drugs and of new treatment and imaging technology, the economic situation can no longer be ignored. The methodology of parametric modelling, based on the lognormal distribution is described, showing that useful estimates for Tmax can be made, by making a trade-off between Tmax and the fraction of patients who will experience a delay in detection of their local recurrence. This trade-off depends on the chosen tail of the lognormal. The methodology is described for stage T1 and T2 breast cancer and it is found that Tmax = 4 years which is a significant reduction on the usual maximum of 10 years of follow-up which is employed by many hospitals for breast cancer patients. The methodology is equally applicable for cancers at other sites where the prognosis is good and some local recurrences may not occur until several years post-treatment.
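The trade-off described above, choosing Tmax so that only a small tail fraction of local recurrences would surface after follow-up ends, amounts to reading a quantile off the fitted lognormal. A sketch with illustrative parameters (not the paper's fitted values):

```python
from scipy.stats import lognorm

def follow_up_horizon(median_years, sigma, missed_fraction):
    """Follow-up length Tmax such that a fraction `missed_fraction` of
    eventual local recurrences would present after Tmax, assuming
    recurrence times are lognormal with the given median and log-sd."""
    dist = lognorm(s=sigma, scale=median_years)   # scale = exp(mu) = median
    return dist.ppf(1.0 - missed_fraction)

# illustrative: median recurrence time 1.8 y, log-sd 0.7, accept missing 5%
tmax = follow_up_horizon(1.8, 0.7, 0.05)
```

Tightening the accepted missed fraction pushes Tmax out along the lognormal tail, which is exactly the cost-versus-delay trade-off the abstract describes.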
MAXIMUM INFORMATION AND OPTIMUM ESTIMATING FUNCTION
林路
2003-01-01
In order to construct estimating functions in some parametric models, this paper introduces two classes of information matrices. Some necessary and sufficient conditions for the information matrices achieving their upper bounds are given. For the problem of estimating the median, some optimum estimating functions based on the information matrices are acquired. Under some regularity conditions, an approach to carrying out the best basis function is introduced. In nonlinear regression models, an optimum estimating function based on the information matrices is obtained. Some examples are given to illustrate the results. Finally, the concept of optimum estimating function and the methods of constructing optimum estimating functions are developed in more general statistical models.
Achieving English Spoken Fluency
王鲜杰
2000-01-01
Language is first and foremost oral, spoken language. Speaking is the most important of the four skills (listening, speaking, reading, writing) and also the most difficult. To have an all-round command of a language one must be able to speak and to understand the spoken language; it is not enough for a language learner to have good reading and writing skills alone. As English language teachers, we need to focus on improving learners' English speaking skill to meet the needs of our society and our country, and to provide learners with useful techniques for achieving English spoken fluency. This paper focuses on how to improve learners' speaking skill.
Achieving diagnosis by consensus
Kane, Bridget
2009-08-01
This paper provides an analysis of the collaborative work conducted at a multidisciplinary medical team meeting, where a patient’s definitive diagnosis is agreed, by consensus. The features that distinguish this process of diagnostic work by consensus are examined in depth. The current use of technology to support this collaborative activity is described, and experienced deficiencies are identified. Emphasis is placed on the visual and perceptual difficulty for individual specialities in making interpretations, and on how, through collaboration in discussion, definitive diagnosis is actually achieved. The challenge for providing adequate support for the multidisciplinary team at their meeting is outlined, given the multifaceted nature of the setting, i.e. patient management, educational, organizational and social functions, that need to be satisfied.
Recognizing outstanding achievements
Speiss, Fred
One function of any professional society is to provide an objective, informed means for recognizing outstanding achievements in its field. In AGU's Ocean Sciences section we have a variety of means for carrying out this duty. They include recognition of outstanding student presentations at our meetings, dedication of special sessions, nomination of individuals to be fellows of the Union, invitations to present Sverdrup lectures, and recommendations for Macelwane Medals, the Ocean Sciences Award, and the Ewing Medal.Since the decision to bestow these awards requires initiative and judgement by members of our section in addition to a deserving individual, it seems appropriate to review the selection process for each and to urge you to identify those deserving of recognition.
Bradburne, John; Patton, Tisha C.
2001-02-25
When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and ''business as usual'' has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.
Scenario analysis to vehicular emission reduction in Beijing-Tianjin-Hebei (BTH) region, China.
Guo, Xiurui; Fu, Liwei; Ji, Muse; Lang, Jianlei; Chen, Dongsheng; Cheng, Shuiyuan
2016-09-01
Motor vehicle emissions are increasingly becoming one of the important factors affecting urban air quality in China. It is necessary and useful for policy makers to know what would happen if the relevant pollutant reduction measures were taken. This paper predicted the reduction potentials of conventional pollutants (PM10, NOx, CO, HC) under different control strategies and policies in the Beijing-Tianjin-Hebei (BTH) region during 2011-2020. A baseline and 5 control scenarios were designed, representing current and possible future vehicular emission control measures. The future populations of different kinds of vehicles were predicted with the Gompertz model, and vehicle kilometers travelled were estimated as well. The emission reductions under the different scenarios during 2011-2020 were then estimated using emission factors and activity level data. The results showed that the vehicle population in the BTH region would continue to grow, especially in Tianjin and Hebei. Comparing the scenarios, updating emission standards would achieve a substantial and steadily increasing reduction for all pollutants, while eliminating high-emission vehicles would reduce emissions more effectively in the short term than in the long term, especially in Beijing. Owing to the constraints of the existing economic and technical level, the reduction effect of promoting new energy vehicles would not be significant, especially when their lifetime impact is considered. The reduction effect of the population regulation scenario in Beijing is not negligible and would keep increasing for PM10, CO and HC, though not for NOx. The integrated scenario combining all control measures would achieve the maximum emission reduction potential: 56%, 59%, 48% and 52% reductions of PM10, NOx, CO and HC, respectively, compared to the BAU scenario for the whole BTH region in 2020.
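The Gompertz model used above for vehicle-population projection is a saturating S-curve. The sketch below is illustrative only: the saturation level and shape parameters are hypothetical, not values fitted for the BTH region.

```python
import math

def gompertz(t, v_max, a, b):
    """Gompertz growth curve V(t) = v_max * exp(-a * exp(-b * t)).

    v_max is the saturation level; a and b control the timing and
    steepness of the S-shaped approach to saturation.
    """
    return v_max * math.exp(-a * math.exp(-b * t))

# Hypothetical parameters: saturation at 300 vehicles per 1000 people,
# projected over 20 years from a base year (t = 0).
projection = [gompertz(t, 300.0, 3.0, 0.15) for t in range(21)]
```

Fitting in practice would estimate a and b from historical ownership data (often against per-capita income); here they are simply assumed to show the curve's shape.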
Lymphedema Risk Reduction Practices
Position paper on lymphedema risk reduction practices, covering treatment-related risks including surgery, obesity, infection, blood pressure, trauma, and body weight.
Membraneless laminar flow cell for electrocatalytic CO2 reduction with liquid product separation
Monroe, Morgan M.; Lobaccaro, Peter; Lum, Yanwei; Ager, Joel W.
2017-04-01
The production of liquid fuel products via electrochemical reduction of CO2 is a potential path to produce sustainable fuels. However, to be practical, a separation strategy is required to isolate the fuel-containing electrolyte produced at the cathode from the anode and also prevent the oxidation products (i.e. O2) from reaching the cathode. Ion-conducting membranes have been applied in CO2 reduction reactors to achieve this separation, but they represent an efficiency loss and can be permeable to some product species. An alternative membraneless approach is developed here to maintain product separation through the use of a laminar flow cell. Computational modelling shows that near-unity separation efficiencies are possible at current densities achievable now with metal cathodes via optimization of the spacing between the electrodes and the electrolyte flow rate. Laminar flow reactor prototypes were fabricated with a range of channel widths by 3D printing. CO2 reduction to formic acid on Sn electrodes was used as the liquid product forming reaction, and the separation efficiency for the dissolved product was evaluated with high performance liquid chromatography. Trends in product separation efficiency with channel width and flow rate were in qualitative agreement with the model, but the separation efficiency was lower, with a maximum value of 90% achieved.
Achievement Goals and Achievement Emotions: A Meta-Analysis
Huang, Chiungjung
2011-01-01
This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…
Matsumoto, Atsushi; Hasegawa, Masaru; Matsui, Keiju
In this paper, a novel position sensorless control method for interior permanent magnet synchronous motors (IPMSMs) is proposed, based on a novel flux model suitable for maximum torque control. Maximum torque per ampere (MTPA) control is often used for driving IPMSMs at maximum efficiency. Implementing this control generally requires accurate parameters; however, the inductance varies dramatically because of magnetic saturation, which has been one of the most important problems in recent years. The conventional MTPA control method therefore fails to achieve maximum efficiency for IPMSMs because of parameter mismatches. This paper first proposes a novel flux model for position sensorless control of IPMSMs that is insensitive to Lq. It is then shown that the proposed flux model can approximately estimate the maximum torque control (MTC) frame, a new coordinate frame aligned with the current vector for MTPA control. Next, a precise estimation method for the MTC frame is proposed, by which highly accurate maximum torque control can be achieved. A decoupling control algorithm based on the proposed model is also addressed. Finally, experimental results demonstrate the feasibility and effectiveness of the proposed method.
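The MTPA operating point referred to above has a standard textbook form (not specific to this paper's flux-model method): minimising current magnitude for a given torque yields the condition (Lq - Ld)·id² - ψ·id - (Lq - Ld)·iq² = 0. A sketch with hypothetical machine parameters:

```python
import math

def mtpa_id(iq, psi, ld, lq):
    """d-axis current on the standard MTPA trajectory, i.e. the root of
    (Lq - Ld) * id^2 - psi * id - (Lq - Ld) * iq^2 = 0,
    taking the negative branch (id <= 0 for an IPMSM with Lq > Ld)."""
    dl = lq - ld
    return (psi - math.sqrt(psi**2 + 4.0 * dl**2 * iq**2)) / (2.0 * dl)

# Hypothetical machine: psi = 0.1 Wb, Ld = 3 mH, Lq = 6 mH.
i_d = mtpa_id(10.0, 0.1, 3e-3, 6e-3)
```

The paper's point is that this trajectory shifts when the inductances saturate, which is why a parameter-insensitive estimate of the MTC frame is useful.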
Sulfate reduction in freshwater peatlands
Oequist, M.
1996-12-31
This text consists of two parts: Part A is a literature review on microbial sulfate reduction with emphasis on freshwater peatlands, and part B presents the results from a study of the relative importance of sulfate reduction and methane formation for the anaerobic decomposition in a boreal peatland. The relative importance of sulfate reduction and methane production for the anaerobic decomposition was studied in a small raised bog situated in the boreal zone of southern Sweden. Depth distributions of sulfate reduction and methane production rates were measured in peat sampled from three sites (A, B, and C) forming a minerotrophic-ombrotrophic gradient. SO{sub 4}{sup 2-} concentrations in the three profiles were of equal magnitude and ranged from 50 to 150 {mu}M. In contrast, rates of sulfate reduction were vastly different: Maximum rates in the three profiles were obtained at a depth of ca. 20 cm below the water table. In A it was 8 {mu}M h{sup -1} while in B and C they were 1 and 0.05 {mu}M h{sup -1}, respectively. Methane production rates, however, were more uniform across the three nutrient regimes. Maximum rates in A (ca. 1.5 {mu}g d{sup -1} g{sup -1}) were found 10 cm below the water table, in B (ca. 1.0 {mu}g d{sup -1} g{sup -1}) in the vicinity of the water table, and in C (0.75 {mu}g d{sup -1} g{sup -1}) 20 cm below the water table. In all profiles both sulfate reduction and methane production rates were negligible above the water table. The areal estimates of methane production for the profiles were 22.4, 9.0 and 6.4 mmol m{sup -2} d{sup -1}, while the estimates for sulfate reduction were 26.4, 2.5, and 0.1 mmol m{sup -2} d{sup -1}, respectively. The calculated turnover times at the sites were 1.2, 14.2, and 198.7 days, respectively. The study shows that sulfate reducing bacteria are important for the anaerobic degradation in the studied peatland, especially in the minerotrophic sites, while methanogenic bacteria dominate in ombrotrophic sites.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where ``medical history`` means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a pair of run-length/AC coefficient level, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper-bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type constraints are based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, space of minimum of 346 bits and maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques an approximation is obtained to the bias in variance estimation yielding a bias corrected variance estimator. This is achieved for both the standard
The maximum efficiency of nano heat engines depends on more than temperature
Woods, Mischa; Ng, Nelly; Wehner, Stephanie
Sadi Carnot's theorem regarding the maximum efficiency of heat engines is considered to be of fundamental importance in the theory of heat engines and thermodynamics. Here, we show that at the nano and quantum scale, this law needs to be revised in the sense that more information about the bath than its temperature is required to decide whether maximum efficiency can be achieved. In particular, we derive new fundamental limitations on the efficiency of heat engines at the nano and quantum scale, showing that the Carnot efficiency can only be achieved under special circumstances, and we derive a new maximum efficiency for the others. A preprint can be found at arXiv:1506.02322 [quant-ph]. Supported by Singapore's MOE Tier 3A Grant and STW, Netherlands.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Vietnam: achievements and challenges.
Tran Tien Duc
1999-01-01
The Vietnamese Government's successful development of the National Population and Family Planning Program has contributed to raising people's awareness of population issues and to changing their attitudes and behavior regarding fostering small families. It has also been found to be very effective in substantially decreasing the fertility level. In addition, the economic level of many households has greatly improved since the adoption of a renovation policy. The advancement of welfare, accompanied by the provision of better basic social services, including health services, has boosted people's health. Several factors behind the achievements of the National Population and Family Planning Program include: 1) strengthening of the political commitment of national and local leaders; 2) nationwide mobilization of mass organizations and NGOs; 3) a strong advocacy and information, education and communication program; 4) provision of various kinds of contraceptives; 5) effective management of the program by priority; and 6) support of the international community. Despite such successes, Vietnam is facing a number of new issues such as enlargement of the work force, shifting migration patterns and accelerating urbanization, the aging of the population, and changes in household structure. Nevertheless, the Government of Vietnam is preparing a New Population Strategy aimed at addressing these issues.
Relate the earthquake parameters to the maximum tsunami runup
Sharghivand, Naeimeh; Kânoǧlu, Utku
2016-04-01
relate earthquake parameters to the maximum runup. Further, we also present the effect of earthquake parameters on the focusing phenomena, which is introduced by Kanoglu et al. (2013). Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM), are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.).
Achieving Millennium Development Goals 4 and 5 in Bangladesh.
Chowdhury, S; Banu, L A; Chowdhury, T A; Rubayet, S; Khatoon, S
2011-09-01
Bangladesh has made commendable progress in achieving Millennium Development Goals (MDGs) 4 and 5. Since 1990, there has been a remarkable reduction in maternal and child mortality, with an estimated 57% reduction in child mortality and 66% in maternal mortality. This review highlights that, whereas Bangladesh is on track for achieving MDG 4 and 5A, progress in universal access to reproductive health (5B) is not yet at the required pace to achieve the targets set for 2015. In addition, Bangladesh needs to further enhance activities to improve newborn health and promote skilled attendance at birth. © 2011 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2011 RCOG.
Shirai; Setsuko
2009-01-01
This paper reports that vowel reduction occurs in Japanese and that vowel reduction is part of language universality. Compared with English, the effect of vowel reduction in Japanese is relatively weak, possibly because of the absence of stress in Japanese. Since spectral vowel reduction occurs in Japanese, various types of research become possible.
Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies
Ketema, Jeroen; Simonsen, Jakob Grue
2010-01-01
We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in fi
Maximum entropy models of ecosystem functioning
Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
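As a minimal illustration of the MaxEnt algorithm discussed here (a generic sketch, not the savanna model itself): maximising entropy over a discrete set of states subject to a fixed mean yields a Gibbs-form distribution, whose Lagrange multiplier can be found by bisection on the mean constraint.

```python
import math

def maxent_distribution(values, target_mean, lam_lo=-50.0, lam_hi=50.0):
    """MaxEnt distribution over discrete `values` with a fixed mean.

    The maximiser has the Gibbs form p_i proportional to exp(-lam * x_i);
    the multiplier `lam` is found by bisection, using the fact that the
    resulting mean decreases monotonically as lam increases.
    """
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if mean_for(mid) > target_mean:
            lam_lo = mid  # mean too high: increase lam
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

With no constraint beyond normalisation (target mean equal to the unweighted average), the result is the uniform distribution, the familiar maximum-entropy baseline.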
Recommended Maximum Temperature For Mars Returned Samples
Beaty, D. W.; McSween, H. Y.; Czaja, A. D.; Goreva, Y. S.; Hausrath, E.; Herd, C. D. K.; Humayun, M.; McCubbin, F. M.; McLennan, S. M.; Hays, L. E.
2016-01-01
The Returned Sample Science Board (RSSB) was established in 2015 by NASA to provide expertise from the planetary sample community to the Mars 2020 Project. The RSSB's first task was to address the effect of heating during acquisition and storage of samples on scientific investigations that could be expected to be conducted if the samples are returned to Earth. Sample heating may cause changes that could adversely affect scientific investigations. Previous studies of temperature requirements for returned martian samples fall within a wide range (-73 to 50 degrees Centigrade) and, for mission concepts that have a life detection component, the recommended threshold was less than or equal to -20 degrees Centigrade. The RSSB was asked by the Mars 2020 project to determine whether or not a temperature requirement was needed within the range of 30 to 70 degrees Centigrade. There are eight expected temperature regimes to which the samples could be exposed, from the moment that they are drilled until they are placed into a temperature-controlled environment on Earth. Two of those - heating during sample acquisition (drilling) and heating while cached on the Martian surface - potentially subject samples to the highest temperatures. The RSSB focused on the upper temperature limit that Mars samples should be allowed to reach. We considered 11 scientific investigations where thermal excursions may have an adverse effect on the science outcome. Those are: (T-1) organic geochemistry, (T-2) stable isotope geochemistry, (T-3) prevention of mineral hydration/dehydration and phase transformation, (T-4) retention of water, (T-5) characterization of amorphous materials, (T-6) putative Martian organisms, (T-7) oxidation/reduction reactions, (T-8) {sup 4}He thermochronometry, (T-9) radiometric dating using fission, cosmic-ray or solar-flare tracks, (T-10) analyses of trapped gasses, and (T-11) magnetic studies.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
(none listed)
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Subha, B; Muthukumar, M
2012-01-01
Sago industry effluent, containing large amounts of organic matter, produces excess sludge, which is a serious problem in wastewater treatment. In this study, ozonation was employed for the reduction of excess sludge production in the activated sludge process. A central composite design was used to study the effect of ozone treatment on the reduction of excess sludge production in sago effluent and to optimise variables such as pH, ozonation time, and retention time. ANOVA showed that the coefficients of determination (R(2)) for VSS and COD reduction were 0.9689 and 0.8838, respectively. VSS reduction (81%) was achieved at acidic pH 6.9, 12 minutes of ozonation, and a retention time of 10 days. COD reduction (87%) was achieved at acidic pH 6.7, 8 minutes of ozonation time, and a retention time of 6 days. Low ozonation time and high retention time favour maximum sludge reduction, whereas low ozonation time with low retention time was effective for COD reduction.
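The fit quality above is reported through the coefficient of determination. As a reminder of what that figure measures, here is a minimal sketch of computing R² for a fitted response model; the data values are invented for illustration, not taken from the study:

```python
# Hedged sketch: R^2 = 1 - SS_res / SS_tot, the statistic the ANOVA reports
# for the VSS and COD response-surface fits. Values below are illustrative.

def r_squared(observed, predicted):
    """Coefficient of determination for a set of observed/predicted values."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)   # total variation
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

observed = [78.0, 81.0, 75.0, 80.0, 79.0]    # e.g. measured % VSS reduction
predicted = [77.5, 80.2, 75.8, 79.6, 79.4]   # model predictions
print(round(r_squared(observed, predicted), 4))  # → 0.9127
```

An R² near 1, as in the study's 0.9689 for VSS, means the design variables explain almost all of the observed variation.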
Eng, T Y; Patel, A J; Ha, C S
2016-01-01
The use of intravaginal Foley balloons in addition to conventional packing during high-dose-rate (HDR) tandem and ovoids intracavitary brachytherapy (ICBT) is a means to improve displacement of organs at risk, thus reducing dose-dependent complications. The goal of this project was to determine the reduction in dose achieved to the bladder and rectum with intravaginal Foley balloons with CT-based planning and to share our packing technique. One hundred and six HDR-ICBT procedures performed for 38 patients were analyzed for this report. An uninflated Foley balloon was inserted into the vagina above and below the tandem flange separately and secured in place with vaginal packing. CT images were then obtained with both inflated and deflated Foley balloons. Plan optimization occurred and dose volume histogram data were generated for the bladder and rectum. Maximum dose to 0.1, 1.0, and 2.0 cm(3) volumes for the rectum and bladder were analyzed and compared between inflated and deflated balloons using parametric statistical analysis. Inflation of intravaginal balloons allowed significant reduction of dose to the bladder and rectum. Amount of reduction was dependent on the anatomy of the patient and the placement of the balloons. Displacement of the organs at risk by the balloons allowed an average of 7.2% reduction in dose to the bladder (D0.1 cm(3)) and 9.3% to the rectum (D0.1 cm(3)) with a maximum reduction of 41% and 43%, respectively. For patients undergoing HDR-ICBT, a significant dose reduction to the bladder and rectum could be achieved with further displacement of these structures using intravaginal Foley balloons in addition to conventional vaginal packing. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Impacts of emission reductions on aerosol radiative effects
J.-P. Pietikäinen
2015-05-01
The global aerosol–climate model ECHAM-HAMMOZ was used to investigate changes in the aerosol burden and aerosol radiative effects in the coming decades. Four different emissions scenarios were applied for 2030 (two of them also applied for 2020) and the results were compared against the reference year 2005. Two of the scenarios are based on current legislation reductions: one shows the maximum potential of reductions that can be achieved by technical measures, and the other is targeted at short-lived climate forcers (SLCFs). We have analyzed the results in terms of global means and additionally focused on eight subregions. Based on our results, aerosol burdens show an overall decreasing trend as they basically follow the changes in primary and precursor emissions. However, in some locations, such as India, the burdens could increase significantly. The declining emissions have an impact on the clear-sky direct aerosol effect (DRE), i.e. the cooling effect. The DRE could decrease globally by 0.06–0.4 W m−2 by 2030, with some regional increases, for example over India (up to 0.84 W m−2). The global changes in the DRE depend on the scenario and are smallest in the targeted SLCF simulation. The aerosol indirect radiative effect could decline by 0.25–0.82 W m−2 by 2030. This decrease takes place mostly over the oceans, whereas the DRE changes are greatest over the continents. Our results show that targeted emission reduction measures can be a much better choice for the climate than overall high reductions globally. Our simulations also suggest that more than half of the near-future forcing change is due to the radiative effects associated with aerosol–cloud interactions.
ACHIEVING OPTIMAL SCHOOL CLIMATE
Nizar SHIHADI
2015-11-01
Development of an optimal school climate is the basis of educational, social and moral work in school. An optimal educational climate in a school is a condition for the learning and development of all those attending the educational establishment (pupils, teachers and parents). The school is responsible for the personal, cognitive, emotional, social and moral development of pupils. The educational team has the ability and commitment to promote an educational climate. Improvement of pupils' study achievements is related to, and conditional on, an optimal climate. "A climate in an educational establishment is a key factor that affects the creation of an environment which develops personal security and a sense of affiliation, value and mutual respect" [12].
Lecomte, J.; Juillet, J. J.
2016-12-01
days). During the exploration the Rover will use the TGO-2016 for communications with Earth. This paper outlines the ExoMars 2016 mission design and its first in-flight achievements and performance results, and provides a description of the major design drivers of the 2020 mission, with a view to highlighting lessons learnt that must be considered for future mission design.
Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement
Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina
2015-01-01
The purpose of this research is to identify the difference in students' attitudes towards Physics and Additional Mathematics achievement based on gender, and the relationship between attitudinal variables towards Physics and Additional Mathematics achievement and achievement in Physics. This research focused on six variables, which are attitude towards…
Climate Leadership Award for Excellence in GHG Management (Goal Achievement Award)
Apply to the Climate Leadership Award for Excellence in GHG Management (Goal Achievement Award), which publicly recognizes organizations that achieve publicly-set aggressive greenhouse gas emissions reduction goals.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Rosaler, Joshua
2015-05-01
A conventional wisdom about the progress of physics holds that successive theories wholly encompass the domains of their predecessors through a process that is often called "reduction." While certain influential accounts of inter-theory reduction in physics take reduction to require a single "global" derivation of one theory's laws from those of another, I show that global reductions are not available in all cases where the conventional wisdom requires reduction to hold. However, I argue that a weaker "local" form of reduction, which defines reduction between theories in terms of a more fundamental notion of reduction between models of a single fixed system, is available in such cases and moreover suffices to uphold the conventional wisdom. To illustrate the sort of fixed-system, inter-model reduction that grounds inter-theoretic reduction on this picture, I specialize to a particular class of cases in which both models are dynamical systems. I show that reduction in these cases is underwritten by a mathematical relationship that follows a certain liberalized construal of Nagel/Schaffner reduction, and support this claim with several examples. Moreover, I show that this broadly Nagelian analysis of inter-model reduction encompasses several cases that are sometimes cited as instances of the "physicist's" limit-based notion of reduction.
Dimova, Slobodanka; Jensen, Christian
2013-01-01
This study represents an initial exploration of raters' comments and actual realisations of form reductions in L2 test speech performances. Performances of three L2 speakers were selected as case studies and illustrations of how reductions are evaluated by the raters. The analysis is based on audio/video recorded speech samples and written reports produced by two experienced raters after testing. Our findings suggest that reduction or reduction-like pronunciation features are found in tested L2 speech, but whenever raters identify and comment on such reductions, they tend to assess reductions negatively.
Ultrasound-Assisted Distal Radius Fracture Reduction
Socransky, Steve; Skinner, Andrew; Bromley, Mark; Smith, Andrew; Anawati, Alexandre; Middaugh, Jeff; Ross, Peter
2016-01-01
Introduction Closed reduction of distal radius fractures (CRDRF) is a commonly performed emergency department (ED) procedure. The use of point-of-care ultrasound (PoCUS) to diagnose fractures and guide reduction has previously been described. The primary objective of this study was to determine if the addition of PoCUS to CRDRF changed the perception of successful initial reduction. This was measured by the rate of further reduction attempts based on PoCUS following the initial clinical determination of achievement of best possible reduction. Methods We performed a multicenter prospective cohort study, using a convenience sample of adult ED patients presenting with a distal radius fracture to five Canadian EDs. All study physicians underwent standardized PoCUS training for fractures. Standard clinically-guided best possible fracture reduction was initially performed. PoCUS was then used to assess the reduction adequacy. Repeat reduction was performed if deemed indicated. A post-reduction radiograph was then performed. Clinician impression of reduction adequacy was scored on a 5 point Likert scale following the initial clinically-guided reduction and following each PoCUS scan and the post-reduction radiograph. Results There were 131 patients with 132 distal radius fractures. Twelve cases were excluded prior to analysis. There was no significant difference in the assessment of the initial reduction status by PoCUS as compared to the clinical exam (mean score: 3.8 vs. 3.9; p = 0.370; OR 0.89; 95% CI 0.46 to 1.72; p = 0.87). Significantly fewer cases fell into the uncertain category with PoCUS than with clinical assessment (2 vs 12; p = 0.008). Repeat reduction was performed in 49 patients (41.2%). Repeat reduction led to a significant improvement (p < 0.001) in the PoCUS determined adequacy of reduction (mean score: 4.3 vs 3.1; p < 0.001). In this group, the odds ratio for adequate vs. uncertain or inadequate reduction assessment using PoCUS was 12.5 (95% CI 3
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
Nicodem James Govella
2012-06-01
By definition, elimination of malaria means the permanent reduction to zero of the local incidence of infections. Achieving this goal requires, among other things, a full understanding of where and when people are most exposed to malaria vectors, as this is fundamental for targeting interventions to achieve maximum impact. While elimination may be possible in some settings with low malaria transmission intensity dominated by late-night, indoor-biting vectors, using long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS), it is difficult and perhaps impossible in areas with high transmission where the majority of human exposure occurs outside human dwellings. Recently, in response to the widespread use of LLINs and IRS, human risk of exposure to transmission has increasingly spread across the entire night, so that much of it occurs outdoors and before bed time. This modification of vector populations and behaviour has now been reported from across Africa, Asia and the Solomon Islands. Historical evidence shows that even in areas with intervention coverage exceeding 90% of the human population, it was hard to push prevalence below the pre-elimination threshold of 1%, compromised mainly by outdoor residual transmission. Malaria control experts must continue to deliver interventions that tackle indoor transmission, but considerable resources targeting mosquitoes outside of houses and outside of sleeping hours will be required to sustain and go beyond existing levels of malaria control and achieve elimination.
Investigation of Maximum Power Point Tracking for Thermoelectric Generators
Phillip, Navneesh; Maganga, Othman; Burnham, Keith J.; Ellis, Mark A.; Robinson, Simon; Dunn, Julian; Rouaud, Cedric
2013-07-01
In this paper, a thermoelectric generator (TEG) model is developed as a tool for investigating optimized maximum power point tracking (MPPT) algorithms for TEG systems within automotive exhaust heat energy recovery applications. The model comprises three main subsystems that make up the TEG system: the heat exchanger, thermoelectric material, and power conditioning unit (PCU). In this study, two MPPT algorithms known as the perturb and observe (P&O) algorithm and extremum seeking control (ESC) are investigated. A synchronous buck-boost converter is implemented as the preferred DC-DC converter topology, and together with the MPPT algorithm completes the PCU architecture. The process of developing the subsystems is discussed, and the advantage of using the MPPT controller is demonstrated. The simulation results demonstrate that the ESC algorithm implemented in combination with a synchronous buck-boost converter achieves favorable power outputs for TEG systems. This advantage stems from its greater responsiveness to changes in the system's thermal conditions, and hence in the electrical potential difference generated, compared with the P&O algorithm. The MATLAB/Simulink environment is used for simulation of the TEG system and comparison of the investigated control strategies.
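For readers unfamiliar with the P&O baseline that ESC is compared against, a minimal sketch follows. It assumes a toy TEG model (an ideal source with internal resistance), not the paper's MATLAB/Simulink model, so the true maximum power point sits at half the open-circuit voltage:

```python
# Hedged sketch of the perturb-and-observe (P&O) MPPT loop. The TEG model
# here (ideal voltage source + internal resistance) is a simplifying
# assumption for illustration only.

V_OC, R_INT = 8.0, 1.0   # open-circuit voltage [V], internal resistance [ohm]

def teg_power(v_load):
    """Power delivered at a given load voltage for the simple TEG model."""
    i = (V_OC - v_load) / R_INT
    return v_load * i

def perturb_and_observe(v0=1.0, step=0.05, iters=500):
    v, p = v0, teg_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = teg_power(v_new)
        if p_new < p:            # power fell: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 2))  # oscillates around V_OC / 2 = 4.0 V
```

The sketch also shows P&O's characteristic weakness mentioned in the paper's comparison: at steady state it keeps perturbing, oscillating around the maximum rather than settling on it.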
Maximum covariance analysis to identify intraseasonal oscillations over tropical Brazil
Barreto, Naurinete J. C.; Mesquita, Michel d. S.; Mendes, David; Spyrides, Maria H. C.; Pedra, George U.; Lucio, Paulo S.
2017-09-01
A reliable prognosis of extreme precipitation events in the tropics is arguably challenging to obtain due to the interaction of meteorological systems at various time scales. A pivotal component of the global climate variability is the so-called intraseasonal oscillations, phenomena that occur between 20 and 100 days. The Madden-Julian Oscillation (MJO), which is directly related to the modulation of convective precipitation in the equatorial belt, is considered the primary oscillation in the tropical region. The aim of this study is to diagnose the connection between the MJO signal and the regional intraseasonal rainfall variability over tropical Brazil. This is achieved through the development of an index called the Multivariate Intraseasonal Index for Tropical Brazil (MITB). This index is based on Maximum Covariance Analysis (MCA) applied to the filtered daily anomalies of rainfall data over tropical Brazil against a group of covariates consisting of: outgoing longwave radiation and the zonal component u of the wind at 850 and 200 hPa. The first two MCA modes, which were used to create the MITB_1 and MITB_2 indices, represent 65 and 16% of the explained variance, respectively. The combined multivariate index was able to satisfactorily represent the pattern of intraseasonal variability over tropical Brazil, showing that there are periods of activation and inhibition of precipitation connected with the pattern of MJO propagation. The MITB index could potentially be used as a diagnostic tool for intraseasonal forecasting.
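The MCA machinery behind such an index can be sketched in a few lines: the leading pair of singular vectors of the cross-covariance matrix between two anomaly fields defines the mode pair that maximises their covariance. The synthetic fields below stand in for the rainfall and OLR/wind anomalies and are purely illustrative:

```python
# Hedged sketch of Maximum Covariance Analysis (MCA) via SVD of the
# cross-covariance matrix. Data are synthetic, not the study's fields.
import numpy as np

rng = np.random.default_rng(0)
n_time, nx, ny = 200, 6, 5

# Two fields sharing one coupled mode (time series t) plus weak noise.
t = rng.standard_normal(n_time)
px, py = rng.standard_normal(nx), rng.standard_normal(ny)
X = np.outer(t, px) + 0.1 * rng.standard_normal((n_time, nx))
Y = np.outer(t, py) + 0.1 * rng.standard_normal((n_time, ny))

# Remove time means, form the cross-covariance matrix, take its SVD.
Xa, Ya = X - X.mean(0), Y - Y.mean(0)
C = Xa.T @ Ya / (n_time - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared-covariance fraction of the leading mode (analogous to the
# explained-variance figures quoted for MITB_1 and MITB_2).
scf1 = s[0] ** 2 / np.sum(s ** 2)
print(round(float(scf1), 3))  # close to 1 for a single dominant coupled mode
```

Projecting the anomaly fields onto the leading singular vectors (`Xa @ U[:, 0]`, `Ya @ Vt[0]`) yields the paired expansion-coefficient time series from which an index of this kind is built.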
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
Nix, D.A.; Hogden, J.E.
1998-12-01
The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete ''hidden'' space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set not used to train MALCOM nor the predictor. On average, this unsupervised model achieves 92% of performance obtained using the corresponding supervised method.
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼10^10 M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10^4 M⊙) hosted in small isolated halos (Mh ≲ 10^9 M⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–10^6 M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Approximate Maximum Likelihood Commercial Bank Loan Management Model
Godwin N.O. Asemota
2009-01-01
Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enabled us to adaptively control loan demand as well as fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.
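The adaptive-identification idea behind an "approximate maximum likelihood with forgetting factor" scheme can be illustrated with a standard recursive least-squares update that discounts old data. Holding the factor constant and using synthetic, noiseless loan-demand data are simplifying assumptions of this sketch, not features of the paper's algorithm:

```python
# Hedged sketch: recursive least squares with a (constant) forgetting factor,
# tracking a drifting scalar relationship y ~ theta * x. Data are synthetic.

def rls_forgetting(xs, ys, lam=0.9):
    """Estimate theta in y = theta*x, discounting old data by lam each step."""
    theta, p = 0.0, 1000.0                 # estimate and its "covariance"
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * x * p)      # gain
        theta += k * (y - theta * x)       # correct with prediction error
        p = (p - k * x * p) / lam          # covariance update with forgetting
    return theta

# True slope drifts from 2.0 to 3.0 halfway through; the estimator follows
# because the forgetting factor down-weights the pre-drift data.
xs = [1.0 + 0.1 * (i % 5) for i in range(200)]
ys = [(2.0 if i < 100 else 3.0) * x for i, x in enumerate(xs)]
print(round(rls_forgetting(xs, ys), 2))
```

With `lam = 1` this reduces to ordinary recursive least squares, which would average the two regimes instead of tracking the drift; that is the role uncertainty-aware discounting plays in an adaptive loan-demand model.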
Maximum length sequence and Bessel diffusers using active technologies
Cox, Trevor J.; Avis, Mark R.; Xiao, Lejun
2006-02-01
Active technologies can enable room acoustic diffusers to operate over a wider bandwidth than passive devices, by extending the bass response. Active impedance control can be used to generate surface impedance distributions which cause wavefront dispersion, as opposed to the more normal absorptive or pressure-cancelling target functions. This paper details the development of two new types of active diffusers which are difficult, if not impossible, to make as passive wide-band structures. The first type is a maximum length sequence diffuser where the well depths are designed to be frequency dependent to avoid the critical frequencies present in the passive device, and so achieve performance over a finite bandwidth. The second is a Bessel diffuser, which exploits concepts developed for transducer arrays to form a hybrid absorber-diffuser. Details of the designs are given, and measurements of scattering and impedance used to show that the active diffusers are operating correctly over a bandwidth of about 100 Hz to 1.1 kHz. Boundary element method simulation is used to show how more application-realistic arrays of these devices would behave.
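The maximum length sequence that gives the first diffuser its name is generated by a linear feedback shift register with a primitive feedback polynomial. A minimal sketch follows; the degree-4 polynomial x⁴ + x + 1 is a standard textbook choice for illustration, not necessarily the sequence order used in the diffuser:

```python
# Hedged sketch: maximum length sequence (MLS) generation with a Fibonacci
# LFSR. A degree-m register with a primitive polynomial emits a +/-1
# sequence of period 2^m - 1; in the passive diffuser the sequence pattern
# sets the well depths.

def mls(m=4, taps=(4, 1)):
    """MLS of period 2**m - 1; taps encode the primitive polynomial x^4+x+1."""
    state = [1] * m                  # any non-zero seed works
    period = 2 ** m - 1
    seq = []
    for _ in range(period):
        seq.append(1 if state[-1] else -1)   # output the last stage as +/-1
        fb = 0
        for t in taps:                        # feedback = XOR of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]             # shift the new bit in
    return seq

s = mls()
print(len(s), sum(s))  # → 15 1  (period 2^4-1; +/-1 values almost balance)
```

The near-zero sum reflects the flat power spectrum of an MLS, which is what gives the passive diffuser its uniform scattering between its critical frequencies.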
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
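The Delogne-Kåsa estimator compared above reduces circle fitting to linear least squares by rewriting the circle equation as x² + y² = 2ax + 2by + c, with centre (a, b) and radius √(c + a² + b²). A minimal sketch, using noiseless illustrative points rather than image data:

```python
# Hedged sketch of the Delogne-Kasa circle estimator (DKE): a linear
# least-squares fit, not the convolution-based MLE the paper develops.
import numpy as np

def kasa_fit(pts):
    """Fit a circle to 2-D points; returns centre (a, b) and radius r."""
    pts = np.asarray(pts, float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    d = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, d, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r

# Noiseless points on a circle of centre (1, 2), radius 3.
ts = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([1 + 3 * np.cos(ts), 2 + 3 * np.sin(ts)])
centre, radius = kasa_fit(pts)
print(np.round(centre, 3), round(float(radius), 3))  # recovers (1, 2) and 3.0
```

Its closed-form speed is why the paper treats estimates like these as preliminary values to be refined towards the MLE for subpixel accuracy.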
Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs
Long Wan
2015-01-01
This paper investigates a single-machine two-agent scheduling problem to minimize the maximum costs with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.
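The problem setting can be illustrated by brute force: enumerate all sequences of the two agents' jobs on the one machine and keep the non-dominated cost pairs. This sketch only illustrates what the Pareto set is; the job data, the position-dependence function, and the use of completion time as each agent's "maximum cost" are all invented assumptions, and the paper's own polynomial-time method is not reproduced here:

```python
# Hedged sketch: brute-force Pareto set for a tiny two-agent instance.
from itertools import permutations

# (agent, base processing time); actual time at position r is base * r**0.1,
# an assumed position-dependence for illustration.
jobs = [("A", 2.0), ("A", 3.0), ("B", 1.0), ("B", 4.0)]

def agent_costs(seq):
    """Each agent's cost = latest completion time among its own jobs."""
    t, cost = 0.0, {"A": 0.0, "B": 0.0}
    for r, (agent, base) in enumerate(seq, start=1):
        t += base * r ** 0.1                  # position-dependent time
        cost[agent] = max(cost[agent], t)
    return cost["A"], cost["B"]

points = {agent_costs(seq) for seq in permutations(jobs)}
# Keep the points no other point dominates in both coordinates.
pareto = sorted(p for p in points
                if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                           for q in points))
print(pareto)
```

Each Pareto point is a schedule where neither agent's maximum cost can be lowered without raising the other's, which is exactly the trade-off frontier the paper computes in polynomial time.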
Air conditioners and summer maximum electricity consumption in the PD ED Belgrade
Vrcelj Nada
2011-01-01
The paper presents an analysis of the impact of air-conditioner consumption in the form of a daily consumption diagram, as well as of its impact on the maximum power consumption achieved during the summer period. Three cases were observed, involving 10 kV cables that mainly supply: the commercial sector, a residential area that does not use electricity for heating, and a residential area that uses electricity for winter heating. At the same time, the winter maximum in each of the observed cases, as well as the possibility of exceeding the allowable current loads on the routes of the monitored 10 kV cables, is analyzed.
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and of the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 228.23 - Priority of reductions.
2010-04-01
... SURVIVOR ANNUITIES The Tier I Annuity Component § 228.23 Priority of reductions. The tier I component of the survivor annuity is first reduced by the family maximum, if applicable, then any applicable age reduction, then by any public pension offset, then by any social security benefit payable, then by the...
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Shilpa Dilipkumar
2015-03-01
An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, like selective plane illumination microscopy, localization microscopy and STED.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Vitaliy V. Chaban
2014-12-01
This research analyzes the dynamic loads generated in a glove machine during the reciprocating motion of the knitting and intermediate carriages. A method is proposed for determining the maximum dynamic loads in the drive of the glove machine carriages. It is noted that a reduction of the dynamic loads can be achieved by equipping the drive with energy accumulation and compensation units, for which cylindrical compression springs are an expedient choice. The dependence obtained makes it possible to determine the necessary stiffness of the compression springs (the energy accumulating and compensating units) at which the dynamic loads due to the inertia of the carriage masses can be almost completely eliminated.
Simakov, Evgenya I.; Kurennoy, Sergey S.; O'Hara, James F.; Olivas, Eric R.; Shchegolkov, Dmitry Yu.
2014-02-01
We present a design of a superconducting rf photonic band gap (SRF PBG) accelerator cell with specially shaped rods in order to reduce peak surface magnetic fields and improve the effectiveness of the PBG structure for suppression of higher order modes (HOMs). The ability of PBG structures to suppress long-range wakefields is especially beneficial for superconducting electron accelerators for high power free-electron lasers (FELs), which are designed to provide high current continuous duty electron beams. Using PBG structures to reduce the prominent beam-breakup phenomena due to HOMs will allow significantly increased beam-breakup thresholds. As a result, there will be possibilities for increasing the operation frequency of SRF accelerators and for the development of novel compact high-current accelerator modules for the FELs.
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the maximum achievable detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the ranging performance of Geiger-mode laser radar. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming the primary electrons triggered by the echo photons obey Poisson statistics, the maximum range theoretical model is established. Using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a higher atmospheric attenuation coefficient, and higher target reflectivity can result in greater maximum detection range. It is also shown that it is important to select the minimum acceptable detection probability, which is equivalent to the system signal-to-noise ratio, for producing greater maximum detection range and lower false-alarm probability.
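The abstract stops short of giving the model itself; the following is a minimal, hypothetical sketch of a Poisson-statistics detection model of the kind it describes. The lidar-equation scale factor `k`, all parameter values, and the simple outward range scan are illustrative assumptions, not values from the paper.

```python
import math

def detection_probability(n_signal, n_noise=0.0):
    # Poisson-distributed primary electrons: the Geiger-mode APD fires
    # if at least one electron is generated within the range gate.
    return 1.0 - math.exp(-(n_signal + n_noise))

def mean_signal_electrons(pulse_energy, reflectivity, rng_m, alpha, k=1e9):
    # Simplified laser radar equation: received energy falls off as 1/R^2
    # and is attenuated by exp(-2*alpha*R) over the two-way path.
    # k lumps together aperture, efficiency and photon energy (assumed value).
    return k * pulse_energy * reflectivity * math.exp(-2 * alpha * rng_m) / rng_m ** 2

def max_detection_range(pulse_energy, reflectivity, alpha, p_min, r_step=10.0):
    # Scan outward until the detection probability drops below the
    # minimum acceptable value p_min.
    r = r_step
    while detection_probability(
            mean_signal_electrons(pulse_energy, reflectivity, r, alpha)) >= p_min:
        r += r_step
    return r - r_step

# In this toy model, stronger pulses extend the maximum range:
print(max_detection_range(1.0, 0.3, 1e-4, 0.9))
print(max_detection_range(2.0, 0.3, 1e-4, 0.9))
```

The scan step trades accuracy for speed; a bisection on the (monotone) detection probability would be the obvious refinement.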
Understanding the Role of Reservoir Size on Probable Maximum Precipitation
Woldemichael, A. T.; Hossain, F.
2011-12-01
This study addresses the question 'Does the surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was based on the notion that the stationarity assumption that is implicit in the PMP for dam design can be undermined in the post-dam era due to an enhancement of extreme precipitation patterns by an artificial reservoir. In addition, the study lays the foundation for the use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams to formulate best management practices. The American River Watershed (ARW), with the Folsom dam at the confluence of the American River, was selected as the study region and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, RAMS was calibrated and validated with selected station and spatially interpolated precipitation data. The best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from the ground up to the 500 mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios, ranging from no dam (all dry) to a reservoir submerging half of the basin, were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity that is implicit in the traditional estimation of PMP can be rendered invalid in large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give an indication of the
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
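The abstract does not spell out its linear-time isomorphism check; as concrete background, the classic way to test rooted-tree isomorphism is a canonical (AHU-style) encoding, sketched below. This handles unlabeled rooted trees only; the MAST/MCT setting additionally tracks taxon labels. The adjacency-dict representation is an assumption chosen for illustration.

```python
def canonical_form(tree, root):
    # AHU-style canonical encoding of a rooted tree, given as an adjacency
    # dict {node: [children, ...]}: two rooted trees are isomorphic iff
    # their canonical strings are equal.
    children = tree.get(root, [])
    return "(" + "".join(sorted(canonical_form(tree, c) for c in children)) + ")"

# Two rooted trees on different label sets but with the same shape:
t1 = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}
t2 = {"x": ["y", "z"], "z": ["w"], "y": [], "w": []}
print(canonical_form(t1, "r") == canonical_form(t2, "x"))  # prints True
```

Sorting the child encodings at each node is what makes the string independent of child order, which is exactly the freedom isomorphism allows.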
Andersson, Pher G
2008-01-01
With its comprehensive overview of modern reduction methods, this book features high-quality contributions allowing readers to find reliable solutions quickly and easily. The monograph treats the reduction of carbonyls, alkenes, imines and alkynes, as well as reductive aminations and cross- and Heck couplings, before finishing off with sections on kinetic resolutions and hydrogenolysis. An indispensable lab companion for every chemist.
Designing Tone Reservation PAR Reduction
Johansson Albin
2006-01-01
Tone reservation peak-to-average ratio (PAR) reduction is an established technique for bringing down signal peaks in multicarrier (DMT or OFDM) systems. When designing such a system, some questions often arise about PAR reduction. Is it worth the effort? How much can it give? How much does it give depending on the parameter choices? With this paper, we attempt to answer these questions without resorting to extensive simulations for every system and every parameter choice. From a specification of the allowed spectrum, for instance one prescribed by a standard, including a PSD mask and a number of tones, we analytically predict achievable PAR levels, and thus implicitly suggest parameter choices. We use the ADSL2 and ADSL2+ systems as design examples.
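To make the PAR figure of merit concrete: for a discrete multitone symbol it is simply peak power over mean power. The sketch below computes the PAR of an unreduced random DMT symbol; it is not the tone-reservation algorithm itself, and the tone count, QPSK loading and seed are illustrative assumptions.

```python
import numpy as np

def par_db(x):
    # Peak-to-average power ratio of a (real or complex) signal, in dB.
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def dmt_symbol(n_tones, rng):
    # Random QPSK data on n_tones carriers with a Hermitian-symmetric
    # spectrum, so the IFFT yields a real DMT time-domain symbol.
    data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_tones)
    spectrum = np.zeros(2 * (n_tones + 1), dtype=complex)  # DC and Nyquist bins stay zero
    spectrum[1:n_tones + 1] = data
    spectrum[n_tones + 2:] = np.conj(data[::-1])
    return np.fft.ifft(spectrum).real

rng = np.random.default_rng(0)
sym = dmt_symbol(256, rng)
print(f"PAR of one random 256-tone DMT symbol: {par_db(sym):.1f} dB")
```

Averaging `par_db` over many random symbols gives the empirical PAR distribution that tone-reservation schemes aim to push down.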
Wind load reduction for heliostats
Peterka, J.A.; Hosoya, N.; Bienkiewicz, B.; Cermak, J.E.
1986-05-01
This report presents the results of wind-tunnel tests supported through the Solar Energy Research Institute (SERI) by the Office of Solar Thermal Technology of the US Department of Energy as part of the SERI research effort on innovative concentrators. As gravity loads on drive mechanisms are reduced through stretched-membrane technology, the wind-load contribution to the required drive capacity increases in percentage terms. Reduction of wind loads can provide economy in the support structure and heliostat drive. Wind-tunnel tests have been directed at finding methods to reduce wind loads on heliostats. The tests investigated primarily the mean forces and moments, and the possibility of measuring fluctuating forces in anticipation of reducing those forces. A significant increase in the ability to predict heliostat wind loads and their reduction within a heliostat field was achieved.
Sustained Low Temperature NOx Reduction
Zha, Yuhui
2017-04-05
Increasing regulatory, environmental, and customer pressure in recent years led to substantial improvements in the fuel efficiency of diesel engines, including the remarkable breakthroughs demonstrated through the Super Truck program supported by the U.S. Department of Energy (DOE). On the other hand, these improvements have translated into a reduction of exhaust gas temperatures, thus further complicating the task of controlling NOx emissions, especially in low power duty cycles. The need for improved NOx conversion over these low temperature duty cycles is also observed as requirements tighten with in-use emissions testing. Sustained NOx reduction at low temperatures, especially in the 150-200 °C range, shares some similarities with the more commonly discussed cold-start challenge, however poses a number of additional and distinct technical problems. In this project we set a bold target of achieving and maintaining a 90% NOx conversion at the SCR catalyst inlet temperature of 150 °C. The project is intended to push the boundaries of the existing technologies, while staying within the realm of realistic future practical implementation. In order to meet the resulting challenges at the levels of catalyst fundamentals, system components, and system integration, Cummins has partnered with the DOE, Johnson Matthey, and Pacific Northwest National Lab and initiated the Sustained Low-Temperature NOx Reduction program at the beginning of 2015. Through this collaboration, we are exploring catalyst formulations and catalyst architectures with enhanced catalytic activity at 150 °C; opportunities to approach the desirable ratio of NO and NO2 in the SCR feed gas; options for robust low-temperature reductant delivery; and the requirements for overall system integration. The program is expected to deliver an on-engine demonstration of the technical solution and an assessment of its commercial potential. In the SAE meeting, we will share the initial performance data on engine to
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein; Ramin Yazdani; Rick Moore; Michelle Byars; Jeff Kieffer; Professor Morton Barlaz; Rinav Mehta
2000-02-26
Controlled landfilling is an approach to managing solid waste landfills so as to rapidly complete methane generation, while maximizing gas capture and minimizing the usual emissions of methane to the atmosphere. With controlled landfilling, methane generation is accelerated to a more rapid and earlier completion of its full potential by improving conditions (principally moisture, but also temperature) to optimize the biological processes occurring within the landfill. Gas is contained through use of a surface membrane cover. Gas is captured via porous layers under the cover, operated at slight vacuum. A field demonstration project has been ongoing under NETL sponsorship for the past several years near Davis, CA. Results have been extremely encouraging. Two major benefits of the technology are the reduction of landfill methane emissions to minuscule levels, and the recovery of greater amounts of landfill methane energy in much shorter times, more predictably, than with conventional landfill practice. Given the large amount of US landfill methane generated, and the greenhouse potency of methane, better landfill methane control can play a substantial role both in the reduction of US greenhouse gas emissions and in US renewable energy. The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of a size (8000 metric tons [tonnes]) sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell has received moisture supplementation to field capacity. This is the maximum moisture the waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with
Climate Change and Poverty Reduction
Anderson, Simon
2011-08-15
Climate change will make it increasingly difficult to achieve and sustain development goals. This is largely because climate effects on poverty remain poorly understood, and poverty reduction strategies do not adequately support climate resilience. Ensuring effective development in the face of climate change requires action on six fronts: investing in a stronger climate and poverty evidence base; applying the learning about development effectiveness to how we address adaptation needs; supporting nationally derived, integrated policies and programmes; including the climate-vulnerable poor in developing strategies; and identifying how mitigation strategies can also reduce poverty and enable adaptation.
Seong Hyeon Park
2015-02-01
The drag-reducing efficiency of outer-layer vertical blades, first devised by Hutchins (2003), has been demonstrated by recent towing tank measurements. From drag measurements of a flat plate with various vertical-blade arrays by Park et al. (2011), a maximum 9.6% reduction of total drag was achieved. The scale of the blade geometry is found to be weakly correlated with the outer variable of the boundary layer. A drag reduction of 2.8% has also been confirmed by the model ship test of An et al. (2014). To enable identification of the drag reduction mechanism of the outer-layer vertical blades, detailed flow field measurements have been performed using 2D time-resolved PIV in this study. It is found that the skin friction reduction effect varies with spanwise position, with 2.73% and 7.95% drag reduction in the blade plane and the blade-in-between plane, respectively. The influence of the vertical-blade array upon the characteristics of the turbulent coherent structures was analyzed by the POD method. It is observed that the vortical structures are cut and deformed by the blade array and that the skin friction reduction is closely associated with the subsequent evolution of the turbulent structures.
Leaf Dynamics of Panicum maximum under Future Climatic Changes.
Britto de Assis Prado, Carlos Henrique; Haik Guedes de Camargo-Bortolin, Lívia; Castro, Érique; Martinez, Carlos Alberto
2016-01-01
Panicum maximum Jacq. 'Mombaça' (C4) was grown in field conditions with sufficient water and nutrients to examine the effects of warming and elevated CO2 concentrations during the winter. Plants were exposed to either the ambient temperature and regular atmospheric CO2 (Control); elevated CO2 (600 ppm, eC); canopy warming (+2°C above regular canopy temperature, eT); or elevated CO2 and canopy warming (eC+eT). The temperatures and CO2 in the field were controlled by temperature free-air controlled enhancement (T-FACE) and mini free-air CO2 enrichment (miniFACE) facilities. The most green, expanding, and expanded leaves and the highest leaf appearance rate (LAR, leaves day(-1)) and leaf elongation rate (LER, cm day(-1)) were observed under eT. Leaf area and leaf biomass were higher in the eT and eC+eT treatments. The higher LER and LAR without significant differences in the number of senescent leaves could explain why tillers had higher foliage area and leaf biomass in the eT treatment. The eC treatment had the lowest LER and the fewest expanded and green leaves, similar to Control. The inhibitory effect of eC on foliage development in winter was indicated by the fewer green, expanded, and expanding leaves under eC+eT than eT. The stimulatory and inhibitory effects of the eT and eC treatments, respectively, on foliage raised and lowered, respectively, the foliar nitrogen concentration. The inhibition of foliage by eC was confirmed by the eC treatment having the lowest leaf/stem biomass ratio and by the change in leaf biomass-area relationships from linear or exponential growth to rectangular hyperbolic growth under eC. Besides, eC+eT had a synergist effect, speeding up leaf maturation. Therefore, with sufficient water and nutrients in winter, the inhibitory effect of elevated CO2 on foliage could be partially offset by elevated temperatures and relatively high P. maximum foliage production could be achieved under future climatic change.
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
Hekmati, Arsalan; Hekmati, Rasoul
2016-12-01
Electrical power quality and stability are important issues nowadays, and the technology of Superconducting Magnetic Energy Storage (SMES) systems has brought real power storage capability to power systems. Optimum SMES design that achieves maximum energy with the least length of tape has therefore been a matter of considerable concern. This paper provides an approach to the design optimization of solenoid and toroid types of SMES, ensuring the maximum possible stored energy. The optimization process, based on a Genetic Algorithm, calculates the operating current of the superconducting tapes through the intersection of a load line with the surface indicating the critical current variation versus the parallel and perpendicular components of the magnetic flux density. FLUX3D simulations of the SMES have been utilized for the energy calculations. Through numerical analysis of the resulting data, formulas have been derived for the optimum dimensions of the superconductor coil and the maximum stored energy for a given length and cross-sectional area of superconductor tape.
A MAXIMUM ENTROPY CHUNKING MODEL WITH N-FOLD TEMPLATE CORRECTION
Anonymous
2007-01-01
This letter presents a new chunking method based on a Maximum Entropy (ME) model with an N-fold template correction model. First, two types of machine learning models are described. Based on an analysis of the two models, a chunking model which combines the benefits of the conditional probability model and the rule-based model is then proposed. The selection of features and rule templates in the chunking model is discussed. Experimental results on the CoNLL-2000 corpus show that this approach achieves impressive accuracy in terms of F-score: 92.93%. Compared with the ME model and the ME Markov model, the new chunking model achieves better performance.
The Predictiveness of Achievement Goals
Huy P. Phan
2013-11-01
Using the Revised Achievement Goal Questionnaire (AGQ-R; Elliot & Murayama, 2008), we explored first-year university students' achievement goal orientations on the premise of the 2 × 2 model. Similar to recent studies (Elliot & Murayama, 2008; Elliot & Thrash, 2010), we conceptualized a model that included both an antecedent (i.e., enactive learning experience) and consequences (i.e., intrinsic motivation and academic achievement) of achievement goals. Two hundred seventy-seven university students (151 women, 126 men) participated in the study. Structural equation modeling procedures yielded evidence for the predictive effects of enactive learning experience and mastery goals on intrinsic motivation. Academic achievement was influenced by intrinsic motivation, performance-approach goals, and enactive learning experience. Enactive learning experience also served as an antecedent of the four achievement goal types. On the whole, the evidence obtained supports the AGQ-R and contributes, theoretically, to the 2 × 2 model.
Mathematics Achievement in High- and Low-Achieving Secondary Schools
Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza
2015-01-01
This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…
Banakar, V.K.
A historic decision was taken by the Preparatory Commission of the International Seabed Authority (PREPCOM) on 17th August 1987. It was decided to allocate to India exclusive rights for the exploration of polymetallic nodules in an area of about...
Design Methodology for a Maximum Sequence Length MASH Digital Delta-Sigma Modulator
Tao Xu; Marissa Condon
2009-01-01
The paper proposes a novel structure for a MASH digital delta-sigma modulator (DDSM) in order to achieve a long sequence length. The expression for the sequence length is derived. The condition to produce the maximum sequence length is also stated. It is proved that the modulator output only depends on the structure of the first-order error feedback modulator (EFM1) which is the first stage of a Multi-stAge noise SHaping (MASH) modulator.
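The abstract's central quantity, the sequence length, can be illustrated by directly simulating the first-stage EFM1 it mentions. The sketch below is a generic first-order error-feedback accumulator, not the authors' specific MASH structure: for modulus M = 2^n and constant input x, the residue sequence has period M/gcd(x, M), so an odd input yields the maximum length M.

```python
from math import gcd

def efm1_sequence_length(x, n_bits):
    # First-order error-feedback modulator (EFM1): accumulator
    # s[k+1] = (s[k] + x) mod M with M = 2**n_bits.  Simulate from s = 0
    # until the state recurs, which gives the sequence length.
    M = 1 << n_bits
    s, steps = 0, 0
    while True:
        s = (s + x) % M
        steps += 1
        if s == 0:          # back to the initial state
            return steps

# The measured length matches M / gcd(x, M) for an 8-bit accumulator:
M = 1 << 8
for x in (1, 3, 4, 6, 8):
    assert efm1_sequence_length(x, 8) == M // gcd(x, M)
print(efm1_sequence_length(3, 8))   # odd input: maximum length, prints 256
```

This is why DDSM designs often force the LSB of the input (or a seed) odd: it guarantees the longest cycle and hence the least tonal quantization noise.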
Magnard, Christophe; Small, David; Meier, Erich
2015-01-01
The phase estimation of cross-track multibaseline synthetic aperture interferometric data is usually thought to be very efficiently achieved using the maximum likelihood (ML) method. The suitability of this method is investigated here as applied to airborne single pass multibaseline data. Experimental interferometric data acquired with a Ka-band sensor were processed using (a) a ML method that fuses the complex data from all receivers and (b) a coarse-to-fine method that only uses the interme...
Wang Guanghua; Sui Jun; Shen Huishan; Liang Shukun; He Xiangming; Zhang Minju; Xie Yizhong; Li Lingyun; Hu Yongyou
2011-08-15
In this study, chlorine dioxide (ClO₂) instead of chlorine (Cl₂) was proposed to minimize the formation of chlorine-based by-products and was incorporated into a sequencing batch reactor (SBR) for excess sludge reduction. The results showed that the sludge disintegrability of ClO₂ was excellent. The waste activated sludge at an initial concentration of 15 g MLSS/L was rapidly reduced by 36% using a ClO₂ dose of 10 mg ClO₂/g dry sludge, which was much lower than that required with Cl₂ for a similar sludge reduction efficiency. Maximum sludge disintegration was achieved at 10 mg ClO₂/g dry sludge for 40 min. ClO₂ oxidation can be successfully incorporated into an SBR for excess sludge reduction without significantly harming the bioreactor performance. The incorporation of ClO₂ oxidation resulted in a 58% reduction in excess sludge production, and the quality of the effluent was not significantly affected.
WANG Zhi-hua; ZHOU Jun-hu; ZHANG Yan-wei; LU Zhi-min; FAN Jian-ren; CEN Ke-fa
2005-01-01
Pulverized coal reburning, ammonia injection and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input and a temperature range from 1100 °C to 1400 °C, as well as carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C and a fuel-rich stoichiometric ratio is essential; finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700-1100 °C. CO can improve the NH3 performance at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NOx Reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only a common dose of ammonia as in conventional SNCR technology. Mechanism study shows that the oxidation of CO can improve the decomposition of H2O, which enriches the radical pools that ignite the whole set of reactions at lower temperatures.
Harm Reduction as “Continuum Care” in Alcohol Abuse Disorder
Maremmani, Icro; Cibin, Mauro; Pani, Pier Paolo; Rossi, Alessandro; Turchetti, Giuseppe
2015-01-01
Alcohol abuse is one of the most important risk factors for health and is a major cause of death and morbidity. Despite this, only about one-tenth of individuals with alcohol abuse disorders receive therapeutic intervention and specific rehabilitation. Among the various dichotomies that limit an effective approach to the problem of alcohol use disorder treatment, one of the most prominent is integrated treatment versus harm reduction. For years, these two divergent strategies have been considered to be opposite poles of different philosophies of intervention. One is bound to the search for methods that aim to lead the subject to complete abstinence; the other prioritizes a progressive decline in substance use, with maximum reduction in the damage that is correlated with curtailing that use. Reduction of alcohol intake does not require any particular setting, but does require close collaboration between the general practitioner, specialized services for addiction, alcohology services and psychiatry. In patients who reach that target, significant savings in terms of health and social costs can be achieved. Harm reduction is a desirable target, even from an economic point of view. At the present state of neuroscientific knowledge, it is possible to go one step further in the logic that led to the integration of psychosocial and pharmacological approaches, by attempting to remove the shadows of social judgment that, at present, are aiming for a course of treatment that is directed towards absolute abstention. PMID:26610535
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The k-limited maximum base problem was specialized into two cases, in which the subset D is, respectively, an independent set and a circuit of the matroid. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the k-limited maximum base problem was transformed into the maximum base problem of this new matroid. For the two special problems, two algorithms, in essence greedy algorithms on the new matroid, were presented. They were proved to be correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
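The greedy principle underlying such algorithms can be sketched generically: given an independence oracle for a matroid, scanning elements in decreasing weight and keeping those that preserve independence yields a maximum-weight base. The uniform-matroid example below is my own illustration, not one of the paper's two algorithms:

```python
def greedy_max_base(elements, weight, is_independent):
    """Generic matroid greedy: scan elements in decreasing weight and
    keep each element that preserves independence. For a matroid this
    returns a maximum-weight base."""
    base = set()
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(base | {e}):
            base.add(e)
    return base

# Toy example: the uniform matroid U(2, 4), whose independent sets are
# exactly the subsets with at most 2 elements.
elems = ["a", "b", "c", "d"]
w = {"a": 3, "b": 5, "c": 1, "d": 4}
base = greedy_max_base(elems, w.get, lambda s: len(s) <= 2)
print(sorted(base))  # ['b', 'd']
```

The greedy exchange property of matroids is what makes this simple scan optimal; on a general independence system it would only be a heuristic.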
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
With the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem. We discuss the interval extension of the maximum entropy function, provide region-deletion test rules, and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
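The abstract does not reproduce the function itself; a standard maximum entropy (aggregate) function used in this family of methods, smoothing the maximum of the constraint functions g_i, has the form

```latex
% Maximum-entropy smoothing of \max_i g_i(x), with parameter p > 0:
F_p(x) = \frac{1}{p}\,\ln\sum_{i=1}^{m}\exp\bigl(p\,g_i(x)\bigr),
\qquad
\max_{1\le i\le m} g_i(x)\;\le\;F_p(x)\;\le\;\max_{1\le i\le m} g_i(x)+\frac{\ln m}{p}.
```

As p grows, the smooth function F_p converges uniformly to the nonsmooth max, which is why the constrained quadratic program can be replaced by a differentiable penalty problem amenable to interval analysis.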
(no author listed)
2002-01-01
A symplectic reduction method for symplectic G-spaces is given in this paper without using the existence of momentum mappings. By a similar method, the authors give a symplectic reduction method for the Poisson action of Poisson Lie groups on symplectic manifolds, also without using the existence of momentum mappings. The symplectic reduction method for momentum mappings is thus a special case of the above results.
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve on earlier formulations, were designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. These two models can easily be implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, verified and compared on several examples.
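The standard binary model for maximum clique can be checked by brute force on a tiny graph: maximize the sum of 0/1 variables x_v subject to x_u + x_v ≤ 1 for every pair {u, v} that is NOT an edge. This sketch enumerates assignments rather than calling Lingo or an ILP solver, and is not the paper's improved formulation:

```python
from itertools import combinations, product

def max_clique_ip(n, edges):
    """Brute-force the classic 0/1 programming model for maximum clique:
    maximize sum(x) subject to x[u] + x[v] <= 1 for every non-edge {u, v}.
    Exhaustive enumeration keeps the sketch dependency-free (tiny graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    non_edges = [(u, v) for u, v in combinations(range(n), 2)
                 if frozenset((u, v)) not in edge_set]
    best = ()
    for x in product((0, 1), repeat=n):
        # feasibility: no two chosen vertices may be non-adjacent
        if all(x[u] + x[v] <= 1 for u, v in non_edges):
            chosen = tuple(v for v in range(n) if x[v])
            if len(chosen) > len(best):
                best = chosen
    return best

# 5-cycle with one chord (0, 2): the triangle {0, 1, 2} is the maximum clique.
print(max_clique_ip(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]))
# (0, 1, 2)
```

The same constraint set with the objective reversed (on the complement graph) yields the maximum independent set, mirroring the corollary mentioned in the abstract.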
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we survey the development of the maximum-entropy clustering algorithm, point out that the maximum-entropy clustering algorithm is not new in essence, and construct two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, the paper shows that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Sociocultural Origins of Achievement Motivation
Maehr, Martin L.
1977-01-01
Presents a theoretical review of work on sociocultural influences on achievement, focusing on a critical evaluation of the work of David McClelland. Offers an alternative conception of achievement motivation which stresses the role of contextual and situational factors in addition to personality factors. Available from: Transaction Periodicals…
Healthy Eating and Academic Achievement
2014-12-09
This podcast highlights the evidence that supports the link between healthy eating and improved academic achievement. It also identifies a few actions to support a healthy school nutrition environment to improve academic achievement. Created: 12/9/2014 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). Date Released: 12/9/2014.
Physical Activity and Academic Achievement
2014-12-09
This podcast highlights the evidence that supports the link between physical activity and improved academic achievement. It also identifies a few actions to support a comprehensive school physical activity program to improve academic achievement. Created: 12/9/2014 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). Date Released: 12/9/2014.
Parental Involvement and Academic Achievement
Goodwin, Sarah Christine
2015-01-01
This research study examined the correlation between student achievement and parent's perceptions of their involvement in their child's schooling. Parent participants completed the Parent Involvement Project Parent Questionnaire. Results slightly indicated parents of students with higher level of achievement perceived less demand or invitations…
Poor Results for High Achievers
Bui, Sa; Imberman, Scott; Craig, Steven
2012-01-01
Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…
Peer relationships and academic achievement
Krnjajić Stevan B.
2002-01-01
Full Text Available After their childhood, when children begin to establish more intensive social contacts outside the family, first of all in the school setting, their behavior, i.e. their social, intellectual, moral and emotional development, is more strongly affected by their peers. Consequently, the quality of peer relationships considerably affects the process of adaptation and academic achievement, and the motivational and emotional attitude towards school. Empirical findings showed that there is a bi-directional influence between peer relationships and academic achievement. In other words, the quality of peer relationships affects academic achievement, and conversely, academic achievement affects the quality of peer relationships. For example, socially accepted children exhibiting prosocial, cooperative and responsible forms of behavior in school most frequently have high academic achievement. On the other hand, children rejected by their peers often have lower academic achievement and are a risk group tending towards delinquency, absenteeism and dropping out of school. These behavioral and interpersonal forms of competence are frequently more reliable predictors of academic achievement than intellectual abilities are. Considering the fact that various patterns of peer interaction exert different influences on students' academic behavior, the paper analyzed the effects of (a) social competence, (b) social acceptance/rejection, (c) children's friendships and (d) prosocial behavior on academic achievement.
Examination Regimes and Student Achievement
Cosentino de Cohen, Clemencia
2010-01-01
Examination regimes at the end of secondary school vary greatly intra- and cross-nationally, and in recent years have undergone important reforms often geared towards increasing student achievement. This research presents a comparative analysis of the relationship between examination regimes and student achievement in the OECD. Using a micro…
Turbulent drag reduction through oscillating discs
Wise, Daniel J
2014-01-01
The changes in a turbulent channel flow subjected to oscillations of wall flush-mounted rigid discs are studied by means of direct numerical simulations. The Reynolds number is $R_\tau = 180$, based on the friction velocity of the stationary-wall case and the half channel height. The primary effect of the wall forcing is a sustained reduction of wall-shear stress, which reaches a maximum of 20%. A parametric study on the disc diameter, maximum tip velocity, and oscillation period is presented, with the aim of identifying the optimal parameters which guarantee maximum drag reduction and maximum net energy saving, computed by taking into account the power spent to actuate the discs. The net saving may be positive and reaches 6%. The Rosenblat viscous pump flow is used to predict the power spent for disc motion in the turbulent channel flow and to estimate localized and transient regions over the disc surface subjected to the turbulent regenerative braking effect, for which the wall turbulence exerts work on the discs. The...
On Achievable Performance of Cognitive Radio Systems
Haddad, Majed; Hayar, Aawatif Menouni
2007-01-01
In this contribution, we investigate the idea of using cognitive radio to reuse locally unused spectrum to increase the total system capacity. We consider a multiband/wideband system in which the primary and cognitive users wish to communicate to different receivers, subject to mutual interference and assume that each user knows only his channel and the unused spectrum through adequate sensing. Under this scheme, a cognitive radio will listen to the channel and, if sensed idle, will transmit during the voids. Within this setting, we provide two simple methods for sensing the idle sub-bands over the total bandwidth. We impose the constraint that users successively transmit over available bands through proper water filling. For the first time, our study has quantified the achievable gain of using cognitive radio with respect to classical radio devices and derive the total spectral efficiency as well as the maximum number of possible pairwise communications of such a cognitive radio system. We finally show that ...
AbouEisha, Hassan M.
2014-01-01
The problem of attribute reduction is an important problem related to feature selection and knowledge discovery. The problem of finding reducts with minimum cardinality is NP-hard. This paper suggests a new algorithm for finding exact reducts with minimum cardinality. The algorithm transforms the initial table into a decision table of a special kind, applies a set of simplification steps to this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. I present results of computer experiments for a collection of decision tables from the UCI ML Repository. For many of the tables in the experiments, the simplification steps alone solved the problem.
CARBON DIOXIDE REDUCTION SYSTEM.
CARBON DIOXIDE , *SPACE FLIGHT, RESPIRATION, REDUCTION(CHEMISTRY), RESPIRATION, AEROSPACE MEDICINE, ELECTROLYSIS, INSTRUMENTATION, ELECTROLYTES, VOLTAGE, MANNED, YTTRIUM COMPOUNDS, ZIRCONIUM COMPOUNDS, NICKEL.
Evaluation of Maximum O2 Consumption: Using Ergo-Spirometry in Severe Heart Failure
Majid Malekmohammad
2012-09-01
Full Text Available Although sports physiologists have long analyzed respiratory gases during exercise, doing so is relatively new in the cardiovascular field, and it is clearly more informative than the standard exercise test, which gives information only about the existence or absence of cardiovascular diseases (CVDs). Through this new method of exercise testing, aerobic and anaerobic parameters are checked and monitored. Twenty-two patients with severe heart failure, candidates for heart transplantation, referred to Massih Daneshvari Hospital in Tehran from Nov. 2007 to Nov. 2008, were enrolled in this study. The study was designed as cross-sectional and evaluated only patients with an ejection fraction less than 30%. Mean O2 consumption was 6.27±4.9 ml/kg/min at rest and 9.48±3.38 ml/kg/min at the anaerobic threshold (AT), exceeding 13 ml/kg/min at maximum, which was significantly more than the expected levels. The respiratory exchange ratio (RER) was over 1 for all patients. This study could not find any statistical correlation between VO2 max and participants' ergonomic factors such as age, height, weight, BMI, or EF. It showed no significant correlation between VO2 max and maximum heart rate (HR max), although maximum O2 consumption was rationally correlated with expiratory ventilation. This means that the patients achieved maximum ventilation through exercise but failed to reach their maximum heart rate, probably due to HF-induced brady-arrhythmia or deconditioning of the skeletal muscles.
Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application
Riza Muhida
2013-07-01
Full Text Available A photovoltaic traffic light system is a significant application of a renewable energy source. The development of the system is an alternative effort by the local authority to reduce expenditure on fees paid to the power supplier, whose power comes from a conventional energy source. Since photovoltaic (PV) modules still have relatively low conversion efficiency, a maximum power point tracking (MPPT) control method is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate; the power stored in the battery is then used at night or on cloudy days. The MPPT is implemented as a DC-DC converter that can step the voltage up or down to achieve the maximum power, using pulse width modulation (PWM) control. From experiment, we obtained an operating voltage with MPPT of 16.454 V; this value has an error of 2.6% compared with the maximum power point voltage of the PV module, which is 16.9 V. Based on this result, it can be said that the MPPT control works successfully to deliver power from the PV module to the battery maximally.
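A minimal perturb-and-observe sketch, one common MPPT strategy (the abstract does not specify the tracking algorithm, so this is an assumption, as is the synthetic PV power curve peaking near 16.9 V):

```python
def perturb_and_observe(measure_power, v, dv=0.1, steps=200):
    """Minimal perturb-and-observe MPPT sketch: nudge the operating
    voltage and keep moving in whichever direction increased the
    measured power; reverse when power drops."""
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:          # power dropped -> reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Hypothetical PV power curve with its maximum power point near 16.9 V.
pv_power = lambda v: max(0.0, -0.5 * (v - 16.9) ** 2 + 60.0)
v_mpp = perturb_and_observe(pv_power, 12.0)
print(round(v_mpp, 1))
```

The tracker ends up oscillating within one perturbation step of the maximum power point, which is the characteristic steady-state behavior of perturb-and-observe; in a real converter the voltage step would be realized through the PWM duty cycle.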
Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.
Yeadon, Maurice R; King, Mark A; Wilson, Cassie
2006-01-01
The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 degrees s(-1). The theoretical tetanic torque/angular velocity relationship was modelled using a four parameter function comprising two rectangular hyperbolas while the activation/angular velocity relationship was modelled using a three parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
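The product structure described above can be sketched numerically. A hedged illustration follows: the functional forms and all parameter values are placeholders for demonstration, not the paper's fitted seven-parameter model.

```python
import math

def voluntary_torque(omega, T0=250.0, wmax=900.0, wc=300.0, we=150.0,
                     a_min=0.8, a_max=1.0, k=0.02):
    """Illustrative voluntary torque = tetanic(omega) * activation(omega).
    Concentric side: a Hill-type rectangular hyperbola falling to zero at
    wmax; eccentric side: torque rising above isometric toward a plateau;
    activation: a logistic rise from submaximal (eccentric) to full
    (fast concentric). All forms/values are assumptions, not fitted data."""
    if omega >= 0:  # concentric (shortening)
        tetanic = T0 * (1 - omega / wmax) / (1 + omega / wc) if omega < wmax else 0.0
    else:           # eccentric (lengthening)
        tetanic = T0 * (1.5 - 0.5 / (1 + abs(omega) / we))
    activation = a_min + (a_max - a_min) / (1 + math.exp(-k * omega))
    return tetanic * activation

for w in (-200, 0, 200, 400):   # deg/s; negative = eccentric
    print(w, round(voluntary_torque(w), 1))
```

The submaximal activation on the eccentric side is what flattens the measured voluntary torque there, reproducing the non-hyperbolic behavior the abstract attributes to differential activation.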
Strategies for poverty reduction
Øyen, Else
2003-01-01
SIU konferanse Solstrand 6.-7. October 2003. Higher education has a value of its own. When linked to the issue of poverty reduction, it is necessary to ask another set of questions, including the crucial one of whether higher education in general is the best tool for poverty reduction.
Su-Qing Han; Jue Wang
2004-01-01
Based on the principle of the discernibility matrix, a reduction algorithm with attribute order has been developed, and its solution has been proved to be complete for the reduct and unique for a given attribute order. Called the reduct problem, this algorithm can be regarded as a mapping R = Reduct(S) from the attribute order space θ to the reduct space R for an information system, where U is the universe and C and D are the sets of condition and decision attributes, respectively. This paper focuses on the reverse of the reduct problem, S = Order(R); i.e., for a given reduct R of an information system, we determine the solution of S = Order(R) in the space θ. First, we prove that there is at least one attribute order S such that S = Order(R). Then, some decision rules are proposed, which can be used directly to decide whether a pair of attribute orders has the same reduct. The main method is based on the fact that an attribute order can be transformed into another one by moving an attribute a limited number of times; thus, the decision for a pair of attribute orders can be reduced to decisions for a sequence of neighboring pairs of attribute orders. Therefore, the basic theorem on neighboring pairs of attribute orders is proved first, and the decision theorem on attribute orders is then proved accordingly by means of the second attribute.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Full Text Available Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
(no author listed)
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
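As a concrete sketch of this family of algorithms, one maximum-entropy-style iteration on 1-D data alternates Gibbs (soft-max) memberships with membership-weighted centroid updates. The data, initialization, and the fixed β below are illustrative assumptions; an annealing schedule would raise β over the iterations:

```python
import math

def maxent_cluster(points, beta=5.0, iters=50):
    """Maximum-entropy-style clustering sketch on 1-D data:
    memberships p_ij proportional to exp(-beta * (x_i - c_j)**2),
    then centroids updated as membership-weighted means."""
    centers = [min(points), max(points)]  # deterministic 2-center init
    for _ in range(iters):
        # soft (Gibbs) assignments
        P = []
        for x in points:
            w = [math.exp(-beta * (x - c) ** 2) for c in centers]
            s = sum(w)
            P.append([wi / s for wi in w])
        # membership-weighted centroid update
        centers = [
            sum(P[i][j] * points[i] for i in range(len(points))) /
            sum(P[i][j] for i in range(len(points)))
            for j in range(len(centers))
        ]
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print([round(c, 2) for c in maxent_cluster(data)])  # [0.1, 5.1]
```

As β → ∞ the Gibbs memberships harden to 0/1 assignments and the scheme reduces to hard C-means, which is the "soft generalization" relationship the abstract mentions.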
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
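For H = 1/2, fractional Brownian motion reduces to standard Brownian motion, so the maximum loss and its tail probability can be explored by simulation. A small Monte Carlo sketch (the discretization level, horizon, and sample counts are arbitrary choices, not the paper's setup):

```python
import random

def max_loss(path):
    """Maximum loss of a path: the largest drop below the running
    maximum, i.e. sup over s <= u of (X_s - X_u)."""
    running_max, loss = path[0], 0.0
    for x in path:
        running_max = max(running_max, x)
        loss = max(loss, running_max - x)
    return loss

def brownian_path(n, t=1.0, rng=None):
    """Standard Brownian motion on [0, t] (fBm with H = 1/2) built from
    independent Gaussian increments."""
    rng = rng or random.Random()
    dt = t / n
    b, path = 0.0, [0.0]
    for _ in range(n):
        b += rng.gauss(0.0, dt ** 0.5)
        path.append(b)
    return path

# Monte Carlo estimate of the tail probability P(max loss over [0,1] > 1).
rng = random.Random(1)
est = sum(max_loss(brownian_path(500, rng=rng)) > 1.0 for _ in range(2000)) / 2000
print(round(est, 3))
```

For H ≠ 1/2 the increments are correlated and a dedicated fBm generator (e.g. Cholesky or circulant embedding) would replace `brownian_path`; the `max_loss` functional itself is unchanged.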
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
Isothermal reduction of titanomagnetite concentrates containing coal
Tu Hu; Xue-wei Lü; Chen-guang Bai; Gui-bao Qiu
2014-01-01
The isothermal reduction of Panzhihua titanomagnetite concentrates (PTC) briquettes containing coal under an argon atmosphere was investigated by thermogravimetry in an electric resistance furnace within the temperature range of 1250-1350°C. Samples reduced in argon at 1350°C for different times were examined by X-ray diffraction (XRD) analysis. Model-fitting and model-free methods were used to evaluate the apparent activation energy of the reduction reaction. It is found that the reduction rate is very fast at the early stage; at a later stage, the reduction rate becomes slow and decreases gradually to the end of the reduction. It is also observed that the reduction of PTC by coal depends greatly on the temperature. At high temperatures, the reduction degree reaches high values faster, and the final value achieved is higher than at low temperatures. The final phase composition of the reduced PTC-coal briquette consists of iron and ferrous-pseudobrookite (FeTi2O5), while Fe2.75Ti0.25O4, Fe2.5Ti0.5O4, Fe2.25Ti0.75O4, ilmenite (FeTiO3) and wustite (FeO) are intermediate products. The reaction rate is controlled by the phase boundary reaction for reduction degrees less than 0.2, with an apparent activation energy of about 68 kJ·mol-1, and by three-dimensional diffusion for reduction degrees greater than 0.75, with an apparent activation energy of about 134 kJ·mol-1. For reduction degrees in the range of 0.2-0.75, the reaction rate is under mixed control, and the activation energy increases with the reduction degree.
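The model-fitting step behind such quoted activation energies can be illustrated with an Arrhenius fit: regress ln k on 1/T and read Ea off the slope. The rate data below are synthetic, generated from an assumed Ea, not the paper's measurements:

```python
import math

def arrhenius_ea(temps_k, rate_constants):
    """Estimate the apparent activation energy Ea from the Arrhenius law
    ln k = ln A - Ea/(R*T), by least-squares regression of ln k on 1/T."""
    R = 8.314  # gas constant, J/(mol*K)
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
             sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # Ea in J/mol

# Synthetic rate constants generated with Ea = 134 kJ/mol; the fit
# should recover that value.
Ea_true, A = 134e3, 1e3
T = [1523.0, 1573.0, 1623.0]  # 1250-1350 C expressed in kelvin
k = [A * math.exp(-Ea_true / (8.314 * t)) for t in T]
print(round(arrhenius_ea(T, k) / 1e3, 1))  # ~134.0 kJ/mol
```

With real thermogravimetric data, k at each temperature would first be extracted from the chosen kinetic model (phase-boundary or diffusion controlled), which is where the model-fitting and model-free methods differ.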
Marcelo Matida Hamata
2009-02-01
Full Text Available Fabrication of occlusal splints in centric relation for temporomandibular disorder (TMD) patients is arguable, since this position has been defined for the asymptomatic stomatognathic system. Thus, maximum intercuspation might be employed in patients with occlusal stability, eliminating the need for interocclusal records. This study compared occlusal splints fabricated in centric relation and in maximum intercuspation with respect to muscle pain reduction in TMD patients. Twenty patients with TMD of myogenous origin and bruxism were divided into 2 groups treated with splints in maximum intercuspation (I) or centric relation (II). Clinical, electrognathographic and electromyographic examinations were performed before and 3 months after therapy. Data were analyzed by Student's t test; differences at the 5% level of probability were considered statistically significant. There was a remarkable reduction in pain symptomatology, without statistically significant differences (p>0.05) between the groups. There was mandibular repositioning during therapy, as demonstrated by the change in occlusal contacts on the splints. Electrognathographic examination demonstrated a significant increase in maximum left lateral movement for group I and right lateral movement for group II (p<0.05). There were no significant differences (p>0.05) in the electromyographic activities at rest after utilization of either splint. In conclusion, both occlusal splints were effective for pain control and presented similar action. The results suggest that maximum intercuspation may be used for the fabrication of occlusal splints in patients with occlusal stability without large discrepancies between centric relation and maximum intercuspation. Moreover, this technique is simpler and less expensive.
(no author listed)
2009-01-01
This paper analyzes the factors constraining the transformation of forestry scientific and technological achievements and explores countermeasures to promote the transformation. It points out that, to achieve maximum return on funds invested in forestry research, it is necessary to improve the transformation of scientific and technological achievements, enhance independent innovation capability, and greatly enhance the supply capacity of scientific and technological achievements, so as to provide endles...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, $u_x' \,(\text{or } \Delta_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$, $x \in \mathbb{Z}$. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
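A minimal explicit-time sketch of the lattice Nagumo equation (my own illustration: the step size, lattice size, boundary handling, and bistable parameter a = 0.3 are assumptions) showing the invariance 0 ≤ u ≤ 1 that a discrete maximum principle guarantees for a sufficiently small time step:

```python
def nagumo_step(u, k=1.0, dt=0.1, a=0.3):
    """One explicit Euler step of the lattice Nagumo equation
    u_x' = k*(u_{x-1} - 2*u_x + u_{x+1}) + f(u_x), with the bistable
    nonlinearity f(u) = u*(1 - u)*(u - a) and zero-flux ends."""
    f = lambda v: v * (1.0 - v) * (v - a)
    n = len(u)
    new = []
    for x in range(n):
        left = u[x - 1] if x > 0 else u[x]    # reflect at the left end
        right = u[x + 1] if x < n - 1 else u[x]  # reflect at the right end
        new.append(u[x] + dt * (k * (left - 2.0 * u[x] + right) + f(u[x])))
    return new

# Step initial datum taking values in [0, 1]; with dt this small the
# solution stays in [0, 1], as a discrete maximum principle predicts.
u = [1.0 if x < 5 else 0.0 for x in range(20)]
for _ in range(100):
    u = nagumo_step(u)
print(0.0 <= min(u) and max(u) <= 1.0)  # True
```

Taking dt large enough to violate the invariance condition (roughly dt·(2k + max|f'|) > 1) lets the iterates overshoot [0, 1], which is the discrete-time failure mode the abstract alludes to.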
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to, and slightly above, the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating the PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave applied at the boundary of a tunnel or a borehole.
Emission reductions and urban ozone responses under more stringent US standards
Downey, Nicole; Emery, Chris; Jung, Jaegun; Sakulyanontvittaya, Tanarit; Hebert, Laura; Blewitt, Doug; Yarwood, Greg
2015-01-01
We use a photochemical grid model instrumented with the high-order Decoupled Direct Method (HDDM) to evaluate the response of ozone (O3) to reductions in US-wide anthropogenic emissions, and to estimate emission reductions necessary to meet more stringent National Ambient Air Quality Standards (NAAQS) for O3. We simulate hourly O3 response to nationwide reductions in nitrogen oxides (NOx) and volatile organic compound (VOC) emissions throughout 2006 and compare O3 responses in 4 US cities: Los Angeles, Sacramento, St. Louis, and Philadelphia. We compare O3 responses between NOx-rich, O3-inhibited urban core sites and NOx-sensitive, higher O3 suburban sites and analyze projected O3 frequency distributions, which can be used to drive health effect models. We find that 2006 anthropogenic NOx and VOC emissions must be reduced by 60-70% to reach annual 4th highest (H4) maximum daily 8-h (MDA8) O3 of 75 ppb (the current US standard) in Sacramento, St. Louis, and Philadelphia, and by 80-85% to reach an H4 MDA8 of 60 ppb. Los Angeles requires larger emissions reductions and achieves an H4 MDA8 of 75 ppb with 92% reductions and 60 ppb with 97% reductions. As emissions are reduced, hourly and MDA8 frequency distributions tend toward mid-level background distributions. Mid-level O3 exposure is an important driver of O3 health impacts calculated by epidemiological models. A significant fraction (at least 48%) of summertime integrated MDA8 O3 at all sites remains after complete elimination of US anthropogenic NOx and VOC emissions, implying that mid-level O3 exposure due to background will become more important as domestic precursor emissions are controlled.
Childhood Obesity and Cognitive Achievement.
Black, Nicole; Johnston, David W; Peeters, Anna
2015-09-01
Obese children tend to perform worse academically than normal-weight children. If poor cognitive achievement is truly a consequence of childhood obesity, this relationship has significant policy implications. Therefore, an important question is to what extent can this correlation be explained by other factors that jointly determine obesity and cognitive achievement in childhood? To answer this question, we exploit a rich longitudinal dataset of Australian children, which is linked to national assessments in math and literacy. Using a range of estimators, we find that obesity and body mass index are negatively related to cognitive achievement for boys but not girls. This effect cannot be explained by sociodemographic factors, past cognitive achievement or unobserved time-invariant characteristics and is robust to different measures of adiposity. Given the enormous importance of early human capital development for future well-being and prosperity, this negative effect for boys is concerning and warrants further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
SIGN hip construct: achieving hip fracture fixation without using an ...
Objective: The aim of this study was to assess outcomes of using the SIGN Hip Construct (SHC) to achieve ... The majority (76%) of patients were ambulatory within 3 days after the surgery. ... Conclusion: Using the SIGN Hip Construct, hip fracture fixation can be ... elderly has made stable reduction and internal fixation.
Housing Affordability And Children's Cognitive Achievement.
Newman, Sandra; Holupka, C Scott
2016-11-01
Housing cost burden-the fraction of income spent on housing-is the most prevalent housing problem affecting the healthy development of millions of low- and moderate-income children. By affecting disposable income, a high burden affects parents' expenditures on both necessities for and enrichment of their children, as well as investments in their children. Reducing those expenditures and investments, in turn, can affect children's development, including their cognitive skills and physical, social, and emotional health. This article summarizes the first empirical evidence of the effects of housing affordability on children's cognitive achievement and on one factor that appears to contribute to these effects: the larger expenditures on child enrichment by families in affordable housing. We found that housing cost burden has the same relationship to both children's cognitive achievement and enrichment spending on children, exhibiting an inverted U shape in both cases. The maximum benefit occurs when housing cost burden is near 30 percent of income-the long-standing rule-of-thumb definition of affordable housing. The effect of the burden is stronger on children's math ability than on their reading comprehension and is more pronounced with burdens above the 30 percent standard. For enrichment spending, the curve is "shallower" (meaning the effect of optimal affordability is less pronounced) but still significant.
Drag Reduction by Microvortexes in Transverse Microgrooves
Bao Wang
2014-07-01
Full Text Available A transverse microgrooved surface was employed here to reduce the surface drag force by creating a slippage in bottom layer in turbulent boundary layer. A detailed simulation and experimental investigation on drag reduction by transverse microgrooves were given. The computational fluid dynamics simulation, using RNG k-ε turbulent model, showed that the vortexes were formed in the grooves and they were a main reason for the drag reduction. On the upside of the vortex, the revolving direction was consistent with the main flow, which decreased the flow shear stress by declining the velocity gradient. The experiments were carried out in a high-speed water tunnel with flow velocity varying from 17 to 19 m/s. The experimental results showed that the drag reduction was about 13%. Therefore, the computational and experimental results were cross-checked and consistent with each other to prove that the presented approach achieved effective drag reduction underwater.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Chen, Jun; Li, Yan; Hao, Hong-hong; Zheng, Ji; Chen, Jian-meng
2015-02-01
The reduction of Fe(II)EDTA-NO is one of the core processes in BioDeNOx, an integrated physicochemical and biological technique for NOx removal from industrial flue gases. A newly isolated thermophilic Anoxybacillus sp. HA, identified by 16S rRNA sequence analysis, could simultaneously reduce Fe(II)EDTA-NO and Fe(III)EDTA. A maximum NO removal efficiency of 98.7% was achieved when 3 mM Fe(II)EDTA-NO was used in the nutrient solution at 55°C. Results of this study strongly indicated that the biological oxidation of Fe(II)EDTA played an important role in the formation of Fe(III)EDTA in the anaerobic system. Fe(II)EDTA-NO was more competitive than Fe(III)EDTA as an electron acceptor, and the presence of Fe(III)EDTA only slightly affected the reduction rate of Fe(II)EDTA-NO. At 55°C, the maximum microbial specific growth rate μmax reached its peak value of 0.022 h(-1), and the maximum NO removal efficiency (95.4%) was also measured at this temperature. Anoxybacillus sp. HA, which grew well at 50°C-60°C, is a potential microbial resource for Fe(II)EDTA-NO reduction at thermophilic temperatures.
NONE
1999-03-01
Nitrous oxide (N2O) is a potent greenhouse gas, with a per-molecule warming potential roughly 300 times that of CO2, and was designated for reduction at the Kyoto Conference. This preliminary study discusses whether research and development on reducing N2O emissions is necessary and, if so, proposes what research should be planned. Fiscal 1997, the first year of the study, surveyed emission amounts from different emission sources and enumerated candidate research and development tasks. Fiscal 1998, the final year, summarized the emission amounts including future trends, tested the feasibility of the promising technological measures through experiments, and finally proposed a research and development plan for future implementation. The proposal centers on the development of N2O decomposition catalysts and automobile catalysts. Among domestic N2O generation sources, about two-thirds are man-made, so such catalysts, once developed, could be applied to facilities such as combustion furnaces. (NEDO)
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
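The "within a factor of 10" criterion used above to judge model performance can be made concrete with a short check (a generic sketch of the criterion, not part of the WARP models):

```python
import math

def within_factor(pred, obs, factor=10.0):
    """True when pred/obs lies between 1/factor and factor,
    i.e. |log10(pred/obs)| <= log10(factor)."""
    return abs(math.log10(pred / obs)) <= math.log10(factor)

# Hypothetical predicted vs observed concentrations (same units):
print(within_factor(3.2, 0.5))    # True: ratio 6.4, inside a factor of 10
print(within_factor(120.0, 0.5))  # False: ratio 240, outside
```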
CO2 maximum in the oxygen minimum zone (OMZ)
V. Garçon
2011-02-01
Full Text Available Oxygen minimum zones (OMZs), known as suboxic layers mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to contribute significantly to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the budget of oceanic sources and sinks of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure associated locally with the Chilean OMZ and globally with the most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning that the DIC increase from the oxygenated ocean to the OMZ is lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galecerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
A Method of Attribute Reduction Based on Rough Set
LI Chang-biao; SONG Jian-ping
2005-01-01
Logging attribute optimization is an important task in well-logging interpretation. A method of attribute reduction based on rough set theory is presented. First, the core information of the sample is determined by a general reduction method. Then, the significance of each dispensable attribute in the reduction table is calculated. Finally, the minimum relative reduction set is obtained. Typical calculations and quantitative computation of reservoir parameters in oil logging show that the attribute reduction method is effective and feasible in logging interpretation.
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
None
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
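As a minimal illustration of the connection discussed above (a generic sketch, not the paper's CML corrector): when the noise is Gaussian and white, maximizing the likelihood of the residuals is equivalent to minimizing the sum of squared errors, so the maximum likelihood estimate of linear parameters coincides with the ordinary least-squares estimate. On synthetic data:

```python
import random

random.seed(0)

# Synthetic system: y = a*x + b plus Gaussian white noise.
a_true, b_true = 2.0, -1.0
xs = [i / 10 for i in range(100)]
ys = [a_true * x + b_true + random.gauss(0, 0.1) for x in xs]

# For Gaussian white noise the log-likelihood is, up to constants,
# -(1/2σ²)·Σ(y − a·x − b)², so the MLE of (a, b) is the
# least-squares solution, available in closed form:
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b_hat = my - a_hat * mx
print(a_hat, b_hat)  # close to 2.0 and -1.0
```

The paper's point is that a recursive corrector can shrink the asymptotic error below that of this plain least-squares/MLE estimate; the sketch only shows the baseline the corrector improves on.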
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
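The cut-off interpretation above can be checked with a back-of-the-envelope calculation: for cyclotron emission, the electron gyrofrequency is f_ce = eB/(2πm_e), so a 38-40 MHz cut-off implies a maximum magnetic flux density of roughly 14 gauss (a sketch using CODATA constants; not a computation from the paper):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg

def gyrofrequency_hz(b_tesla):
    """Electron cyclotron frequency f_ce = e*B / (2*pi*m_e)."""
    return E_CHARGE * b_tesla / (2 * math.pi * E_MASS)

def field_for_cutoff(f_hz):
    """Invert f_ce to get the field implied by an observed cut-off frequency."""
    return 2 * math.pi * E_MASS * f_hz / E_CHARGE

b = field_for_cutoff(40e6)      # 40 MHz cut-off
print(b * 1e4, "gauss")         # roughly 14 G
```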
Selective preparation of the maximum coherent superposition state in four-level atoms
Li Deng; Yueping Niu; Shangqing Gong
2011-01-01
We demonstrate that the maximum coherent superposition state can be selectively prepared using a sequence of pulse pairs in lambda-type atomic systems with the final level as a doublet. In each pair, the Stokes pulse comes before the pump pulse, with their back edges overlapping. Numerical results indicate that by tuning the interval of the adjacent pulse pairs, selective maximum coherent superposition state preparation between the initial level and one of the final levels can be achieved. The phenomenon is caused by the accumulative property of the pulse sequence. The coherent superposition state in atoms or molecules plays a crucial role in quantum physics. It has applications in many areas such as electromagnetically induced transparency [1-5], quantum information [6-8] and control of chemical reactions [9-11]. Many schemes can prepare the coherent superposition state. For instance, fractional stimulated Raman adiabatic passage (F-STIRAP) [12] and coherent population trapping [13] can obtain the maximum coherent superposition state of the two lower levels in lambda-type atoms. Our group has also proposed several schemes to achieve this goal, such as methods based on STIRAP [14,15] and the pulse train method [16].
Exemplar pediatric collaborative improvement networks: achieving results.
Billett, Amy L; Colletti, Richard B; Mandel, Keith E; Miller, Marlene; Muething, Stephen E; Sharek, Paul J; Lannon, Carole M
2013-06-01
A number of pediatric collaborative improvement networks have demonstrated improved care and outcomes for children. Regionally, Cincinnati Children's Hospital Medical Center Physician Hospital Organization has sustained key asthma processes, substantially increased the percentage of their asthma population receiving "perfect care," and implemented an innovative pay-for-performance program with a large commercial payor based on asthma performance measures. The California Perinatal Quality Care Collaborative uses its outcomes database to improve care for infants in California NICUs. It has achieved reductions in central line-associated blood stream infections (CLABSI), increased breast-milk feeding rates at hospital discharge, and is now working to improve delivery room management. Solutions for Patient Safety (SPS) has achieved significant improvements in adverse drug events and surgical site infections across all 8 Ohio children's hospitals, with 7700 fewer children harmed and >$11.8 million in avoided costs. SPS is now expanding nationally, aiming to eliminate all events of serious harm at children's hospitals. National collaborative networks include ImproveCareNow, which aims to improve care and outcomes for children with inflammatory bowel disease. Reliable adherence to Model Care Guidelines has produced improved remission rates without using new medications and a significant increase in the proportion of Crohn disease patients not taking prednisone. Data-driven collaboratives of the Children's Hospital Association Quality Transformation Network initially focused on CLABSI in PICUs. By September 2011, they had prevented an estimated 2964 CLABSI, saving 355 lives and $103,722,423. Subsequent improvement efforts include CLABSI reductions in additional settings and populations.
Lymphedema Risk Reduction Practices
2017 NLN International Conference Position Paper: Lymphedema Risk Reduction Practices. ... wash with soap and water, pat dry, then apply a topical antibacterial. d. Wear non-constricting protective gear over the ...
Gates, S James; Stiffler, Kory
2013-01-01
We show that performing a general ``0-brane reduction'' along an arbitrary fixed direction in spacetime, applied to the starting point of minimal, off-shell 4D, $\cal N$ $=$ 1 irreducible supermultiplets, yields adinkras whose adjacency matrices are among the special cases proposed by Kuznetsova, Rojas, and Toppan. However, these more general reductions can also lead to `Garden Algebra' structures beyond those described in their work. It is also shown that, for light-like directions, reduction to the 0-brane breaks the equality between the numbers of fermions and bosons in dynamical theories. This implies that light-like reductions should instead be done to the space of 1-branes or, equivalently, to the worldsheet.
None
2017-03-01
Hybrid utility trucks, with auxiliary power sources for on-board equipment, significantly reduce unnecessary idling resulting in fuel costs savings, less engine wear, and reduction in noise and emissions.
Dust fluxes and iron fertilization in Holocene and Last Glacial Maximum climates
Lambert, Fabrice; Tagliabue, Alessandro; Shaffer, Gary; Lamy, Frank; Winckler, Gisela; Farias, Laura; Gallardo, Laura; De Pol-Holz, Ricardo
2015-07-01
Mineral dust aerosols play a major role in present and past climates. To date, we rely on climate models for estimates of dust fluxes to calculate the impact of airborne micronutrients on biogeochemical cycles. Here we provide a new global dust flux data set for Holocene and Last Glacial Maximum (LGM) conditions based on observational data. A comparison with dust flux simulations highlights regional differences between observations and models. By forcing a biogeochemical model with our new data set and using this model's results to guide a millennial-scale Earth System Model simulation, we calculate the impact of enhanced glacial oceanic iron deposition on the LGM-Holocene carbon cycle. On centennial timescales, the higher LGM dust deposition results in a weak reduction of atmospheric CO2 due to enhanced efficiency of the biological pump. This is followed by a further ~10 ppm reduction over millennial timescales due to greater carbon burial and carbonate compensation.
Dissimilatory metal reduction.
Lovley, D R
1993-01-01
Microorganisms can enzymatically reduce a variety of metals in metabolic processes that are not related to metal assimilation. Some microorganisms can conserve energy to support growth by coupling the oxidation of simple organic acids and alcohols, H2, or aromatic compounds to the reduction of Fe(III) or Mn(IV). This dissimilatory Fe(III) and Mn(IV) reduction influences the organic as well as the inorganic geochemistry of anaerobic aquatic sediments and ground water. Microorganisms that use U(VI) as a terminal electron acceptor play an important role in uranium geochemistry and may be a useful tool for removing uranium from contaminated environments. Se(VI) serves as a terminal electron acceptor to support anaerobic growth of some microorganisms. Reduction of Se(VI) to Se(0) is an important mechanism for the precipitation of selenium from contaminated waters. Enzymatic reduction of Cr(VI) to the less mobile and less toxic Cr(III), and reduction of soluble Hg(II) to volatile Hg(0), may affect the fate of these compounds in the environment and might be used as a remediation strategy. Microorganisms can also enzymatically reduce other metals such as technetium, vanadium, molybdenum, gold, silver, and copper, but reduction of these metals has not been studied extensively.
Hexavalent chromium reduction and energy recovery by using dual-chambered microbial fuel cell.
Gangadharan, Praveena; Nambi, Indumathi M
2015-01-01
Microbial fuel cell (MFC) technology is utilized to treat hexavalent chromium (Cr(VI)) in wastewater and to generate electricity simultaneously. The Cr(VI) is bioelectrochemically reduced to the non-toxic Cr(III) form in the presence of an organic electron donor in a dual-chambered MFC. With Cr(VI) as catholyte and artificial wastewater inoculated with anaerobic sludge as anolyte, Cr(VI) at 100 mg/L was completely removed within 48 h (initial pH value 2.0). The total amount of Cr recovered was 99.87%, by precipitation of Cr(III) on the surface of the cathode. In addition, 78.4% total organic carbon reduction was achieved in the anode chamber within 13 days of operation. Furthermore, a maximum power density of 767.01 mW/m² (2.08 mA/m²) was achieved by the MFCs at ambient conditions. The present work successfully demonstrated the feasibility of using MFCs for simultaneous energy production from wastewater and reduction of toxic Cr(VI) to non-toxic Cr(III).
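Power density figures like the one reported above are electrical power normalized by electrode area. The arithmetic can be sketched as follows, using hypothetical voltage, current, and area values (not taken from the study):

```python
# Hypothetical MFC operating point (assumed values, for illustration only).
voltage = 0.45            # V, cell voltage under load
current = 1.2e-3          # A, measured cell current
electrode_area = 7.0e-4   # m^2, projected electrode area

# Power density = P / A = V * I / A; current density = I / A.
power_density = voltage * current / electrode_area   # W/m^2
current_density = current / electrode_area           # A/m^2

print(round(power_density * 1000, 1), "mW/m^2")  # prints: 771.4 mW/m^2
```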
Evaluation of the Advanced Subsonic Technology Program Noise Reduction Benefits
Golub, Robert A.; Rawls, John W., Jr.; Russell, James W.
2005-01-01
This report presents a detailed evaluation of the aircraft noise reduction technology concepts developed during the course of the NASA/FAA Advanced Subsonic Technology (AST) Noise Reduction Program. In 1992, NASA and the FAA initiated a cosponsored, multi-year program with the U.S. aircraft industry focused on achieving significant advances in aircraft noise reduction. The program achieved success through a systematic development and validation of noise reduction technology. Using the NASA Aircraft Noise Prediction Program, the noise reduction benefit of the technologies that reached a NASA technology readiness level of 5 or 6 were applied to each of four classes of aircraft which included a large four engine aircraft, a large twin engine aircraft, a small twin engine aircraft and a business jet. Total aircraft noise reductions resulting from the implementation of the appropriate technologies for each class of aircraft are presented and compared to the AST program goals.
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on IR N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a Maximum Principle for a cooperative elliptic system on the whole IR^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
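Independent of the pyramid machinery, the core of maximum intensity projection is a per-ray maximum: each output pixel takes the largest voxel value along the viewing ray. A tiny sketch on a hand-made volume (illustrative only, not the authors' multiresolution algorithm):

```python
# A 3-D volume as nested lists, indexed [z][row][col].
volume = [
    [[0, 3], [1, 0]],   # z = 0 slice
    [[5, 1], [0, 2]],   # z = 1 slice
    [[2, 2], [4, 0]],   # z = 2 slice
]

def mip(vol):
    """Maximum intensity projection along the z axis:
    output[r][c] = max over z of vol[z][r][c]."""
    depth, rows, cols = len(vol), len(vol[0]), len(vol[0][0])
    return [[max(vol[z][r][c] for z in range(depth)) for c in range(cols)]
            for r in range(rows)]

print(mip(volume))  # [[5, 3], [4, 2]]
```

A multiresolution scheme such as the one described above would compute such projections progressively on coarser versions of the volume, refining toward this exact result.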
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (ie point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We highlight each of these contributions in turn, first in the context of a simple example and then in a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight, and conclude with a discussion of remaining challenges.
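A minimal sketch of the maximum entropy idea (illustrative only, with made-up observable values): among all reweightings of a simulated ensemble that reproduce a target experimental average, exponential weights exp(λ·f_i) are the ones closest in relative entropy to the original uniform weights; λ can be found by bisection because the weighted average is monotone in λ:

```python
import math

# Hypothetical per-frame observable values from a simulation, and a
# target experimental ensemble average the reweighting should match.
obs = [1.0, 2.0, 3.0, 4.0]
target = 3.0

def weighted_avg(lam):
    """Ensemble average under maximum entropy weights w_i ∝ exp(lam * f_i)."""
    w = [math.exp(lam * o) for o in obs]
    return sum(wi * oi for wi, oi in zip(w, obs)) / sum(w)

# weighted_avg is monotonically increasing in lam, so bisect for the
# multiplier that reproduces the target average.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if weighted_avg(mid) < target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
print(round(weighted_avg(lam), 6))  # ≈ 3.0
```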
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...