WorldWideScience

Sample records for maximum percent decrease

  1. 7 CFR 762.129 - Percent of guarantee and maximum loss.

    Science.gov (United States)

    2010-01-01

    ... loss. (a) General. The percent of guarantee will not exceed 90 percent based on the credit risk to the lender and the Agency both before and after the transaction. The Agency will determine the percentage of... PLP lenders will not be less than 80 percent. (d) Maximum loss. The maximum amount the Agency will pay...

  2. Experience with high percent step load decrease from full power in NPP Krsko

    International Nuclear Information System (INIS)

    Vukovic, V.

    2000-01-01

The control system of NPP Krsko is designed to automatically control the reactor in the power range between 15 and 100 percent of rated power for the following design transients: a 10 percent step change in load; 5 percent per minute loading and unloading; and a full load step decrease with the aid of automatically initiated and controlled steam dump. Because station operation below 15 percent of rated power is expected only for limited periods during startup or standby conditions, automatic control below 15 percent is not provided. The steam dump accomplishes the following functional tasks: it permits the nuclear plant to accept a sudden 95 percent loss of load without incurring a reactor trip; it removes stored energy and residual heat following a reactor trip and brings the plant to equilibrium no-load conditions without actuation of the steam generator safety valves; and it permits control of the steam generator pressure at no-load conditions and a manually controlled cooldown of the plant. The first two functional tasks are controlled by Tavg; the third is controlled by steam pressure. Interlocks minimise any possibility of an inadvertent actuation of the steam dump system. This paper discusses relationships between the designed (described) characteristics of the plant and the data obtained during startup and/or the first ten years of operation. (author)

  3. Lack of CD4+ T cell percent decrease in alemtuzumab-treated multiple sclerosis patients with persistent relapses.

    Science.gov (United States)

    Rolla, Simona; De Mercanti, Stefania Federica; Bardina, Valentina; Horakova, Dana; Habek, Mario; Adamec, Ivan; Cocco, Eleonora; Annovazzi, Pietro; Vladic, Anton; Novelli, Francesco; Durelli, Luca; Clerico, Marinella

    2017-12-15

Alemtuzumab, a highly effective treatment for relapsing-remitting multiple sclerosis (RRMS), induces lymphopenia, especially of CD4+ T cells. Here, we report the atypical CD4+ T-cell population behaviour of two patients with persistent disease activity despite repeated alemtuzumab treatments. Whereas lymphocyte counts decreased and fluctuated according to alemtuzumab administration, their CD4+ cell percentage was not or only mildly affected and was already slightly below the lower normal limit before alemtuzumab. These cases call for further studies to investigate whether evaluation of the CD4+ cell percentage could represent a helpful tool for assessing the individual clinical response to alemtuzumab. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Percent Coverage

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Percent Coverage is a spreadsheet that keeps track of and compares the number of vessels that have departed with and without observers to the numbers of vessels...

  5. Weakly and strongly polynomial algorithms for computing the maximum decrease in uniform arc capacities

    Directory of Open Access Journals (Sweden)

    Ghiyasvand Mehdi

    2016-01-01

In this paper, a new problem on a directed network is presented. Let D be a feasible network such that all arc capacities are equal to U. Given a t > 0, the network D with arc capacities U − t is called the t-network. The goal of the problem is to compute the largest t such that the t-network is feasible. First, we present a weakly polynomial time algorithm to solve this problem, which runs in O(log(nU)) maximum flow computations, where n is the number of nodes. Then, an O(m²n)-time approach is presented, where m is the number of arcs. Both the weakly and strongly polynomial algorithms are inspired by McCormick and Ervolina (1994).
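The weakly polynomial idea can be sketched as a binary search over t, with each candidate checked by one maximum-flow computation. The sketch below is a simplified illustration, not the authors' algorithm: it uses a single-source, single-sink network that must still carry a given demand, a plain Edmonds-Karp max-flow routine, and made-up function names (`max_flow`, `largest_uniform_decrease`) and toy numbers.

```python
from collections import deque

def max_flow(n, arcs, s, t):
    """Edmonds-Karp maximum flow. arcs: list of (u, v, capacity) triples."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in arcs:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # Find the bottleneck along the path, then augment
        v, aug = t, float('inf')
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= aug
            cap[v][u] += aug
            v = u
        flow += aug

def largest_uniform_decrease(n, arc_list, U, s, t, demand):
    """Binary search for the largest integer decrease d in [0, U] such that
    the network with every arc capacity set to U - d still carries `demand`.
    Returns -1 if even d = 0 is infeasible. O(log U) max-flow calls."""
    lo, hi, best = 0, U, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        arcs = [(u, v, U - mid) for (u, v) in arc_list]
        if max_flow(n, arcs, s, t) >= demand:
            best = mid      # feasible: try a larger decrease
            lo = mid + 1
        else:
            hi = mid - 1    # infeasible: back off
    return best
```

For example, with two arc-disjoint s-t paths of capacity U = 10 and a demand of 6, each path must retain capacity at least 3, so the largest decrease is 7.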

  6. Optimization of hip joint replacement location to decrease maximum von Mises stress

    International Nuclear Information System (INIS)

    Pourjamali, H.; Najarian, S.; Katoozian, H. R.

    2001-01-01

Hip replacement is used for inoperable femur head injuries and femur fractures where internal fixation cannot be used. This operation is one of the most common orthopedic operations, and much research has been done on it, including optimization of implant and cement materials and composites, as well as implant shape optimization. This study was designed to optimize the artificial hip joint position (placement) to decrease the maximal von Mises stress. First, a model of the femur and implant was made, and then a computer program was written with the ability to change the position of the implant through an acceptable range in the femur. In each of these positions, the program simulated the femur and implant according to the finite element method; the applied forces were weight and muscle traction. Our findings show that a small deviation of the implant from the femur bone center causes a considerable decrease in von Mises stress, which consequently results in longer maintenance of the implant

  7. An increased rectal maximum tolerable volume and long anal canal are associated with poor short-term response to biofeedback therapy for patients with anismus with decreased bowel frequency and normal colonic transit time.

    Science.gov (United States)

    Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W

    2000-10-01

Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still fail to respond to biofeedback, and little is known about the factors that predict response. We evaluated the factors associated with poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Any differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 ± 0.5 vs. 4.08 ± 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 ± 87 vs. 302 ± 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and an increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback for patients with anismus with decreased bowel frequency and normal colonic transit time.

  8. Percent Wetland Cover

    Data.gov (United States)

    U.S. Environmental Protection Agency — Wetlands act as filters, removing or diminishing the amount of pollutants that enter surface water. Higher values for percent of wetland cover (WETLNDSPCT) may be...

  9. Percent Wetland Cover (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — Wetlands act as filters, removing or diminishing the amount of pollutants that enter surface water. Higher values for percent of wetland cover (WETLNDSPCT) may be...

  10. Inspiration: One Percent and Rising

    Science.gov (United States)

    Walling, Donovan R.

    2009-01-01

    Inventor Thomas Edison once famously declared, "Genius is one percent inspiration and ninety-nine percent perspiration." If that's the case, then the students the author witnessed at the International Student Media Festival (ISMF) last November in Orlando, Florida, are geniuses and more. The students in the ISMF pre-conference workshop…

  11. The Algebra of the Cumulative Percent Operation.

    Science.gov (United States)

    Berry, Andrew J.

    2002-01-01

Discusses how to help students avoid some pervasive reasoning errors in solving cumulative percent problems. Discusses the meaning of "a% + b%," the additive inverse of "a%," and other useful applications. Emphasizes the operational aspect of the cumulative percent concept. (KHR)
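The operational point can be made concrete: applying a% and then b% is a single net change of a + b + ab/100 percent, which is why successive percents do not simply add. A minimal sketch (the function names are mine, not the article's):

```python
def compose(a, b):
    """Net percent change from applying a% followed by b%: a + b + ab/100."""
    return a + b + a * b / 100

def inverse(a):
    """The percent change that undoes a% (its 'additive inverse' under
    composition): solves compose(a, b) == 0 for b."""
    return -100 * a / (100 + a)

# A 10% rise followed by a 10% fall is a net 1% fall, not a wash:
print(compose(10, -10))   # -1.0
# Undoing a 25% rise takes a 20% fall, not a 25% one:
print(inverse(25))        # -20.0
```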

  12. A Conceptual Model for Solving Percent Problems.

    Science.gov (United States)

    Bennett, Albert B., Jr.; Nelson, L. Ted

    1994-01-01

    Presents an alternative method to teaching percent problems which uses a 10x10 grid to help students visualize percents. Offers a means of representing information and suggests different approaches for finding solutions. Includes reproducible student worksheet. (MKR)

  13. Beyond Marbles: Percent Change and Social Justice

    Science.gov (United States)

    Denny, Flannery

    2013-01-01

    In the author's eighth year of teaching, she hit a wall teaching percent change. Percent change is one of the few calculations taught in math classes that shows up regularly in the media, and one that she often does in her head to make sense of the world around her. Despite this, she had been teaching percent change using textbook problems about…

  14. Use of biofuels in road transport decreases

    International Nuclear Information System (INIS)

    Segers, R.

    2011-01-01

The use of biofuels decreased from 3.5 percent of all gasoline and diesel used by road transport in 2009 to 2 percent in 2010. In particular the use of biodiesel decreased, dropping from 3.5 to 1.5 percent. The use of biogasoline remained stable at 3 percent of all gasoline use.

  15. 9 CFR 381.168 - Maximum percent of skin in certain poultry products.

    Science.gov (United States)

    2010-01-01

    ... poultry products. 381.168 Section 381.168 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION POULTRY PRODUCTS INSPECTION REGULATIONS Definitions and...

  16. An evaluation of 10 percent and 20 percent benzocaine gels in patients with acute toothaches

    Science.gov (United States)

    Hersh, Elliot V.; Ciancio, Sebastian G.; Kuperstein, Arthur S.; Stoopler, Eric T.; Moore, Paul A.; Boynes, Sean G.; Levine, Steven C.; Casamassimo, Paul; Leyva, Rina; Mathew, Tanya; Shibly, Othman; Creighton, Paul; Jeffers, Gary E.; Corby, Patricia M.A.; Turetzky, Stanley N.; Papas, Athena; Wallen, Jillian; Idzik-Starr, Cynthia; Gordon, Sharon M.

    2013-01-01

    Background The authors evaluated the efficacy and tolerability of 10 percent and 20 percent benzocaine gels compared with those of a vehicle (placebo) gel for the temporary relief of toothache pain. They also assessed the compliance with the label dose administration directions on the part of participants with toothache pain. Methods Under double-masked conditions, 576 participants self-applied study gel to an open tooth cavity and surrounding oral tissues. Participants evaluated their pain intensity and pain relief for 120 minutes. The authors determined the amount of gel the participants applied. Results The responders’ rates (the primary efficacy parameter), defined as the percentage of participants who had an improvement in pain intensity as exhibited by a pain score reduction of at least one unit on the dental pain scale from baseline for two consecutive assessments any time between the five- and 20-minute points, were 87.3 percent, 80.7 percent and 70.4 percent, respectively, for 20 percent benzocaine gel, 10 percent benzocaine gel and vehicle gel. Both benzocaine gels were significantly (P ≤ .05) better than vehicle gel; the 20 percent benzocaine gel also was significantly (P ≤ .05) better than the 10 percent benzocaine gel. The mean amount of gel applied was 235.6 milligrams, with 88.2 percent of participants applying 400 mg or less. Conclusions Both 10 percent and 20 percent benzocaine gels were more efficacious than the vehicle gel, and the 20 percent benzocaine gel was more efficacious than the 10 percent benzocaine gel. All treatments were well tolerated by participants. Practical Implications Patients can use 10 percent and 20 percent benzocaine gels to temporarily treat toothache pain safely. PMID:23633700

  17. Matter power spectrum and the challenge of percent accuracy

    International Nuclear Information System (INIS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Reed, Darren S.; Onions, Julian; Pearce, Frazer R.; Smith, Robert E.; Springel, Volker; Scoccimarro, Roman

    2016-01-01

Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc^−1 and to within three percent at k ≤ 10 h Mpc^−1. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc^−1. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^−1 Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10^9 h^−1 M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.

  18. Matter power spectrum and the challenge of percent accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Reed, Darren S. [Institute for Computational Science, University of Zurich, Winterthurerstrasse 190, 8057 Zurich (Switzerland); Onions, Julian; Pearce, Frazer R. [School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Smith, Robert E. [Department of Physics and Astronomy, University of Sussex, Brighton, BN1 9QH (United Kingdom); Springel, Volker [Heidelberger Institut für Theoretische Studien, 69118 Heidelberg (Germany); Scoccimarro, Roman, E-mail: aurel@physik.uzh.ch, E-mail: teyssier@physik.uzh.ch, E-mail: dpotter@physik.uzh.ch, E-mail: stadel@physik.uzh.ch, E-mail: julian.onions@nottingham.ac.uk, E-mail: reed@physik.uzh.ch, E-mail: r.e.smith@sussex.ac.uk, E-mail: volker.springel@h-its.org, E-mail: Frazer.Pearce@nottingham.ac.uk, E-mail: rs123@nyu.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, NY 10003, New York (United States)

    2016-04-01

Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc^−1 and to within three percent at k ≤ 10 h Mpc^−1. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc^−1. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^−1 Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10^9 h^−1 M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.

  19. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  20. How I Love My 80 Percenters

    Science.gov (United States)

    Maturo, Anthony J.

    2002-01-01

Don't ever take your support staff for granted. By support staff, I mean the people in personnel, logistics, and finance; the ones who can make things happen with a phone call or a signature, or by the same token frustrate you to no end by their inaction; these are people you must depend on. I've spent a lot of time thinking about how to cultivate relationships with my support staff that work to the advantage of both of us. The most important thing that I have learned working with people, any people--and I will tell you how I learned this in a minute--is that there are some folks you just can't motivate, so forget it, don't try; others you certainly can, with a little psychology and some effort; and the best of the bunch, what I call the 80 percenters, you don't need to motivate because they're already on the team and performing beautifully. The ones you can't change are rocks. Face up to it, and just kick them out of your way. I have a reputation with the people who don't want to perform or be part of the team. They don't come near me. If someone's a rock, I pick up on it right away, and I will walk around him or her to find someone better. The ones who can be motivated I take time to nurture. I consider them my projects. A lot of times these wannabes are people who want to help but don't know how. Listen, you can work with them. Lots of people in organizations have the mindset that all that matters are the regulations. God forbid if you ever work outside those regulations. They've got one foot on that regulation and they're holding it tight like a baby holds a blanket. What you're looking for is that first sign that their minds are opening. Usually you hear it in their vocabulary. What used to sound like "We can't do that ... the regulations won't allow it ... we have never done this before," well, suddenly that changes to "We have options ... let's take a look at the options ... let me research this and get back to you." The 80 percenters you want to nurture too, but

  1. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
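For a fixed tree, the MP score of a single character can be computed exactly with Fitch's small-parsimony algorithm; the NP-hard part the paper addresses is the search over topologies. A minimal sketch of the scoring step only (the tree encoding and function name are my own, not the paper's):

```python
def fitch(tree, states):
    """Fitch small parsimony for one character on a rooted binary tree.
    tree: a leaf name (str) or a pair (left, right) of subtrees.
    Returns (number of state changes, candidate state set at the root)."""
    if isinstance(tree, str):               # leaf: known state, zero cost
        return 0, {states[tree]}
    (cl, sl), (cr, sr) = fitch(tree[0], states), fitch(tree[1], states)
    common = sl & sr
    if common:                              # children can agree: no change
        return cl + cr, common
    return cl + cr + 1, sl | sr             # disagreement costs one change

# One character over four taxa on the tree ((A,B),(C,D)):
score, _ = fitch((("A", "B"), ("C", "D")),
                 {"A": "x", "B": "x", "C": "y", "D": "x"})
print(score)  # 1
```

Running this over every character and summing gives the parsimony score of that one topology; MP then asks for the topology minimizing this sum.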

  2. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  3. Appetite - decreased

    Science.gov (United States)

    Loss of appetite; Decreased appetite; Anorexia ... Any illness can reduce appetite. If the illness is treatable, the appetite should return when the condition is cured. Loss of appetite can cause weight ...

  4. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less than systems that record data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  5. Analysis association of milk fat and protein percent in quantitative ...

    African Journals Online (AJOL)

    Analysis association of milk fat and protein percent in quantitative trait locus ... African Journal of Biotechnology ... Protein and fat percent as content of milk are high-priority criteria for financial aims and selection of programs in dairy cattle.

  6. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  7. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  8. 26 CFR 301.6226(b)-1 - 5-percent group.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false 5-percent group. 301.6226(b)-1 Section 301.6226... ADMINISTRATION PROCEDURE AND ADMINISTRATION Assessment In General § 301.6226(b)-1 5-percent group. (a) In general. All members of a 5-percent group shall join in filing any petition for judicial review. The...

  9. Characterization of the uranium--2 weight percent molybdenum alloy

    International Nuclear Information System (INIS)

    Hemperly, V.C.

    1976-01-01

    The uranium-2 wt percent molybdenum alloy was prepared, processed, and age hardened to meet a minimum 930-MPa yield strength (0.2 percent) with a minimum of 10 percent elongation. These mechanical properties were obtained with a carbon level up to 300 ppM in the alloy. The tensile-test ductility is lowered by the humidity of the laboratory atmosphere

  10. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  11. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  12. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation.

  13. CRED Cumulative Map of Percent Scleractinian Coral Cover at Saipan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  14. CRED Cumulative Map of Percent Scleractinian Coral Cover at Sarigan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  15. CRED Cumulative Map of Percent Scleractinian Coral Cover at Tutuila

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  16. CRED Cumulative Map of Percent Scleractinian Coral Cover at Anatahan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  17. CRED Cumulative Map of Percent Scleractinian Coral Cover at Alamagan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  18. CRED Cumulative Map of Percent Scleractinian Coral Cover at Agrihan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  19. CRED Cumulative Map of Percent Scleractinian Coral Cover at Asuncion

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  20. CRED Cumulative Map of Percent Scleractinian Coral Cover at Aguijan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  1. CRED Cumulative Map of Percent Scleractinian Coral Cover at Pagan

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  2. 26 CFR 1.1348-2 - Computation of the fifty-percent maximum tax on earned income.

    Science.gov (United States)

    2010-04-01

    ... compensation for A's personal services rendered by him in his laundry business would be $12,000. The net... earned income. 1.1348-2 Section 1.1348-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE... account in determining the net profits of a trade or business in which both personal services and capital...

  3. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
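The hill-climbing idea behind such trackers can be illustrated with a minimal perturb-and-observe loop. The PV curve below is a toy stand-in (not the paper's measured panel), and all names and numbers are illustrative assumptions:

```python
def pv_power(v):
    """Toy PV panel curve (an assumption, not real panel data): current
    stays near 5 A until a knee close to the 40 V open-circuit voltage,
    so power v*i peaks at an interior voltage."""
    i = max(0.0, 5.0 * (1.0 - (v / 40.0) ** 8))
    return v * i

def perturb_and_observe(v0=20.0, step=0.5, iters=200):
    """Hill-climbing MPPT: perturb the operating voltage by a fixed step,
    keep the direction while power rises, reverse it when power falls."""
    v, direction = v0, +1
    p = pv_power(v)
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction      # stepped past the peak: turn around
        v, p = v_new, p_new
    return v, p
```

The operating point climbs toward the maximum power point and then settles into a small oscillation of one step around it, which is the characteristic behaviour (and the main drawback) of fixed-step perturb-and-observe tracking.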

  4. The relationships between percent body fat and other ...

    African Journals Online (AJOL)

    The relationships between percent body fat and other anthropometric nutritional predictors among male and female children in Nigeria. ... A weak significant positive correlation was observed between the percent body fat and height – armspan ratio ... There was evidence of overweight and obesity in both children. The mid ...

  5. The Texas Ten Percent Plan's Impact on College Enrollment

    Science.gov (United States)

    Daugherty, Lindsay; Martorell, Paco; McFarlin, Isaac, Jr.

    2014-01-01

    The Texas Ten Percent Plan (TTP) provides students in the top 10 percent of their high-school class with automatic admission to any public university in the state, including the two flagship schools, the University of Texas at Austin and Texas A&M. Texas created the policy in 1997 after a federal appellate court ruled that the state's previous…

  6. Total energy consumption in Finland increased by one percent

    International Nuclear Information System (INIS)

    Timonen, L.

    2000-01-01

The total energy consumption in Finland increased by less than one percent in 1999, to 1310 PJ, corresponding to about 31 million toe. Electric power consumption increased moderately, by 1.6%, which is less than the growth of the gross national product (3.5%). The final consumption of energy grew even less, by only 0.5%. Imports of electric power increased by 19% in 1999, owing to the availability of low-priced electric power on the Nordic electricity markets. Nuclear power generation increased by 5% and the consumption of wood-based fuels by 3%. Nuclear power generation grew because of the increased output capacity and good operability of the power plants. Wind power production doubled, but its share of the total energy consumption is only about 0.01%. Peat consumption decreased by 12% and the consumption of hydroelectric power by 15%; the decrease in hydroelectric production was compensated by increased imports of electric power. The consumption of the fossil fuels (coal, oil and natural gas) remained nearly the same as in 1998. Gasoline consumption decreased, however, but diesel oil consumption increased due to increased road transport. The share of fossil fuels was nearly half of the total energy consumption. The consumption of renewable energy sources remained nearly the same, at 23% if the share of peat is excluded, and at 30% if it is included. Wood-based fuels are the most significant type of renewable fuel; their share in 1999 was over 80% of the total usage of renewable energy sources. Carbon dioxide emissions in Finland decreased in 1999 by 1.0 million tons, to a total of 56 million tons; the decrease was mainly due to the lower peat consumption. The final consumption of energy increased by 0.5%, to about 1019 PJ. Industry is the main consumer of energy.

  7. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  8. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over …
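The Mean Energy Model mentioned in this abstract has a well-known closed form: the entropy maximizer under a mean-energy constraint is the Gibbs distribution p_i ∝ exp(-β E_i), with β tuned so the mean energy hits the target. A small numerical sketch of this textbook fact (not code from the paper; the three energy levels and the target mean are made up):

```python
# Maximum entropy under a mean-energy constraint: the maximizer has the Gibbs
# form p_i ∝ exp(-beta * E_i). We bisect on beta to satisfy the constraint.
# Generic textbook illustration; energies and target are invented.
import math

def gibbs(energies, beta):
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def maxent_mean_energy(energies, target_mean, lo=-50.0, hi=50.0):
    """Bisect on beta until the Gibbs mean energy equals the target."""
    def mean(beta):
        p = gibbs(energies, beta)
        return sum(pi * e for pi, e in zip(p, energies))
    # mean(beta) is strictly decreasing in beta, so plain bisection works
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    return gibbs(energies, 0.5 * (lo + hi))

p = maxent_mean_energy([0.0, 1.0, 2.0], target_mean=1.0)
# With this symmetric constraint the maximizer is the uniform distribution.
```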

  9. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  10. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  11. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  12. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  13. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (Ramsay, 1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.

  14. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  16. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  17. National Land Cover Database (NLCD) Percent Tree Canopy Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The National Land Cover Database (NLCD) Percent Tree Canopy Collection is a product of the U.S. Forest Service (USFS), and is produced through a cooperative project...

  18. National Land Cover Database (NLCD) Percent Developed Imperviousness Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The National Land Cover Database (NLCD) Percent Developed Imperviousness Collection is produced through a cooperative project conducted by the Multi-Resolution Land...

  19. Determination of percent calcium carbonate in calcium chromate

    International Nuclear Information System (INIS)

    Middleton, H.W.

    1979-01-01

The precision, accuracy and reliability of the macro-combustion method are superior to those of the Knorr alkalimetric method, and it is faster. It also significantly reduces the calcium chromate waste accrual problem. The macro-combustion method has been adopted as the official method for determination of percent calcium carbonate in thermal battery grade anhydrous calcium chromate and percent calcium carbonate in quicklime used in the production of calcium chromate. The apparatus and procedure can be used to measure the percent carbonate in inorganic materials other than calcium chromate. With simple modifications in the basic apparatus and procedure, the percent carbon and hydrogen can be measured in many organic materials, including polymers and polymeric formulations. 5 figures, 5 tables

  20. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  1. Modeling percent tree canopy cover: a pilot study

    Science.gov (United States)

    John W. Coulston; Gretchen G. Moisen; Barry T. Wilson; Mark V. Finco; Warren B. Cohen; C. Kenneth Brewer

    2012-01-01

    Tree canopy cover is a fundamental component of the landscape, and the amount of cover influences fire behavior, air pollution mitigation, and carbon storage. As such, efforts to empirically model percent tree canopy cover across the United States are a critical area of research. The 2001 national-scale canopy cover modeling and mapping effort was completed in 2006,...

  2. Age-specific association between percent body fat and pulmonary ...

    African Journals Online (AJOL)

This study describes the association between percent body fat and pulmonary function (tidal volume) among twenty apparently normal male children aged 4 years and twenty male children aged 10 years in Ogbomoso. The mean functional residual capacity of the lung in male children aged 10 years was significantly higher ...

  3. Analysis association of milk fat and protein percent in quantitative ...

    African Journals Online (AJOL)

    SAM

    2014-05-14

African Journal of Biotechnology, full length article, May 14, 2014. ... quantitative trait loci (QTLs) on chromosomes 1, 6, 7 and 20 in ... Protein and fat percent of milk are high-priority criteria for financial aims and for selection programs ...

  4. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  5. School Designed To Use 80 Percent Less Energy

    Science.gov (United States)

    American School and University, 1975

    1975-01-01

    The new Terraset Elementary School in Reston, Virginia, uses earth as a cover for the roof area and for about 80 percent of the wall area. A heat recovery system will be used with solar collectors playing a primary role in heating and cooling. (Author/MLF)

  6. Serum Predictors of Percent Lean Mass in Young Adults.

    Science.gov (United States)

    Lustgarten, Michael S; Price, Lori L; Phillips, Edward M; Kirn, Dylan R; Mills, John; Fielding, Roger A

    2016-08-01

    Lustgarten, MS, Price, LL, Phillips, EM, Kirn, DR, Mills, J, and Fielding, RA. Serum predictors of percent lean mass in young adults. J Strength Cond Res 30(8): 2194-2201, 2016-Elevated lean (skeletal muscle) mass is associated with increased muscle strength and anaerobic exercise performance, whereas low levels of lean mass are associated with insulin resistance and sarcopenia. Therefore, studies aimed at obtaining an improved understanding of mechanisms related to the quantity of lean mass are of interest. Percent lean mass (total lean mass/body weight × 100) in 77 young subjects (18-35 years) was measured with dual-energy x-ray absorptiometry. Twenty analytes and 296 metabolites were evaluated with the use of the standard chemistry screen and mass spectrometry-based metabolomic profiling, respectively. Sex-adjusted multivariable linear regression was used to determine serum analytes and metabolites significantly (p ≤ 0.05 and q ≤ 0.30) associated with the percent lean mass. Two enzymes (alkaline phosphatase and serum glutamate oxaloacetate aminotransferase) and 29 metabolites were found to be significantly associated with the percent lean mass, including metabolites related to microbial metabolism, uremia, inflammation, oxidative stress, branched-chain amino acid metabolism, insulin sensitivity, glycerolipid metabolism, and xenobiotics. Use of sex-adjusted stepwise regression to obtain a final covariate predictor model identified the combination of 5 analytes and metabolites as overall predictors of the percent lean mass (model R = 82.5%). Collectively, these data suggest that a complex interplay of various metabolic processes underlies the maintenance of lean mass in young healthy adults.

  7. New formula for calculation of cobalt-60 percent depth dose

    International Nuclear Information System (INIS)

    Tahmasebi Birgani, M. J.; Ghorbani, M.

    2005-01-01

On the basis of percent depth dose calculation, dosimetry in radiotherapy has an important role to play in reducing the chance of tumor recurrence. The aim of this study is to introduce a new formula for calculating the central axis percent depth doses of a Cobalt-60 beam. Materials and Methods: In the present study, based on the British Journal of Radiology table, nine new formulas were developed and evaluated for depths of 0.5-30 cm and fields of (4*4) - (45*45) cm 2 . To evaluate the agreement between the formulas and the table, the average of the absolute differences between the values was used, and the formula with the least average was selected as the best fitted formula. Microsoft Excel 2000 and DataFit 8.0 software were used to perform the calculations. Results: The results of this study indicated that one of the nine formulas gave a better agreement with the percent depth doses listed in the table of the British Journal of Radiology. The new formula has two parts in terms of log (A/P): the first part is a linear function of depth in the range of 0.5 to 5 cm, and the other is a second-order polynomial in depth in the range of 6 to 30 cm. The average of the differences between the tabulated and the calculated data using the formula (Δ) is equal to 0.3152. Discussion and Conclusion: The percent depth dose data calculated with this formula therefore agree better with the published data for a Cobalt-60 source. This formula could be used to calculate the percent depth dose for depths and field sizes not listed in the British Journal of Radiology table

  8. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We study the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
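For reference, First-Fit under the two orderings named in this abstract can be sketched as below. The items and capacity are made-up values, and only the classical feasibility rule is used; the paper's maximum resource accounting is not reproduced here. On this instance the increasing order opens more bins than the decreasing one, which is the desirable outcome in the maximum resource setting.

```python
# First-Fit under increasing vs decreasing order -- a generic sketch of the two
# algorithms named in the abstract, not the paper's analysis. In the maximum
# resource variant, a LARGER bin count is better, the opposite of classical packing.

def first_fit(items, capacity=1.0):
    """Place each item into the first open bin with room; open a new bin if none fits."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin had room
            bins.append([item])
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]              # hypothetical item sizes
ffi = first_fit(sorted(items))                 # First-Fit-Increasing: 3 bins
ffd = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing: 2 bins
```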

  9. Matter power spectrum and the challenge of percent accuracy

    OpenAIRE

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2015-01-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day $N$-body methods, identifying main potential error sources from the set-up of initial conditions to...

  10. Relationship between breast sound speed and mammographic percent density

    Science.gov (United States)

    Sak, Mark; Duric, Nebojsa; Boyd, Norman; Littrup, Peter; Myc, Lukasz; Faiz, Muhammad; Li, Cuiping; Bey-Knight, Lisa

    2011-03-01

    Despite some shortcomings, mammography is currently the standard of care for breast cancer screening and diagnosis. However, breast ultrasound tomography is a rapidly developing imaging modality that has the potential to overcome the drawbacks of mammography. It is known that women with high breast densities have a greater risk of developing breast cancer. Measuring breast density is accomplished through the use of mammographic percent density, defined as the ratio of fibroglandular to total breast area. Using an ultrasound tomography (UST) prototype, we created sound speed images of the patient's breast, motivated by the fact that sound speed in a tissue is proportional to the density of the tissue. The purpose of this work is to compare the acoustic performance of the UST system with the measurement of mammographic percent density. A cohort of 251 patients was studied using both imaging modalities and the results suggest that the volume averaged breast sound speed is significantly related to mammographic percent density. The Spearman correlation coefficient was found to be 0.73 for the 175 film mammograms and 0.69 for the 76 digital mammograms obtained. Since sound speed measurements do not require ionizing radiation or physical compression, they have the potential to form the basis of a safe, more accurate surrogate marker of breast density.
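The Spearman coefficient quoted in this abstract is a rank correlation; it can be computed from scratch as below. The sound-speed and density values here are invented for illustration (the study's patient data are not public).

```python
# Spearman rank correlation computed from scratch: Pearson correlation of the
# ranks. The data below are hypothetical, not the study's measurements.

def ranks(xs):
    """1-based ranks, with ties given their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                 # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

sound_speed = [1.42, 1.48, 1.51, 1.44, 1.55]     # hypothetical km/s values
percent_density = [12.0, 40.0, 35.0, 20.0, 55.0]
rho = spearman(sound_speed, percent_density)     # 0.9 for these toy data
```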

  11. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  12. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  13. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  14. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

Journal of Physics, Vol. 60, No. 3, March 2003, pp. 415-422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  15. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  17. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr

  18. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
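As a point of comparison, the best-known iterative scheme sharing the positivity property noted in this abstract is the Richardson-Lucy (EM) update for Poisson-distributed counts. The sketch below is that standard update on a toy 2-bin response matrix; it is not the unfolding theory of this paper, and the response matrix and counts are invented.

```python
# Iterative Poisson maximum-likelihood unfolding via the classic Richardson-Lucy
# (EM) multiplicative update, which keeps every estimate positive. Generic
# illustration only -- NOT the algorithm of the paper. R[i][j] is the detector
# response: probability that true bin j is observed in measured bin i.

def unfold(counts, R, iters=200):
    m, n = len(R), len(R[0])
    est = [sum(counts) / n] * n                 # positive initial guess
    for _ in range(iters):
        folded = [sum(R[i][j] * est[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            scale = sum(R[i][j] * counts[i] / folded[i] for i in range(m))
            norm = sum(R[i][j] for i in range(m))
            est[j] *= scale / norm              # multiplicative: stays positive
    return est

R = [[0.8, 0.2], [0.2, 0.8]]                    # toy 2x2 smearing matrix
observed = [0.8 * 100 + 0.2 * 50, 0.2 * 100 + 0.8 * 50]   # true spectrum [100, 50]
est = unfold(observed, R)
```

With exact (noise-free) counts and an invertible response, the iteration converges to the true spectrum [100, 50].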

  19. One Percent Determination of the Primordial Deuterium Abundance

    Science.gov (United States)

    Cooke, Ryan J.; Pettini, Max; Steidel, Charles C.

    2018-03-01

    We report a reanalysis of a near-pristine absorption system, located at a redshift z_abs = 2.52564 toward the quasar Q1243+307, based on the combination of archival and new data obtained with the HIRES echelle spectrograph on the Keck telescope. This absorption system, which has an oxygen abundance [O/H] = −2.769 ± 0.028 (≃1/600 of the solar abundance), is among the lowest metallicity systems currently known where a precise measurement of the deuterium abundance is afforded. Our detailed analysis of this system concludes, on the basis of eight D I absorption lines, that the deuterium abundance of this gas cloud is log10(D/H) = −4.622 ± 0.015, in very good agreement with the results previously reported by Kirkman et al., but improving the precision of this single measurement by a factor of ∼3.5. Combining this new estimate with our previous sample of six high-precision and homogeneously analyzed D/H measurements, we deduce that the primordial deuterium abundance is log10(D/H)_P = −4.5974 ± 0.0052 or, expressed as a linear quantity, 10^5 (D/H)_P = 2.527 ± 0.030; this value corresponds to a one percent determination of the primordial deuterium abundance. Combining our result with a big bang nucleosynthesis (BBN) calculation that uses the latest nuclear physics input, we find that the baryon density derived from BBN agrees to within 2σ of the latest results from the Planck cosmic microwave background data. Based on observations collected at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
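The unit conversion quoted in the abstract can be checked directly. The sketch below hard-codes the reported numbers, converts log10(D/H)_P to the linear quantity, and propagates the uncertainty to first order:

```python
import math

# Reported primordial deuterium abundance (Cooke et al. 2018):
# log10(D/H)_P = -4.5974 +/- 0.0052
log_dh, log_err = -4.5974, 0.0052

# Convert to the linear quantity 10^5 (D/H)_P.
linear = 1e5 * 10 ** log_dh

# First-order error propagation: sigma_linear = ln(10) * linear * sigma_log.
linear_err = math.log(10) * linear * log_err

print(f"10^5 (D/H)_P = {linear:.3f} +/- {linear_err:.3f}")
# Reproduces the quoted 2.527 +/- 0.030, i.e. a ~1.2 percent determination.
```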

  20. 25 CFR 141.36 - Maximum finance charges on pawn transactions.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Maximum finance charges on pawn transactions. 141.36... PRACTICES ON THE NAVAJO, HOPI AND ZUNI RESERVATIONS Pawnbroker Practices § 141.36 Maximum finance charges on pawn transactions. No pawnbroker may impose an annual finance charge greater than twenty-four percent...

  1. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data, which affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  2. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is considered one of the most important technical and economic aspects that engineers and designers of pumping stations and conveyance pipelines should take into account. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  3. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationships among species. Model species for molecular phylogenetic studies include yeasts and viruses, whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single-gene trees of seven yeast species as well as single-gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the "true tree" under all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one, in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
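The "maximum gene-support tree" selection step described above amounts to taking the modal topology across single-gene trees. A minimal sketch, with hypothetical Newick strings standing in for real gene trees:

```python
from collections import Counter

# Hypothetical single-gene tree topologies (Newick strings), e.g. one per
# orthologous gene; taxon names and counts are illustrative only.
gene_trees = [
    "((A,B),(C,D));",
    "((A,B),(C,D));",
    "((A,C),(B,D));",
    "((A,B),(C,D));",
    "((A,D),(B,C));",
]

# The maximum gene-support tree is the topology recovered by the largest
# number of genes.
counts = Counter(gene_trees)
mgs_tree, support = counts.most_common(1)[0]

print(mgs_tree, support)  # ((A,B),(C,D)); supported by 3 of 5 genes
```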

  4. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  5. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based ...

  6. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m−2) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m−1 K−1). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm−1 under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
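The simplified balance the abstract invokes can be sketched numerically. The emissivity, sensible-heat transfer coefficient, and the neglect of ground and latent heat fluxes below are illustrative assumptions, not values from the paper; with them, a bisection solve lands the equilibrium surface temperature well above the screen air temperature, in the extreme regime the abstract discusses:

```python
# Illustrative steady-state surface energy balance for a dry surface:
#   absorbed shortwave + incoming longwave = emitted longwave + sensible heat.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_abs = 1000.0    # absorbed shortwave flux, W m^-2 (upper value from the text)
T_air = 328.15    # screen air temperature, K (55 C)
eps = 0.95        # assumed surface emissivity
h = 5.0           # assumed sensible heat transfer coefficient, W m^-2 K^-1

def residual(T):
    # Net flux into the surface; zero at equilibrium.
    return S_abs + eps * SIGMA * T_air**4 - eps * SIGMA * T**4 - h * (T - T_air)

# Bisection: the residual is strictly decreasing in T.
lo, hi = T_air, 500.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid

print(f"equilibrium surface temperature ~ {lo - 273.15:.0f} C")
```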

  7. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of n driver output lines. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  8. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
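The Planck-value luminosity quoted above, L_P = c^5/G, is a pure combination of constants and easy to evaluate:

```python
# Numerical value of the conjectured maximum luminosity L_P = c^5 / G,
# using CODATA constants.
c = 2.99792458e8   # speed of light, m/s
G = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2

L_P = c ** 5 / G
print(f"L_P = {L_P:.3e} W")  # ~3.63e52 W
```

The largest luminosity observed in the critical-collapse simulations the abstract cites, about 0.2 L_P, is then of order 7e51 W.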

  9. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
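The regularizer described above needs an estimate of the mutual information between discrete classification responses and labels. A minimal plug-in (empirical-count) estimator, not the paper's entropy-estimation scheme, can be sketched as:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in nats from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with empirical probabilities.
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

# Perfectly informative responses: knowing the response removes all
# uncertainty about the label, so I(X;Y) = H(Y) = ln 2 for balanced labels.
labels    = [0, 0, 1, 1]
responses = [0, 0, 1, 1]
print(mutual_information(responses, labels))  # ln(2) ~ 0.693
```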

  10. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  11. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  12. The NASA Plan: To award eight percent of prime and subcontracts to socially and economically disadvantaged businesses

    Science.gov (United States)

    1990-01-01

    It is NASA's intent to provide small disadvantaged businesses, including women-owned, historically black colleges and universities and minority education institutions the maximum practicable opportunity to receive a fair proportion of NASA prime and subcontracted awards. Annually, NASA will establish socioeconomic procurement goals including small disadvantaged business goals, with a target of reaching the eight percent level by the end of FY 1994. The NASA Associate Administrators, who are responsible for the programs at the various NASA Centers, will be held accountable for full implementation of the socioeconomic procurement plans. Various aspects of this plan, including its history, are discussed.

  13. Efficacy of 2.5 percent and 1.25 percent Povidone-Iodine Solution for Prophylaxis of Ophthalmia Neonatorum

    International Nuclear Information System (INIS)

    Khan, F. A.; Hussain, M. A.; Niazi, S. P. K.; Haq, Z. U.; Akhtar, N.

    2016-01-01

    Objective: To determine the efficacy of 2.5 percent and 1.25 percent Povidone-Iodine solution for Ophthalmia neonatorum prophylaxis. Study Design: Interventional study. Place and Duration of Study: Eye Department, Combined Military Hospital, Sargodha, from May to November 2014. Methodology: A total of 200 eyes of 100 newborn babies were enrolled and divided into two groups of 100 right eyes and 100 left eyes. A conjunctival swab for bacterial culture was taken within 30 minutes after delivery. A single drop of 2.5 percent Povidone-Iodine was then placed in the right eye, while in the left eye a single drop of 1.25 percent Povidone-Iodine was placed. Thirty minutes after placing the Povidone-Iodine, a conjunctival swab was again taken. A bacterial suspension was prepared from each swab for determining bacterial counts. The bacterial suspension was inoculated on yeast extract agar and the number of colony-forming units was counted. For each culture, the numbers of colony-forming units before and after instillation of 2.5 percent Povidone-Iodine and 1.25 percent Povidone-Iodine were compared. Wilcoxon's signed rank test was used for statistical analysis. Results: The 2.5 percent Povidone-Iodine solution caused a statistically significant decrease in the number of colony-forming units (p=0.001). Similarly, the 1.25 percent Povidone-Iodine solution also reduced the number of colony-forming units to a statistically significant level (p=0.001). Conclusion: The 1.25 percent concentration of Povidone-Iodine is as effective as the 2.5 percent concentration in reducing the number of colony-forming units in healthy conjunctivae of newborns. (author)

  14. Does Asset Allocation Policy Explain 40, 90, 100 Percent of Performance?

    OpenAIRE

    Roger G. Ibbotson; Paul D. Kaplan

    2001-01-01

    Does asset allocation policy explain 40 percent, 90 percent, or 100 percent of performance? According to some well-known studies, more than 90 percent of the variability of a typical plan sponsor's performance over time is attributable to asset allocation. However, few people want to explain variability over time. Instead, an analyst might want to know how important it is in explaining the differences in return from one fund to another, or what percentage of the level of a typical fund's retu...

  15. 49 CFR 173.182 - Barium azide-50 percent or more water wet.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Barium azide-50 percent or more water wet. 173.182 Section 173.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Class 1 and Class 7 § 173.182 Barium azide—50 percent or more water wet. Barium azide—50 percent or more...

  16. Decreasing Relative Risk Premium

    DEFF Research Database (Denmark)

    Hansen, Frank

    We consider the risk premium demanded by a decision maker with wealth x in order to be indifferent between obtaining a new level of wealth y1 with certainty, or to participate in a lottery which either results in unchanged present wealth or a level of wealth y2 > y1. We define the relative risk premium as the quotient between the risk premium and the increase in wealth y1 − x which the decision maker puts on the line by choosing the lottery in place of receiving y1 with certainty. We study preferences such that the relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. Decreasing relative risk premium in the small implies decreasing relative risk premium in the large, and decreasing relative risk premium everywhere implies risk aversion. We finally show that preferences with decreasing relative risk premium may be equivalently expressed in terms of certain preferences on risky ...

  17. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  18. Decreasing Serial Cost Sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter

    The increasing serial cost sharing rule of Moulin and Shenker [Econometrica 60 (1992) 1009] and the decreasing serial rule of de Frutos [Journal of Economic Theory 79 (1998) 245] have attracted attention due to their intuitive appeal and striking incentive properties. An axiomatic characterization...... of the increasing serial rule was provided by Moulin and Shenker [Journal of Economic Theory 64 (1994) 178]. This paper gives an axiomatic characterization of the decreasing serial rule...
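For concreteness, the increasing serial rule of Moulin and Shenker can be sketched from its standard recursive definition: with demands sorted increasingly, agent i pays the previous agent's share plus the marginal cost of the intermediate demand level s_i divided among the remaining agents. The quadratic cost function below is illustrative:

```python
def increasing_serial_shares(demands, cost):
    """Moulin-Shenker increasing serial cost shares (sketch).

    demands: list of individual demands; cost: total cost function C.
    """
    q = sorted(demands)
    n = len(q)
    shares = []
    x_prev, s_prev = 0.0, 0.0
    for i in range(1, n + 1):
        # Intermediate demand level: everyone still "in" demands at least q_i.
        s_i = sum(q[:i - 1]) + (n - i + 1) * q[i - 1]
        # Agent i's share: previous share plus its part of the cost increment.
        x_i = x_prev + (cost(s_i) - cost(s_prev)) / (n - i + 1)
        shares.append(x_i)
        x_prev, s_prev = x_i, s_i
    return shares

shares = increasing_serial_shares([1, 2], lambda x: x ** 2)
print(shares)  # [2.0, 7.0]; shares are budget-balanced: 2 + 7 = C(1 + 2) = 9
```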

  19. Decreasing serial cost sharing

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Østerdal, Lars Peter Raahave

    2009-01-01

    The increasing serial cost sharing rule of Moulin and Shenker (Econometrica 60:1009-1037, 1992) and the decreasing serial rule of de Frutos (J Econ Theory 79:245-275, 1998) are known by their intuitive appeal and striking incentive properties. An axiomatic characterization of the increasing serial...... rule was provided by Moulin and Shenker (J Econ Theory 64:178-201, 1994). This paper gives an axiomatic characterization of the decreasing serial rule....

  20. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
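The standard doubly-constrained formulation that the paper re-expresses without constraints is commonly solved by iterative proportional fitting. A minimal sketch with hypothetical origin/destination totals and an assumed deterrence parameter:

```python
import math

# Doubly-constrained trip distribution by iterative proportional fitting:
# T_ij = a_i * b_j * exp(-beta * c_ij), balanced so row sums match origin
# totals and column sums match destination totals. All data are hypothetical.
origins = [100.0, 200.0]          # trips produced at each origin
dests   = [150.0, 150.0]          # trips attracted to each destination
cost    = [[1.0, 2.0],
           [2.0, 1.0]]            # travel cost matrix c_ij
beta = 0.5                        # cost-sensitivity parameter (assumed)

# Seed matrix from the deterrence function exp(-beta * c_ij).
T = [[math.exp(-beta * cost[i][j]) for j in range(2)] for i in range(2)]

for _ in range(100):
    # Scale rows to origin totals, then columns to destination totals.
    for i in range(2):
        r = origins[i] / sum(T[i])
        T[i] = [t * r for t in T[i]]
    for j in range(2):
        s = dests[j] / (T[0][j] + T[1][j])
        T[0][j] *= s
        T[1][j] *= s

print([[round(t, 1) for t in row] for row in T])
```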

  1. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to salinity uncertainty of 0.002 gms/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl-, and SO4-2) and cations (Na+, Mg+, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-/Cl- and Mg+/Na+, and 0.4% for Ca+/Na+, and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3-, and CO3-2. Apparent partial molar densities in seawater were

  2. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. 
The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
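The Fitch algorithm mentioned above solves the small-parsimony problem on a fixed tree by intersecting child state sets and counting the unions. A minimal sketch for one character on an illustrative four-leaf tree (equal substitution costs, no reticulations):

```python
def fitch(tree, states):
    """Minimum number of state changes for one character on a binary tree.

    tree: nested 2-tuples of leaf names; states: leaf name -> character state.
    """
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):          # leaf: singleton state set
            return {states[node]}
        left, right = map(post, node)      # internal node with two children
        common = left & right
        if common:
            return common                  # intersection: no change needed here
        changes += 1                       # empty intersection: one substitution
        return left | right

    post(tree)
    return changes

tree = ((("A", "B"), "C"), "D")
states = {"A": "T", "B": "T", "C": "G", "D": "G"}
print(fitch(tree, states))  # 1 change suffices on this topology
```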

  3. Decreasing relative risk premium

    DEFF Research Database (Denmark)

    Hansen, Frank

    2007-01-01

    We study preferences such that the corresponding relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. We find a new characterization of risk vulnerability and determine a large set of utility functions, closed under summation and composition, which are both risk vulnerable ...

  4. Decreasing asthma morbidity

    African Journals Online (AJOL)

    1994-12-12

    Dec 12, 1994 ... Apart from the optimal use of drugs, various supplementary methods have been tested to decrease asthma morbidity, usually in patients from relatively affluent socio-economic backgrounds. A study of additional measures taken in a group of moderate to severe adult asthmatics from very poor socio- ...

  5. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Unlike previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
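The optimum described above, a trade-off between driving force and cavitation-limited conductivity, can be sketched with an assumed Weibull vulnerability curve; all parameter values below are illustrative, not taken from the paper's database:

```python
import math

# Maximum steady-state transpiration sketch: water supply through the xylem is
# E = k(psi_L) * (psi_s - psi_L), with conductance declining by cavitation.
k_max = 2.0      # maximum hydraulic conductance, mmol m^-2 s^-1 MPa^-1 (assumed)
psi_s = -0.5     # soil water potential, MPa (assumed)
d, c = 2.0, 3.0  # Weibull parameters: potential at ~63% loss, curve steepness

def conductance(psi_leaf):
    # Weibull vulnerability curve: conductance lost as psi_leaf becomes negative.
    return k_max * math.exp(-((-psi_leaf) / d) ** c)

def supply(psi_leaf):
    return conductance(psi_leaf) * (psi_s - psi_leaf)

# Grid search for the leaf water potential maximizing supply: more negative
# psi_leaf strengthens the driving force but cavitation cuts conductance.
grid = [psi_s - 0.001 * i for i in range(1, 5000)]
psi_opt = max(grid, key=supply)
E_max = supply(psi_opt)
print(psi_opt, E_max)
```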

  6. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures.

  7. CRED Cumulative Map of Percent Scleractinian Coral Cover at Kauai, 2005

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  8. CRED Cumulative Map of Percent Scleractinian Coral Cover at Niihau, 2005

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  9. CRED Cumulative Map of Percent Scleractinian Coral Cover at Raita Bank, 2001

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  10. CRED Cumulative Map of Percent Scleractinian Coral Cover at Kure Atoll, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  11. CRED Cumulative Map of Percent Scleractinian Coral Cover at Stingray Shoals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  12. CRED Cumulative Map of Percent Scleractinian Coral Cover at Ofu & Olosega

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  13. CRED Cumulative Map of Percent Scleractinian Coral Cover at Laysan Island, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  14. CRED Cumulative Map of Percent Scleractinian Coral Cover at Eleven-Mile Bank

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  15. CRED Cumulative Map of Percent Scleractinian Coral Cover at Palmyra Atoll, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  16. CRED Cumulative Map of Percent Scleractinian Coral Cover at Lisianski Island, 2001-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  17. CRED Cumulative Map of Percent Scleractinian Coral Cover at Ta'u

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  18. CRED Cumulative Map of Percent Scleractinian Coral Cover at Gardner Pinnacles, 2003

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  19. CRED Cumulative Map of Percent Scleractinian Coral Cover at Pearl and Hermes Atoll, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  20. CRED Cumulative Map of Percent Scleractinian Coral Cover at Molokai, 2005

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  1. CRED Cumulative Map of Percent Scleractinian Coral Cover at Guam, 2003

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  2. CRED Cumulative Map of Percent Scleractinian Coral Cover at Baker Island, 2002-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  3. CRED Cumulative Map of Percent Scleractinian Coral Cover at French Frigate Shoals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  4. CRED Cumulative Map of Percent Scleractinian Coral Cover at Maro Reef, 2001-2004

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry.

  5. Detection of viability by percent thallium uptake with conventional thallium scintigraphy

    International Nuclear Information System (INIS)

    Imai, Kamon; Araki, Yasushi; Horiuchi, Kou-ichi; Yumikura, Sei; Saito, Satoshi; Ozawa, Yukio; Kan-matsuse, Katsuo; Hagiwara, Kazuo.

    1994-01-01

Thallium myocardial scintigraphy (TMS) is used to diagnose viability in infarcted myocardium before coronary revascularization. Underestimation of viability by TMS has been reported by many investigators. To evaluate viability precisely, the thallium re-injection method or 24-hour delayed imaging is performed. However, these techniques are not convenient and are difficult to perform in clinical practice. The percent Tl-uptake method was developed for predicting myocardial viability. To evaluate the usefulness of this method, TMS was performed before and after PTCA in 23 patients with myocardial infarction. The left ventricle was divided into 3 layers, and each layer was divided into 4 segments (12 segments in total). Forty-three segments showed recovery of perfusion on TMS after PTCA. Viability in infarcted myocardium was predicted by 1) redistribution (RD), 2) %Tl-uptake≥45% on the image immediately after exercise (TE), and 3) %Tl-uptake≥45% on the delayed image (TD). Sensitivity was RD: 60%, TE: 90% and TD: 95% (p<0.001 vs. RD). Specificity was RD: 74%, TE: 68%, and TD: 60% (NS). Predictive accuracy (PA) was RD: 69%, TE: 77%, TD: 73% (NS). Compared with RD, %Tl-uptake, either TE or TD, increased sensitivity with slightly improved PA, but slightly decreased specificity. Therefore %Tl-uptake would be a sensitive and useful predictor for identifying patients who are most likely to benefit from revascularization. (author)

  6. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

Maximum entropy deconvolution is presented as a method for estimating the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equations and the Levinson algorithm are used to derive the iterative formula of the error-prediction filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
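The Toeplitz/Levinson step mentioned in the abstract can be illustrated with a minimal Levinson-Durbin recursion for a prediction-error filter. This is a generic textbook formulation, not the authors' implementation, and the autocorrelation sequence is a made-up example:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error filter.

    r     : autocorrelation sequence r[0..order]
    Returns (a, err) with a[0] = 1 and err the final prediction-error power.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # reflection coefficient; |k| < 1 keeps the recursion stable
        k = -np.dot(a[:m], r[m:0:-1]) / err
        prev = a[:m].copy()
        a[1:m + 1] += k * prev[::-1]
        err *= (1.0 - k * k)
    return a, err

# toy autocorrelation sequence (illustrative only)
r = np.array([1.0, 0.5, 0.2, 0.05])
a, err = levinson_durbin(r, 3)
print("prediction-error filter:", np.round(a, 4), "error:", round(err, 4))
```

By construction the filter satisfies the Toeplitz system R a = [err, 0, ..., 0], which can be checked directly against the full matrix.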

  7. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
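The maximum-power search described above can be sketched as follows. The single-diode current model and its parameters are assumptions for illustration, not values from the article, and the maximum of P(V) = V·I(V) is located numerically rather than symbolically:

```python
import numpy as np

# Illustrative single-diode panel parameters (assumed)
I_L = 5.0    # photocurrent (A)
I_0 = 1e-9   # diode saturation current (A)
n_Vt = 1.0   # lumped thermal voltage for the whole panel (V), illustrative

def current(v):
    """Panel current from the single-diode model."""
    return I_L - I_0 * (np.exp(v / n_Vt) - 1.0)

# Evaluate P = V * I(V) on a fine grid and take the maximum
v = np.linspace(0.0, 25.0, 200001)
p = v * current(v)
i = np.argmax(p)
print(f"V_mp = {v[i]:.2f} V, I_mp = {current(v[i]):.2f} A, P_max = {p[i]:.1f} W")
```

Setting dP/dV = I + V·dI/dV = 0 gives the same operating point analytically; the grid search simply avoids solving that transcendental equation.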

  8. 13 CFR 107.1160 - Maximum amount of Leverage for a Section 301(d) Licensee.

    Science.gov (United States)

    2010-01-01

    ... Leverage, you must maintain Venture Capital Financings (at cost) that equal at least 30 percent of your... maintain at least the same dollar amount of Venture Capital Financings (at cost). (e) Definition of “Total... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum amount of Leverage for a...

  9. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Science.gov (United States)

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
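The Maximum-Entropy Principle used above can be illustrated on a toy problem: given only a mean constraint on a discrete support, the maximum-entropy distribution has an exponential form whose Lagrange multiplier can be found by bisection. The support and target mean below are illustrative and unrelated to the self-thinning study:

```python
import numpy as np

x = np.arange(1, 7)   # discrete support (e.g., faces of a die)
target_mean = 4.5     # the only available information

def mean_for(lam):
    """Mean of the exponential-family distribution p_i proportional to exp(lam*x_i)."""
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x

# The mean is monotone increasing in lam, so bisection suffices
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = np.exp(lam * x)
p /= p.sum()
print("maximum-entropy distribution:", np.round(p, 4))
```

Among all distributions on this support with the given mean, this one maximizes entropy; any other consistent distribution would encode information that is not actually available.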

  10. Protein phosphatases decrease their activity during capacitation: a new requirement for this event.

    Directory of Open Access Journals (Sweden)

    Janetti R Signorelli

There are few reports on the role of protein phosphatases during capacitation. Here, we report on the role of PP2B, PP1, and PP2A during human sperm capacitation. Motile sperm were resuspended in non-capacitating medium (NCM, Tyrode's medium, albumin- and bicarbonate-free) or in reconstituted medium (RCM, NCM plus 2.6% albumin/25 mM bicarbonate). The presence of the phosphatases was evaluated by western blotting and the subcellular localization by indirect immunofluorescence. The function of these phosphatases was analyzed by incubating the sperm with specific inhibitors: okadaic acid, I2, endothall, and deltamethrin. Different aliquots were incubated in the following media: 1) NCM; 2) NCM plus inhibitors; 3) RCM; and 4) RCM plus inhibitors. The percent of capacitated sperm and phosphatase activities were evaluated using the chlortetracycline assay and a phosphatase assay kit, respectively. The results confirm the presence of PP2B and PP1 in human sperm. We also report the presence of PP2A, specifically, the catalytic subunit and the regulatory subunits PR65 and B. PP2B and PP2A were present in the tail, neck, and postacrosomal region, and PP1 was present in the postacrosomal region, neck, middle, and principal piece of human sperm. Treatment with phosphatase inhibitors rapidly (≤1 min) increased the percent of sperm depicting pattern B, reaching a maximum of ∼40% that was maintained throughout incubation; after 3 h, the percent of capacitated sperm was similar to that of the control. The enzymatic activity of the phosphatases decreased during capacitation without changes in their expression. The pattern of phosphorylation on threonine residues showed a sharp increase upon treatment with the inhibitors. In conclusion, human sperm express PP1, PP2B, and PP2A, and the activity of these phosphatases decreases during capacitation. This decline in phosphatase activities and the subsequent increase in threonine phosphorylation may be an important

  11. 46 CFR 42.20-7 - Flooding standard: Type “B” vessel, 60 percent reduction.

    Science.gov (United States)

    2010-10-01

... 46 Shipping 2 2010-10-01 2010-10-01 false Flooding standard: Type "B" vessel, 60 percent reduction... DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-7 Flooding standard: Type "B" vessel, 60 percent... applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less in length...

  12. Basal area or stocking percent: which works best in controlling density in natural shortleaf pine stands

    Science.gov (United States)

    Ivan L. Sander

    1986-01-01

    Results from a shortleaf pine thinning study in Missouri show that continually thinning a stand to the same basal area will eventually create an understocked stand and reduce yields. Using stocking percent to control thinning intensity allows basal area to increase as stands get older. The best yield should occur when shortleaf pine is repeatedly thinned to 60 percent...

  13. 12 CFR 741.4 - Insurance premium and one percent deposit.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Insurance premium and one percent deposit. 741... Insurance premium and one percent deposit. (a) Scope. This section implements the requirements of Section... payment of an insurance premium. (b) Definitions. For purposes of this section: (1) Available assets ratio...

  14. Hyperglycemia of Diabetic Rats Decreased by a Glucagon Receptor Antagonist

    Science.gov (United States)

    Johnson, David G.; Ulichny Goebel, Camy; Hruby, Victor J.; Bregman, Marvin D.; Trivedi, Dev

    1982-02-01

The glucagon analog [1-Nα-trinitrophenylhistidine, 12-homoarginine]glucagon (THG) was examined for its ability to lower blood glucose concentrations in rats made diabetic with streptozotocin. In vitro, THG is a potent antagonist of glucagon activation of the hepatic adenylate cyclase assay system. Intravenous bolus injections of THG caused rapid decreases (20 to 35 percent) of short duration in blood glucose. Continuous infusion of low concentrations of the inhibitor led to larger sustained decreases in blood glucose (30 to 65 percent). These studies demonstrate that a glucagon receptor antagonist can substantially reduce blood glucose levels in diabetic animals without the addition of exogenous insulin.

  15. Thermal effects in equilibrium surface segregation in a copper/10-atomic-percent-aluminum alloy using Auger electron spectroscopy

    Science.gov (United States)

    Ferrante, J.

    1972-01-01

Equilibrium surface segregation of aluminum in a copper-10-atomic-percent-aluminum single crystal alloy oriented in the (111) direction was demonstrated using Auger electron spectroscopy. This crystal was in the solid solution range of composition. Equilibrium surface segregation was verified by observing that the aluminum surface concentration varied reversibly with temperature in the range 550 to 850 K. These results were curve fitted to an expression for equilibrium grain boundary segregation and gave a retrieval energy of 5780 J/mole (1380 cal/mole) and a maximum frozen-in surface coverage three times the bulk layer concentration. Analyses concerning the relative merits of sputtering calibration and the effects of evaporation are also included.

  16. Use of biofuels in road transport decreases; Verbruik biobrandstoffen in wegverkeer daalt

    Energy Technology Data Exchange (ETDEWEB)

    Segers, R.

    2011-04-27

The use of biofuels decreased from 3.5 percent of all gasoline and diesel used by road transport in 2009 to 2 percent in 2010. The use of biodiesel in particular decreased, dropping from 3.5 to 1.5 percent. The use of biogasoline remained stable at 3 percent of all gasoline use.

  17. Completion of the first approach to critical for the seven percent critical experiment

    International Nuclear Information System (INIS)

    Barber, A. D.; Harms, G. A.

    2009-01-01

    The first approach-to-critical experiment in the Seven Percent Critical Experiment series was recently completed at Sandia. This experiment is part of the Seven Percent Critical Experiment which will provide new critical and reactor physics benchmarks for fuel enrichments greater than five weight percent. The inverse multiplication method was used to determine the state of the system during the course of the experiment. Using the inverse multiplication method, it was determined that the critical experiment went slightly supercritical with 1148 fuel elements in the fuel array. The experiment is described and the results of the experiment are presented. (authors)

  18. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  19. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF-T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit

  20. EnviroAtlas - Minneapolis/St. Paul, MN - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  1. EnviroAtlas - Green Bay, WI - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  2. EnviroAtlas - Cleveland, OH - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  3. EnviroAtlas - Austin, TX - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  4. EnviroAtlas - New York, NY - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  5. EnviroAtlas - New Bedford, MA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  6. EnviroAtlas - Portland, ME - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  7. EnviroAtlas - Woodbine, IA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  8. EnviroAtlas - Milwaukee, WI - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  9. EnviroAtlas - Tampa, FL - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  10. EnviroAtlas - Pittsburgh, PA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  11. EnviroAtlas - Phoenix, AZ - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  12. EnviroAtlas - Durham, NC - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  13. EnviroAtlas - Portland, OR - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  14. EnviroAtlas - Paterson, NJ - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  15. EnviroAtlas - Memphis, TN - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  16. EnviroAtlas - Fresno, CA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  17. EnviroAtlas - Percent Urban Land Cover by 12-Digit HUC for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates the percent urban land for each 12-digit hydrologic unit code (HUC) in the conterminous United States. For the purposes of this...

  18. EnviroAtlas - Des Moines, IA - Estimated Percent Tree Cover Along Walkable Roads

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  19. EnviroAtlas Estimated Percent Tree Cover Along Walkable Roads Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter...

  20. 64 Percent of Asian and Pacific Islander Treatment Admissions Name Alcohol as Their Problem

    Science.gov (United States)

    Data Spotlight May 28, 2013 64 Percent of Asian and Pacific Islander Treatment Admissions Name Alcohol as ... common problem in the United States. 1 When Asians and Pacific Islanders (APIs) go to treatment, alcohol ...

  1. Map of percent scleractinian coral cover and sand along camera tow tracks in west Hawaii

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral and sand overlaid on bathymetry and landsat imagery northwest...

  2. WHK Student Internship Enrollment, Mentor Participation Up More than 50 Percent | Poster

    Science.gov (United States)

    By Nancy Parrish, Staff Writer The Werner H. Kirsten Student Internship Program (WHK SIP) has enrolled the largest class ever for the 2013–2014 academic year, with 66 students and 50 mentors. This enrollment reflects a 53 percent increase in students and a 56 percent increase in mentors, compared to 2012–2013 (43 students and 32 mentors), according to Julie Hartman, WHK SIP

  3. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are also shown to be maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  4. United States home births increase 20 percent from 2004 to 2008.

    Science.gov (United States)

    MacDorman, Marian F; Declercq, Eugene; Mathews, T J

    2011-09-01

    After a gradual decline from 1990 to 2004, the percentage of births occurring at home increased from 2004 to 2008 in the United States. The objective of this report was to examine the recent increase in home births and the factors associated with this increase from 2004 to 2008. United States birth certificate data on home births were analyzed by maternal demographic and medical characteristics. In 2008, there were 28,357 home births in the United States. From 2004 to 2008, the percentage of births occurring at home increased by 20 percent from 0.56 percent to 0.67 percent of United States births. This rise was largely driven by a 28 percent increase in the percentage of home births for non-Hispanic white women, for whom more than 1 percent of births occur at home. At the same time, the risk profile for home births has been lowered, with substantial drops in the percentage of home births of infants who are born preterm or at low birthweight, and declines in the percentage of home births that occur to teen and unmarried mothers. Twenty-seven states had statistically significant increases in the percentage of home births from 2004 to 2008; only four states had declines. The 20 percent increase in United States home births from 2004 to 2008 is a notable development that will be of interest to practitioners and policymakers. (BIRTH 38:3 September 2011). © 2011, Copyright the Authors. Journal compilation © 2011, Wiley Periodicals, Inc.

  5. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
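The core idea of the program — maximizing a likelihood function with respect to unknown parameters — can be shown on a much simpler problem. This sketch fits an exponential rate by grid maximization of the log-likelihood and checks it against the closed-form MLE; it is illustrative only and unrelated to the MXLKID code itself:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0
data = rng.exponential(scale=1.0 / true_rate, size=5000)

def log_likelihood(rate):
    # log LF for i.i.d. exponential samples: n*log(rate) - rate*sum(x)
    return data.size * np.log(rate) - rate * data.sum()

# Maximize the log-likelihood over a grid of candidate rates
rates = np.linspace(0.1, 5.0, 4901)
mle = rates[np.argmax(log_likelihood(rates))]
print(f"numerical MLE: {mle:.3f}, closed form 1/mean: {1.0 / data.mean():.3f}")
```

For a nonlinear dynamic system there is no closed form, which is why a numerical maximizer like MXLKID is needed; the principle is the same.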

  6. A comparison of methods of determining the 100 percent survival of preserved red cells

    International Nuclear Information System (INIS)

    Valeri, C.R.; Pivacek, L.E.; Ouellet, R.; Gray, A.

    1984-01-01

Studies were done to compare three methods of determining the 100 percent survival value from which to estimate the 24-hour posttransfusion survival of preserved red cells. The following methods, using small aliquots of 51Cr-labeled autologous preserved red cells, were evaluated: first, the 125I-albumin method, an indirect measurement of the recipient's red cell volume derived from the plasma volume measured using 125I-labeled albumin and the total body hematocrit; second, the body surface area (BSA) method, in which the recipient's red cell volume is derived from a body surface area nomogram; third, an extrapolation method, which extrapolates to zero time the radioactivity associated with the red cells in the recipient's circulation from 10 to 20 or 15 to 30 minutes after transfusion. The three methods gave similar results in all studies in which less than 20 percent of the transfused red cells were nonviable (24-hour posttransfusion survival values between 80 and 100%), but not when more than 20 percent of the red cells were nonviable. When 21 to 35 percent of the transfused red cells were nonviable (24-hour posttransfusion survivals of 65 to 79%), values with the 125I-albumin method and the body surface area method were about 5 percent lower (p less than 0.001) than values with the extrapolation method. When greater than 35 percent of the red cells were nonviable (24-hour posttransfusion survival values of less than 65%), values with the 125I-albumin method and the body surface area method were about 10 percent lower (p less than 0.001) than those obtained by the extrapolation method
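The extrapolation method described above — fitting the early circulating radioactivity and extrapolating to zero time — can be sketched as a straight-line fit. All counts below are hypothetical numbers chosen only to show the arithmetic:

```python
import numpy as np

# Hypothetical circulating activity (percent of injected dose) at early
# sampling times after transfusion; values are illustrative only.
t_min = np.array([10.0, 15.0, 20.0, 30.0])
counts = np.array([92.0, 90.5, 89.2, 86.8])

# Straight-line fit over the early mixing period, extrapolated to t = 0
slope, intercept = np.polyfit(t_min, counts, 1)
a0 = intercept            # estimated 100-percent-survival value
a24 = 62.0                # activity at 24 hours (illustrative)
survival_24h = 100.0 * a24 / a0
print(f"extrapolated zero-time activity: {a0:.1f}")
print(f"24-hour posttransfusion survival: {survival_24h:.1f}%")
```

Dividing the 24-hour activity by the extrapolated intercept, rather than by an independently measured red cell volume, is what distinguishes this method from the 125I-albumin and body surface area approaches.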

  7. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it appropriate for the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  8. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator along a given trajectory. The maximum allowable load that a mobile manipulator can carry along a given trajectory is limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.
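    Under the assumption that the worst-case required torque grows monotonically with payload mass, the maximum allowable load can be found by bisection against the actuator limit. A toy sketch; the torque model and numbers below are hypothetical, not the paper's dynamics:

```python
def max_allowable_load(torque_required, torque_limit, m_hi=100.0, tol=1e-6):
    """Bisect for the largest payload mass whose worst-case joint torque
    along the trajectory stays within the actuator limit; assumes
    torque_required(m) increases monotonically with payload mass m."""
    lo, hi = 0.0, m_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if torque_required(mid) <= torque_limit:
            lo = mid
        else:
            hi = mid
    return lo

# Toy torque model: worst-case torque tau(m) = 20 + 1.5*m (N*m),
# actuator limit 50 N*m, so the maximum allowable load is 20 kg.
m_max = max_allowable_load(lambda m: 20.0 + 1.5 * m, 50.0)
print(round(m_max, 3))  # → 20.0
```

    In the paper's setting, `torque_required` would come from evaluating the full base-plus-manipulator dynamics along the trajectory under the chosen redundancy-resolving constraints; changing those constraints changes the function and hence the computed maximum load.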

  9. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  10. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  11. Thirty Percent of Female Footballers Terminate Their Careers Due to Injury - A Retrospective Study Among Former Polish Players.

    Science.gov (United States)

    Grygorowicz, Monika; Michałowska, Martyna; Jurga, Paulina; Piontek, Tomasz; Jakubowska, Honorata; Kotwicki, Tomasz

    2017-09-27

    Female football is becoming an increasingly popular women's team sport around the world. The Women's Football Committee of the Polish Football Association (WFC_PFA) has developed a long-term strategic plan to popularize the discipline across the country and enhance girls' participation. On the one hand, the aim is to increase the number of female footballers; on the other hand, it is crucial to decrease the number of girls quitting football prematurely. Objective: to find the reasons for sports career termination among female football players. Design: cross-sectional, with retrospective information about the reasons for career termination collected via an on-line questionnaire. Participants: ninety-three former female footballers. Main outcome measures: factors leading to career termination. The analysis was performed on two groups: an "injury group," in which injury was the main reason for quitting football, and an "other group," in which the player stopped playing football due to all other factors. Thirty percent of former Polish female football players terminated their career due to long-term treatment for an injury. Over 27 percent (27.7%) ended their careers because they were not able to reconcile sport with work or studying. Over 10 percent (10.8%) of former football players reported that becoming a wife and/or mother was the reason for career termination. Loss of motivation and interest in sport was reported by 9.2% (n = 6) of the participants who terminated their career for non-injury reasons. The results clearly show that more effort is needed to support female football players, especially after an injury, so that they do not quit the sport prematurely.

  12. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us for the first time to obtain spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only at the beginning and the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  13. MFT homogeneity study at TNX: Final report on the low weight percent solids concentration

    International Nuclear Information System (INIS)

    Jenkins, W.J.

    1993-01-01

    A statistical design and analysis of both elemental analyses and weight percent solids analyses data was utilized to evaluate the MFT homogeneity at low heel levels and low agitator speed at both high and low solids feed concentrations. The homogeneity was also evaluated at both low and high agitator speed at the 6000+ gallons static level. The dynamic level portion of the test simulated feeding the Melter from the MFT to evaluate the uniformity of the solids slurry composition (Frit-PHA-Sludge) entering the melter from the MFT. This final report provides the results and conclusions from the second half of the study, the low weight percent solids concentration portion, as well as a comparison with the results from the first half of the study, the high weight percent solids portion

  14. Urban percent impervious surface and its relationship with land surface temperature in Yantai City, China

    International Nuclear Information System (INIS)

    Yu, Xinyang; Lu, Changhe

    2014-01-01

    This study investigated percent impervious surface area (PISA) extracted by a four-endmember normalized spectral mixture analysis (NSMA) method and evaluated the reliability of PISA as an indicator of land surface temperature (LST). Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) images of Yantai city, eastern China, obtained from the USGS, were used as the main data source. The results demonstrated that the four-endmember NSMA method performed better than the typical three-endmember one, and that there was a strong linear relationship between LST and PISA for the two images, which suggests that percent impervious surface area provides an alternative parameter for analyzing LST quantitatively in urban areas.
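    The core of an NSMA-style unmixing step is a sum-to-one constrained linear least-squares solve per pixel. A simplified sketch with hypothetical endmember spectra; the nonnegativity constraint, endmember selection, and normalization details of the actual method are omitted:

```python
import numpy as np

def unmix_fractions(pixel, endmembers, weight=1000.0):
    """Sum-to-one constrained least-squares unmixing: append a heavily
    weighted row of ones so the fractions are pushed to sum to 1.
    (Nonnegativity, as enforced in full NSMA implementations, is omitted.)
    endmembers: (bands, n_classes) matrix of endmember spectra."""
    bands, n = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, n))])
    b = np.append(np.asarray(pixel, dtype=float), weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Hypothetical 3-band reflectances for impervious, vegetation, soil
E = np.array([[0.30, 0.05, 0.20],
              [0.35, 0.40, 0.25],
              [0.40, 0.10, 0.30]])
truth = np.array([0.6, 0.3, 0.1])  # a 60% impervious pixel
f = unmix_fractions(E @ truth, E)
print(np.round(f, 3))  # recovers the true fractions
```

    Per-pixel PISA is then the fraction assigned to the impervious endmember(s); mapping it over a scene gives the impervious-surface layer that is regressed against LST.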

  15. Cumulative percent energy deposition of photon beam incident on different targets, simulated by Monte Carlo

    International Nuclear Information System (INIS)

    Kandic, A.; Jevremovic, T.; Boreli, F.

    1989-01-01

    Monte Carlo simulation (without secondary radiation) of the standard photon interactions (Compton scattering, photoelectric absorption and pair production) for complex slab geometry is used in the numerical code ACCA. A typical ACCA run will yield: (a) the transmission of primary photon radiation differential in energy, (b) the spectrum of energy deposited in the target as a function of position and (c) the cumulative percent energy deposition as a function of position. The cumulative percent energy deposition of a monoenergetic photon beam incident on simple and complex tissue slabs and on an Fe slab is presented in this paper. (author). 5 refs.; 2 figs
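    A stripped-down illustration of how such a code accumulates cumulative percent energy deposition versus depth, assuming a forward-only photon walk with only photoelectric and crudely modeled Compton events. This is a teaching sketch, not the ACCA algorithm; all parameters are illustrative:

```python
import math
import random

def simulate(n_photons=20000, slab_cm=10.0, mu=0.2,
             p_photo=0.3, nbins=10, seed=1):
    """Forward-only photon Monte Carlo in a homogeneous slab: free paths
    are sampled from an exponential with attenuation coefficient mu;
    photoelectric events absorb the full photon energy, Compton events
    deposit half and let the photon continue; pair production and
    scattering angles are ignored. Returns cumulative percent energy
    deposition per depth bin."""
    random.seed(seed)
    dep = [0.0] * nbins
    dx = slab_cm / nbins
    for _ in range(n_photons):
        x, e = 0.0, 1.0
        while True:
            x += -math.log(1.0 - random.random()) / mu  # free path [cm]
            if x >= slab_cm:
                break                        # photon transmitted
            b = int(x / dx)
            if random.random() < p_photo:    # photoelectric absorption
                dep[b] += e
                break
            dep[b] += 0.5 * e                # crude Compton energy loss
            e *= 0.5
    total = sum(dep)
    cum, acc = [], 0.0
    for d in dep:
        acc += d
        cum.append(100.0 * acc / total)
    return cum

cum = simulate()
print([round(v, 1) for v in cum])  # monotone, ending at 100.0
```

    The cumulative curve necessarily rises monotonically to 100% at the back face of the slab, which is the quantity item (c) reports as a function of position.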

  16. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  17. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, zeta2, zeta3, and zeta4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s-1 for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s-1 for carbon stars (the neutronization limit) and to 893 km s-1 for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.
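    For scale, the weak-field gravitational redshift of a white dwarf can be expressed as an equivalent velocity cz = GM/(Rc). A quick sketch with illustrative mass and radius, not values from the paper:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg

def redshift_velocity_kms(mass_msun, radius_km):
    """First-order gravitational redshift expressed as an equivalent
    velocity cz = GM/(Rc), returned in km/s."""
    return G * (mass_msun * MSUN) / (radius_km * 1e3 * C) / 1e3

# Illustrative 0.6 solar-mass white dwarf with an 8000 km radius
v = redshift_velocity_kms(0.6, 8000.0)
print(round(v, 1))  # a few tens of km/s, far below the 571 km/s maximum
```

    Typical white dwarfs sit well below the limiting values quoted in the abstract; the maxima correspond to configurations at the largest central density the stability analysis allows.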

  18. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  19. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  20. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  1. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  2. Cheatgrass percent cover change: Comparing recent estimates to climate change − Driven predictions in the Northern Great Basin

    Science.gov (United States)

    Boyte, Stephen P.; Wylie, Bruce K.; Major, Donald J.

    2016-01-01

    Cheatgrass (Bromus tectorum L.) is a highly invasive species in the Northern Great Basin that helps decrease fire return intervals. Fire fragments the shrub steppe and reduces its capacity to provide forage for livestock and wildlife and habitat critical to sagebrush obligates. Of particular interest is the greater sage grouse (Centrocercus urophasianus), an obligate whose populations have declined so severely due, in part, to increases in cheatgrass and fires that it was considered for inclusion as an endangered species. Remote sensing technologies and satellite archives help scientists monitor terrestrial vegetation globally, including cheatgrass in the Northern Great Basin. Along with geospatial analysis and advanced spatial modeling, these data and technologies can identify areas susceptible to increased cheatgrass cover and compare these with greater sage grouse priority areas for conservation (PAC). Future climate models forecast a warmer and wetter climate for the Northern Great Basin, which likely will force changing cheatgrass dynamics. Therefore, we examine potential climate-caused changes to cheatgrass. Our results indicate that future cheatgrass percent cover will remain stable over more than 80% of the study area when compared with recent estimates, and higher overall cheatgrass cover will occur with slightly more spatial variability. The land area projected to increase or decrease in cheatgrass cover equals 18% and 1%, respectively, making an increase in fire disturbances in greater sage grouse habitat likely. Relative susceptibility measures, created by integrating cheatgrass percent cover and temporal standard deviation datasets, show that potential increases in future cheatgrass cover match future projections. This discovery indicates that some greater sage grouse PACs for conservation could be at heightened risk of fire disturbance. Multiple factors will affect future cheatgrass cover including changes in precipitation timing and totals and

  3. The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor

    Science.gov (United States)

    Gordon, James; Chancey, Katherine

    2005-01-01

    The experiment of determination of the percent of oxygen in air is performed in a general chemistry laboratory in which students compare the results calculated from the pressure measurements obtained with the calculator-based systems to those obtained in a water-measurement method. This experiment allows students to explore a fundamental reaction…
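    The pressure-based calculation reduces to Dalton's law: if the reaction consumes all the O2 in a trapped air sample at constant volume and temperature, the fractional pressure drop equals the mole fraction of O2. A sketch with illustrative sensor readings:

```python
def percent_oxygen(p_initial, p_final):
    """Mole percent O2 in the trapped air sample. Assumes the reaction
    consumes all O2 at constant volume and temperature, so by Dalton's
    law the fractional pressure drop equals the O2 mole fraction."""
    return 100.0 * (p_initial - p_final) / p_initial

# Illustrative gas pressure sensor readings in kPa
print(round(percent_oxygen(101.3, 80.1), 1))  # → 20.9
```

    Students can compare this pressure-derived value with the volume-derived value from the water-measurement method; both should approach the accepted ~20.9% for dry air.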

  4. Field method to measure changes in percent body fat of young women: The TIGER Study

    Science.gov (United States)

    Body mass index (BMI), waist (W) and hip (H) circumference (C) are commonly used to assess changes in body composition for field research. We developed a model to estimate changes in dual energy X-ray absorption (DXA) percent fat (% fat) from these variables with a diverse sample of young women fro...

  5. Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study

    Science.gov (United States)

    Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...

  6. European Community Can Reduce CO2 Emissions by Sixty Percent : A Feasibility Study

    NARCIS (Netherlands)

    Mot, E.; Bartelds, H.; Esser, P.M.; Huurdeman, A.J.M.; Laak, P.J.A. van de; Michon, S.G.L.; Nielen, R.J.; Baar, H.J.W. de

    1993-01-01

    Carbon dioxide (CO2) emissions in the European Community (EC) can be reduced by roughly 60 percent. A great many measures need to be taken to reach this reduction, with a total annual cost of ECU 55 milliard. Fossil fuel use is the main cause of CO2 emissions into the atmosphere; CO2 emissions are

  7. 48 CFR 836.606-73 - Application of 6 percent architect-engineer fee limitation.

    Science.gov (United States)

    2010-10-01

    ... architect-engineer fee limitation. 836.606-73 Section 836.606-73 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 836.606-73 Application of 6 percent architect-engineer fee limitation...

  8. 46 CFR 42.20-8 - Flooding standard: Type “B” vessel, 100 percent reduction.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Flooding standard: Type “B” vessel, 100 percent... LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-8 Flooding standard: Type “B” vessel, 100...-11 as applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less...

  9. Identification of a novel percent mammographic density locus at 12q24.

    Science.gov (United States)

    Stevens, Kristen N; Lindstrom, Sara; Scott, Christopher G; Thompson, Deborah; Sellers, Thomas A; Wang, Xianshu; Wang, Alice; Atkinson, Elizabeth; Rider, David N; Eckel-Passow, Jeanette E; Varghese, Jajini S; Audley, Tina; Brown, Judith; Leyland, Jean; Luben, Robert N; Warren, Ruth M L; Loos, Ruth J F; Wareham, Nicholas J; Li, Jingmei; Hall, Per; Liu, Jianjun; Eriksson, Louise; Czene, Kamila; Olson, Janet E; Pankratz, V Shane; Fredericksen, Zachary; Diasio, Robert B; Lee, Adam M; Heit, John A; DeAndrade, Mariza; Goode, Ellen L; Vierkant, Robert A; Cunningham, Julie M; Armasu, Sebastian M; Weinshilboum, Richard; Fridley, Brooke L; Batzler, Anthony; Ingle, James N; Boyd, Norman F; Paterson, Andrew D; Rommens, Johanna; Martin, Lisa J; Hopper, John L; Southey, Melissa C; Stone, Jennifer; Apicella, Carmel; Kraft, Peter; Hankinson, Susan E; Hazra, Aditi; Hunter, David J; Easton, Douglas F; Couch, Fergus J; Tamimi, Rulla M; Vachon, Celine M

    2012-07-15

    Percent mammographic density adjusted for age and body mass index (BMI) is one of the strongest risk factors for breast cancer and has a heritable component that remains largely unidentified. We performed a three-stage genome-wide association study (GWAS) of percent mammographic density to identify novel genetic loci associated with this trait. In stage 1, we combined three GWASs of percent density comprised of 1241 women from studies at the Mayo Clinic and identified the top 48 loci (99 single nucleotide polymorphisms). We attempted replication of these loci in 7018 women from seven additional studies (stage 2). The meta-analysis of stage 1 and 2 data identified a novel locus, rs1265507 on 12q24, associated with percent density, adjusting for age and BMI (P = 4.43 × 10(-8)). We refined the 12q24 locus with 459 additional variants (stage 3) in a combined analysis of all three stages (n = 10 377) and confirmed that rs1265507 has the strongest association in the 12q24 region (P = 1.03 × 10(-8)). Rs1265507 is located between the genes TBX5 and TBX3, which are members of the phylogenetically conserved T-box gene family and encode transcription factors involved in developmental regulation. Understanding the mechanism underlying this association will provide insight into the genetics of breast tissue composition.
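    Combining stage 1 and stage 2 results as described above is conventionally done with a fixed-effect inverse-variance meta-analysis. A minimal sketch with hypothetical per-stage effect estimates and standard errors, not the study's data:

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance meta-analysis of per-stage effect
    estimates: weights are 1/SE^2; returns pooled beta, SE and z."""
    ws = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(ws, betas)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    return beta, se, beta / se

# Hypothetical stage 1 and stage 2 estimates for one SNP
beta, se, z = inverse_variance_meta([0.30, 0.24], [0.05, 0.04])
print(round(beta, 3), round(se, 3), round(z, 2))  # → 0.263 0.031 8.43
```

    A pooled |z| above about 5.45 corresponds to the genome-wide significance threshold P < 5 × 10⁻⁸ that associations such as rs1265507 must clear.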

  10. Radial growth and percent of latewood in Scots pine provenance trials in Western and Central Siberia

    Directory of Open Access Journals (Sweden)

    S. R. Kuzmin

    2016-12-01

    The percent of latewood of the Boguchany and Suzun Scots pine climatypes has been studied in two provenance trials (the place of origin and the trial place). For the Boguchany climatype the place of origin is the southern taiga of Central Siberia (Krasnoyarsk Krai) and the place of trial is the forest-steppe zone of Western Siberia (Novosibirsk Oblast); vice versa for the Suzun climatype, the forest-steppe zone of Western Siberia is the place of origin and the southern taiga is the place of trial. Comparison of average annual values of latewood percent of the Boguchany climatype in the southern taiga and the forest-steppe revealed the same value, 19 %. Annual variability of this trait is distinctly lower in the southern taiga, 17 %, than in the forest-steppe, 35 %. Average annual values of latewood percent of the Suzun climatype at the place of origin and the trial place are close (20 and 21 %). Variability of this trait is higher for the Suzun climatype than for the Boguchany one: 23 % in the southern taiga and 42 % in the forest-steppe. Climatic conditions in the southern taiga of Central Siberia, in comparison with the forest-steppe of Western Siberia, strengthen the differences between climatypes. The differences between climatypes are expressed in the different ages of maximal diameter increment, different tree-ring widths and latewood percent values, and different reactions of latewood to weather conditions.

  11. The Alpha value decrease when the annual individual effective dose decreases?

    International Nuclear Information System (INIS)

    Sordi, Gian M.; Marchiusi, Thiago; Sousa, Jefferson de J.

    2008-01-01

    A recent IAEA publication reports that a few entities adopted different alpha values for different maximum individual doses. Besides disregarding the international agencies, which recommend only one alpha value for each country, these entities decrease the alpha value when the individual doses decrease, whereas in practice exactly the converse holds, as we show in this paper. We prove in four different ways that the alpha value increases when the maximum individual dose decreases. The first, which we call the theoretical conception, is linked to the emergence of the ALARA policy and to the purpose that led to 3/10 of the annual limits as a first resort for decreasing the individual doses, and to 1/10 as a last resort. The second proof is based on the small-mine example used in ICRP Publication 55, concerning optimization and quantitative decision-aiding techniques in radiological protection, where we determine the alpha-value ranges in which each radiological protection option becomes the analytical solution. The third proof is based on the example of determining the optimized thickness of a plane shield for an exposed radiation source given in ICRP Publication 37; we also use the numerical example provided there. Finally, as a fourth proof, we show that the alpha value increases not only as the maximum individual dose decreases, but also with the shielding geometry. (author)
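    The ICRP Publication 37 shielding example has a simple closed form if one assumes a linear shield cost and an exponentially attenuated collective dose: minimizing X(x) = c·x + α·S0·e^(−μx) gives x* = ln(αS0μ/c)/μ, so the optimum thickness grows with α. A sketch with illustrative numbers, not those of the publication:

```python
import math

def optimum_thickness(alpha, s0, mu, c):
    """Thickness x* minimizing total cost X(x) = c*x + alpha*s0*exp(-mu*x):
    setting dX/dx = 0 gives x* = ln(alpha*s0*mu/c)/mu. Returns 0 when
    shielding never pays off (argument of the log <= 1).
    alpha: monetary value per unit collective dose, s0: unshielded
    collective dose, mu: attenuation coefficient (1/cm), c: cost per cm."""
    arg = alpha * s0 * mu / c
    return max(0.0, math.log(arg) / mu) if arg > 0 else 0.0

# Illustrative numbers: doubling alpha thickens the optimum shield,
# i.e. a higher alpha justifies spending more on protection.
x1 = optimum_thickness(alpha=1000.0, s0=2.0, mu=0.5, c=50.0)
x2 = optimum_thickness(alpha=2000.0, s0=2.0, mu=0.5, c=50.0)
print(round(x1, 2), round(x2, 2))  # → 5.99 7.38
```

    The monotone dependence of x* on α is the lever behind the paper's argument: if lower individual doses were paired with a lower α, the optimization would recommend thinner shielding exactly where more protection is intended.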

  12. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  13. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  14. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  15. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  16. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  17. Correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents the maximum dry density, signifies the plastic limit, and is the liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known

  18. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
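    The quoted relation can be checked numerically: plugging rough standard values into v_h ~ T_BBN^2/(M_pl y_e^5) indeed lands near the observed weak scale. The inputs below are order-of-magnitude values, not the paper's:

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5),
# with rough standard values (energies in GeV; y_e dimensionless).
T_BBN = 1e-3    # ~1 MeV, onset of Big Bang nucleosynthesis
M_PL = 1.2e19   # Planck mass
Y_E = 2.9e-6    # electron Yukawa coupling, ~ sqrt(2) * m_e / v_h
v_h = T_BBN ** 2 / (M_PL * Y_E ** 5)
print(f"{v_h:.0f} GeV")  # a few hundred GeV, consistent with O(300 GeV)
```

    The extreme sensitivity to y_e (fifth power) is why the estimate is only order-of-magnitude, yet it still singles out the hundreds-of-GeV range.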

  19. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords: maximum-entropy method * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 1.558, year: 2003

  20. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  1. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  2. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.

  3. 40 CFR 60.1450 - How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1450 Section 60.1450 Protection of Environment... Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1450 How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste? (a) Use EPA Reference Method 9 in appendix A of...

  4. 40 CFR 60.1925 - How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1925 Section 60.1925 Protection of Environment... or Before August 30, 1999 Model Rule-Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1925 How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste? (a) Use...

  5. 40 CFR 62.15375 - What are the emission limits for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 62.15375 Section 62.15375 Protection of Environment... Combustion Units Constructed on or Before August 30, 1999 Air Curtain Incinerators That Burn 100 Percent Yard Waste § 62.15375 What are the emission limits for air curtain incinerators that burn 100 percent yard...

  6. 40 CFR 60.1445 - What are the emission limits for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1445 Section 60.1445 Protection of Environment... Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1445 What are the emission limits for air curtain incinerators that burn 100 percent yard waste? If your air curtain incinerator combusts...

  7. 40 CFR 62.15380 - How must I monitor opacity for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 62.15380 Section 62.15380 Protection of Environment... Combustion Units Constructed on or Before August 30, 1999 Air Curtain Incinerators That Burn 100 Percent Yard Waste § 62.15380 How must I monitor opacity for air curtain incinerators that burn 100 percent yard...

  8. 40 CFR 60.1920 - What are the emission limits for air curtain incinerators that burn 100 percent yard waste?

    Science.gov (United States)

    2010-07-01

    ... curtain incinerators that burn 100 percent yard waste? 60.1920 Section 60.1920 Protection of Environment... or Before August 30, 1999 Model Rule-Air Curtain Incinerators That Burn 100 Percent Yard Waste § 60.1920 What are the emission limits for air curtain incinerators that burn 100 percent yard waste? If...

  9. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genome mutations. (VT) [de

  10. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  11. Austrian Business Cycle Theory: Are 100 Percent Reserves Sufficient to Prevent a Business Cycle?

    Directory of Open Access Journals (Sweden)

    Philipp Bagus

    2010-02-01

Authors in the Austrian tradition have identified the credit expansion of a fractional reserve banking system as the prime cause of business cycles. Authors such as Selgin (1988) and White (1999) have argued that a solution to this problem would be a free banking system. They maintain that competition between banks would limit the credit expansion effectively. Other authors such as Rothbard (1991) and Huerta de Soto (2006) have gone further and advocated a 100 percent reserve banking system, ruling out credit expansion altogether. In this article it is argued that a 100 percent reserve system can still bring about business cycles through excessive maturity mismatching between deposits and loans.

  12. Amazing 7-day, super-simple, scripted guide to teaching or learning percents

    CERN Document Server

    Hernandez, Lisa

    2014-01-01

    Welcome to The Amazing 7-Day, Super-Simple, Scripted Guide to Teaching or Learning Percents. I have attempted to do just what the title says: make learning percents super simple. I have also attempted to make it fun and even ear-catching. The reason for this is not that I am a frustrated stand-up comic, but because in my fourteen years of teaching the subject, I have come to realize that my jokes, even the bad ones, have a crazy way of sticking in my students' heads. And should I use a joke (even a bad one) repetitively, the associations become embedded in their brains, many times to their cha

  13. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
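The single constraint named in the abstract — fixing the average of the logarithm of the observable — is indeed enough to force a power law. A standard Lagrange-multiplier sketch (my reconstruction of the textbook argument, not text from the paper) runs as follows:

```latex
% Maximize the Shannon entropy  S = -\sum_x p(x)\ln p(x)
% subject to  \sum_x p(x) = 1  and  \sum_x p(x)\ln x = \mu.
\mathcal{L} = -\sum_x p(x)\ln p(x)
              - \lambda_0\Big(\sum_x p(x) - 1\Big)
              - \lambda\Big(\sum_x p(x)\ln x - \mu\Big)
% Setting \partial\mathcal{L}/\partial p(x) = 0 gives
-\ln p(x) - 1 - \lambda_0 - \lambda \ln x = 0
\quad\Longrightarrow\quad
p(x) = C\,x^{-\lambda},
% a power law whose exponent \lambda is fixed by the constraint value \mu.
```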

  14. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  15. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  16. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works.

  17. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1) a surface temperature and pressure compatible with the existence of liquid water, and 2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  18. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.

  19. Tranexamic Acid Reduced the Percent of Total Blood Volume Lost During Adolescent Idiopathic Scoliosis Surgery.

    Science.gov (United States)

    Jones, Kristen E; Butler, Elissa K; Barrack, Tara; Ledonio, Charles T; Forte, Mary L; Cohn, Claudia S; Polly, David W

    2017-01-01

    Multilevel posterior spine fusion is associated with significant intraoperative blood loss. Tranexamic acid is an antifibrinolytic agent that reduces intraoperative blood loss. The goal of this study was to compare the percent of total blood volume lost during posterior spinal fusion (PSF) with or without tranexamic acid in patients with adolescent idiopathic scoliosis (AIS). Thirty-six AIS patients underwent PSF in 2011-2014; the last half (n=18) received intraoperative tranexamic acid. We retrieved relevant demographic, hematologic, intraoperative and outcomes information from medical records. The primary outcome was the percent of total blood volume lost, calculated from estimates of intraoperative blood loss (numerator) and estimated total blood volume per patient (denominator, via Nadler's equations). Unadjusted outcomes were compared using standard statistical tests. Tranexamic acid and no-tranexamic acid groups were similar (all p>0.05) in mean age (16.1 vs. 15.2 years), sex (89% vs. 83% female), body mass index (22.2 vs. 20.2 kg/m2), preoperative hemoglobin (13.9 vs. 13.9 g/dl), mean spinal levels fused (10.5 vs. 9.6), osteotomies (1.6 vs. 0.9) and operative duration (6.1 hours, both). The percent of total blood volume lost (TBVL) was significantly lower in the tranexamic acid-treated vs. no-tranexamic acid group (median 8.23% vs. 14.30%, p = 0.032); percent TBVL per level fused was significantly lower with tranexamic acid than without it (1.1% vs. 1.8%, p=0.048). Estimated blood loss (milliliters) was similar across groups. Tranexamic acid significantly reduced the percentage of total blood volume lost versus no tranexamic acid in AIS patients who underwent PSF using a standardized blood loss measure.Level of Evidence: 3. Institutional Review Board status: This medical record chart review (minimal risk) study was approved by the University of Minnesota Institutional Review Board.
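The denominator in the study's primary outcome comes from Nadler's equations, which estimate total blood volume from height, weight, and sex. A minimal sketch of the calculation follows, using the commonly cited Nadler coefficients and an invented patient rather than study data (verify the coefficients independently before any clinical use):

```python
def nadler_tbv_liters(height_m: float, weight_kg: float, sex: str) -> float:
    """Estimate total blood volume (liters) via Nadler's equations."""
    if sex.upper() == "M":
        return 0.3669 * height_m**3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m**3 + 0.03308 * weight_kg + 0.1833


def percent_tbvl(ebl_ml: float, height_m: float, weight_kg: float, sex: str) -> float:
    """Percent of total blood volume lost: estimated blood loss over Nadler TBV."""
    tbv_ml = nadler_tbv_liters(height_m, weight_kg, sex) * 1000.0
    return 100.0 * ebl_ml / tbv_ml


# Invented example: 1.60 m, 55 kg female with 350 ml intraoperative blood loss.
loss_pct = percent_tbvl(350.0, 1.60, 55.0, "F")
```

For this hypothetical patient the estimated total blood volume is about 3.46 L, so a 350 ml loss corresponds to roughly 10% of total blood volume, the same order as the medians reported in the study.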

  20. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  1. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  2. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  3. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

Since research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus, determining the maximum neutron flux reduces to a variational problem that is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying Pontryagin's maximum principle. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr

  4. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  5. Temporal divergence of percent body fat and body mass index in pre-teenage children: the LOOK longitudinal study.

    Science.gov (United States)

    Telford, R D; Cunningham, R B; Abhayaratna, W P

    2014-12-01

The index of body mass related to stature (body mass index, BMI, kg m^-2) is widely used as a proxy for percent body fat (%BF) in cross-sectional and longitudinal investigations. BMI does not distinguish between lean and fat mass, and in children the cross-sectional relationship between %BF and BMI changes with age and sex. While BMI increases linearly with age from 8 to 12 years in both boys and girls, %BF plateaus between 10 and 12 years. Repeated measures in children show a systematic decrease in %BF for any given BMI from age 8 to 10 to 12 years. Because changes in BMI misrepresent changes in %BF, its use as a proxy for %BF should be avoided in longitudinal studies in this age group. Body mass index (BMI, kg m^-2) is commonly used as an indicator of pediatric adiposity, but given its inability to distinguish changes in lean and fat mass, its use in longitudinal studies of children requires careful consideration. To investigate the suitability of BMI as a surrogate for percent body fat (%BF) in pediatric longitudinal investigations. In this longitudinal study, healthy Australian children (256 girls and 278 boys) were measured at ages 8.0 (standard deviation 0.3), 10.0 and 12.0 years for height, weight and percent body fat (%BF) by dual-energy X-ray absorptiometry. The patterns of change in the means of %BF and BMI were different (P < 0.001). While mean BMI increased linearly from 8 to 12 years of age, %BF did not change between 10 and 12 years. Relationships between %BF and BMI in boys and girls were curvilinear and varied with age (P < 0.001) and gender (P < 0.001); any given BMI corresponded with a lower %BF as a child became older. Considering the divergence of the temporal patterns of %BF and BMI between 10 and 12 years of age, employment of BMI as a proxy for %BF, in absolute or age- and sex-standardized forms, in pediatric longitudinal investigations is problematic. © 2013 The Authors. Pediatric Obesity © 2013 International Association
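BMI itself is just mass over height squared, which is exactly why it cannot separate lean from fat mass: two children with identical height and weight have identical BMI regardless of body composition. A small illustration (all numbers invented, not study data):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m**2


# Two invented 10-year-olds with the same height and weight, hence the same
# BMI, but very different fat mass (fat mass + lean mass = total mass).
child_lean = {"weight": 36.0, "height": 1.40, "fat_kg": 5.4}   # 15% body fat
child_fat = {"weight": 36.0, "height": 1.40, "fat_kg": 10.8}   # 30% body fat

bmi_lean = bmi(child_lean["weight"], child_lean["height"])
bmi_fat = bmi(child_fat["weight"], child_fat["height"])
pct_bf_lean = 100.0 * child_lean["fat_kg"] / child_lean["weight"]
pct_bf_fat = 100.0 * child_fat["fat_kg"] / child_fat["weight"]
```

The two BMI values are equal while %BF differs by a factor of two, which is the core of the study's objection to using BMI as a longitudinal %BF proxy.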

  6. Selective effects of weight and inertia on maximum lifting.

    Science.gov (United States)

    Leontijevic, B; Pazin, N; Kukolj, M; Ugarkovic, D; Jaric, S

    2013-03-01

A novel loading method (loading ranged from 20% to 80% of 1RM) was applied to explore the selective effects of externally added simulated weight (exerted by stretched rubber bands pulling downward), weight+inertia (external weights added), and inertia (covariation of the weights and the rubber bands pulling upward) on maximum bench press throws. 14 skilled participants revealed a load-associated decrease in peak velocity that was least associated with an increase in weight (42%) and most associated with weight+inertia (66%). However, the peak lifting force increased markedly with an increase in both weight (151%) and weight+inertia (160%), but not with inertia (13%). As a consequence, the peak power output increased most with weight (59%), weight+inertia revealed a maximum at intermediate loads (23%), while inertia was associated with a gradual decrease in the peak power output (42%). The obtained findings could be of importance for our understanding of the mechanical properties of the human muscular system when acting against different types of external resistance. Regarding possible application in standard athletic training and rehabilitation procedures, the results speak in favor of applying extended elastic bands, which provide higher movement velocity and muscle power output than the usually applied weights. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
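The forward model in the abstract is linear: the measured mass spectrum is a cracking-pattern matrix times the concentration vector, plus noise. GME earns its keep in the noisy, underdetermined regime; in the noise-free, well-conditioned toy case below, ordinary least squares already inverts the mixture (the cracking-pattern values are invented for illustration, and least squares is a deliberate simplification of GME):

```python
import numpy as np

# Columns: hypothetical cracking patterns of two pure gases at three
# mass channels (invented numbers, normalized per column).
A = np.array([[0.9, 0.1],
              [0.1, 0.6],
              [0.0, 0.3]])

true_conc = np.array([0.7, 0.3])   # concentrations of the two gases
measured = A @ true_conc           # noise-free mixture spectrum

# With full column rank and no noise, least squares recovers the
# concentrations exactly; GME would additionally estimate the cracking
# patterns and noise probabilities from noisy data.
est_conc, *_ = np.linalg.lstsq(A, measured, rcond=None)
```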

  8. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  9. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  10. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes within the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper into the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown movements of index prices. The analysis of drawdown duration is also performed and is the core of the risk measure estimated here.

  11. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  12. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  13. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  14. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on a dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  15. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  16. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  17. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  18. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled under the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  19. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  20. TEM investigation of irradiated U-7 weight percent Mo dispersion fuel

    International Nuclear Information System (INIS)

    Van den Berghe, S.

    2009-01-01

    In the FUTURE experiment, fuel plates containing U-7 weight percent Mo atomized powder were irradiated in the BR2 reactor. At a burn-up of approximately 33 percent 235U (6.5 percent FIMA or 1.41 x 10^21 fissions/cm^3 of meat), the fuel plates showed significant deformation and the irradiation was stopped. The plates were submitted to detailed PIE at the Laboratory for High and Medium level Activity. The results of these examinations were reported in last year's scientific report and published in the open literature. Since then, the microstructural aspects of the FUTURE fuel have been studied in more detail using transmission electron microscopy (TEM), in an attempt to understand the nature of the interaction phase and the fission gas behavior in the atomized U(Mo) fuel. The FUTURE experiment is regarded as definitive proof that the classical atomized U(Mo) dispersion fuel is not stable under irradiation, at least under the conditions required for normal operation of plate-type fuel. The main cause of the instability was identified to be the irradiation behavior of the U(Mo)-Al interaction phase which forms between the U(Mo) particles and the pure aluminum matrix during irradiation. It is assumed to become amorphous under irradiation and as such cannot retain the fission gas in stable bubbles. As a consequence, gas-filled voids are generated between the interaction layer and the matrix, resulting in fuel plate pillowing and failure. The objective of the TEM investigation was to confirm this assumption of amorphisation of the interaction phase. A deeper understanding of the actual nature of this layer, and of fission gas behavior in these fuels in general, can allow a more oriented search for a solution to the fuel failures.

  1. Residual volume on land and when immersed in water: effect on percent body fat.

    Science.gov (United States)

    Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu

    2006-08-01

    There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. Residual volume has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volume measured under the two conditions did not agree completely, the measurements showed a high correlation (males: 0.880; females: 0.853). Agreement between percent body fat computed using residual volume measured in the two conditions was also very good for both sexes (males: r = 0.902; females: r = 0.869; limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.

  2. Angiotensin-Converting Inhibitors and Angiotensin II Receptor Blockers and Longitudinal Change in Percent Emphysema on Computed Tomography. The Multi-Ethnic Study of Atherosclerosis Lung Study

    Science.gov (United States)

    Parikh, Megha A.; Aaron, Carrie P.; Hoffman, Eric A.; Schwartz, Joseph E.; Madrigano, Jaime; Austin, John H. M.; Lovasi, Gina; Watson, Karol; Stukovsky, Karen Hinckley

    2017-01-01

    Rationale: Although emphysema on computed tomography (CT) is associated with increased morbidity and mortality in patients with and without spirometrically defined chronic obstructive pulmonary disease, no available medications target emphysema outside of alpha-1 antitrypsin deficiency. Transforming growth factor-β and endothelial dysfunction are implicated in emphysema pathogenesis, and angiotensin II receptor blockers (ARBs) inhibit transforming growth factor-β, improve endothelial function, and restore airspace architecture in murine models. Evidence in humans is, however, lacking. Objectives: To determine whether angiotensin-converting enzyme (ACE) inhibitor and ARB dose is associated with slowed progression of percent emphysema by CT. Methods: The Multi-Ethnic Study of Atherosclerosis researchers recruited participants ages 45–84 years from the general population from 2000 to 2002. Medication use was assessed by medication inventory. Percent emphysema was defined as the percentage of lung regions less than -950 Hounsfield units on CTs. Mixed-effects regression models were used to adjust for confounders. Results: Among 4,472 participants, 12% used an ACE inhibitor and 6% used an ARB at baseline. The median percent emphysema was 3.0% at baseline, and the rate of progression was 0.64 percentage points over a median of 9.3 years. Higher doses of ACE inhibitors or ARBs were independently associated with a slower change in percent emphysema (P = 0.03). Over 10 years, in contrast to a predicted mean increase in percent emphysema of 0.66 percentage points in those who did not take ARBs or ACE inhibitors, the predicted mean increase in participants who used maximum doses of ARBs or ACE inhibitors was 0.06 percentage points (P = 0.01). The findings were of greatest magnitude among former smokers. There was no evidence that ACE inhibitor or ARB dose was associated with decline in lung function. Conclusions: In a large population-based study, ACE

  3. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    Science.gov (United States)

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.

  4. Comparative evaluation of PSA-Density, percent free PSA and total PSA

    OpenAIRE

    Ströbel, Greta

    2010-01-01

    BACKGROUND The objective of this study was to evaluate the prostate specific antigen (PSA) density (PSAD) (the quotient of PSA and prostate volume) compared with the percent free PSA (%fPSA) and total PSA (tPSA) in different total PSA (tPSA) ranges from 2 ng/mL to 20 ng/mL. Possible cut-off levels depending on the tPSA should be established. METHODS In total, 1809 men with no pretreatment of the prostate were enrolled between 1996 and 2004. Total and free PSA were measured with t...

  5. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
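    As a concrete illustration of the entropy-maximization step this abstract appeals to (not code from the paper), the sketch below finds the maximum-entropy distribution over a small outcome space subject to a single calibration constraint on the mean. The exponential-family form of the solution and the bisection solver for the Lagrange multiplier are standard; all function names here are our own.

    ```python
    import math

    def maxent_given_mean(values, target_mean, lo=-50.0, hi=50.0, iters=200):
        """Maximum-entropy distribution over `values` with a fixed mean.

        The solution has exponential-family form p_i proportional to
        exp(lam * x_i); the multiplier `lam` is found by bisection on the
        (monotonically increasing) mean constraint.
        """
        def mean_for(lam):
            w = [math.exp(lam * x) for x in values]
            z = sum(w)
            return sum(x * wi for x, wi in zip(values, w)) / z

        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if mean_for(mid) < target_mean:
                lo = mid
            else:
                hi = mid
        lam = 0.5 * (lo + hi)
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return [wi / z for wi in w]

    def entropy(p):
        """Shannon entropy in nats."""
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    ```

    For example, among all distributions on {1, 2, 3} with mean 2.5, the maxent solution has strictly higher entropy than any other feasible choice such as (0.25, 0, 0.75).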

  6. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  7. Effect of curing methods, packaging and gamma irradiation on the weight loss and dry matter percent of garlic during curing and storage

    International Nuclear Information System (INIS)

    Mahmoud, A.A.; El-Oksh, I.I.; Farag, S.E.A.

    1988-01-01

    The Egyptian garlic plants showed a higher percent weight loss at 17 or 27 days of curing compared with the Chinese plants. A curing period of 17 days seemed satisfactory for the Egyptian cultivar, whereas 27 days seemed to be enough for the Chinese garlic. No significant differences in percent weight loss were observed between the common and shaded curing methods. The Chinese garlic contained a higher dry matter percentage than the Egyptian cultivar. Shade-cured plants of the two cultivars contained a higher dry matter percent than those subjected to the common curing methods. Irradiation of garlic bulbs, the shaded curing method and sack packaging decreased, in general, the weight loss during storage in comparison with the other treatments.

  8. Airplane radiation dose decrease during a strong Forbush decrease

    Czech Academy of Sciences Publication Activity Database

    Spurný, František; Kudela, K.; Dachev, T.

    2004-01-01

    Roč. 2, S05001 (2004), s. 1-4 ISSN 1542-7390 Grant - others:EC project(XE) FIGM-CT2000-00068 Institutional research plan: CEZ:AV0Z1048901 Keywords : airplane dose * Forbush decrease * cosmic rays Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders

  9. Running speed during training and percent body fat predict race time in recreational male marathoners.

    Science.gov (United States)

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same anthropometric and training characteristics as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52) and percent body fat remained significantly related to marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r^2 = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance in marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance than anthropometric characteristics. The present results suggest that low body fat and a training speed close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathoners.
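    The published regression equation can be wrapped directly in code. The function below merely restates the abstract's formula; it is illustrative only and meaningful only for recreational male runners within the study's range of body fat and training speeds.

    ```python
    def predicted_marathon_time(percent_body_fat, training_speed_kmh):
        """Predicted marathon race time in minutes from the abstract's
        regression equation (r^2 = 0.44):
        326.3 + 2.394 * body fat (%) - 12.06 * training speed (km/h)."""
        return 326.3 + 2.394 * percent_body_fat - 12.06 * training_speed_kmh
    ```

    For a runner with 15% body fat training at 11 km/h, this gives about 229.6 minutes, i.e. roughly a 3 h 50 min marathon.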

  10. Relationship between depression with FEV1 percent predicted and BODE index in chronic obstructive pulmonary disease

    Science.gov (United States)

    Gunawan, H.; Hanum, H.; Abidin, A.; Hanida, W.

    2018-03-01

    WHO reported that more than 3 million people died from COPD in 2012, and COPD is expected to rank third after cardiovascular diseases and cancer in the future. Recent studies have reported that the prevalence of depression in COPD patients is higher than in control groups, so it is important for clinicians to understand the relationship of depression symptoms with clinical aspects of COPD. To determine the association of depression symptoms with lung function and BODE index in patients with stable COPD, a cross-sectional study was conducted in 98 stable COPD outpatients from January to June 2017. Data were analyzed using the independent t-test, the Mann-Whitney test, and Spearman's rank correlation. COPD patients with depression had higher mMRC scores, lower FEV1 percent predicted, and shorter 6-minute walk test distances compared with those without depression. There was a moderate correlation (r = -0.43) between depression symptoms and FEV1 percent predicted, and a strong correlation (r = 0.614) between depression symptoms and BODE index. This indicates that the BODE index more accurately reflects symptoms of depression in COPD patients.

  11. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.

  12. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and maximum width are calculated from the lake polygon...

  13. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  14. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.

  15. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  16. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and time-to-repair variables, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda + theta) + [theta/(lambda + theta)] exp{-[(1/lambda) + (1/theta)]t} with t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda + theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
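    The plug-in step described above is easy to sketch: for exponential models parameterized by their means, the maximum likelihood estimates of lambda and theta are the sample means of the observed failure and repair times, which are then substituted into A(t). The function below is a minimal illustration of that step, with names of our own choosing; the paper's treatment of sampling distributions and simulation intervals is not reproduced here.

    ```python
    import math

    def availability_mle(failure_times, repair_times, t):
        """Plug-in ML estimate of instantaneous availability A(t) under the
        abstract's exponential time-to-failure (mean lambda) and
        time-to-repair (mean theta) models.

        Returns (A(t), A(infinity)). The MLE of an exponential mean is the
        sample mean of the observations from the n failure-repair cycles.
        """
        lam = sum(failure_times) / len(failure_times)    # estimated mean time to failure
        theta = sum(repair_times) / len(repair_times)    # estimated mean time to repair
        steady = lam / (lam + theta)                     # A(infinity)
        a_t = steady + (theta / (lam + theta)) * math.exp(-((1 / lam) + (1 / theta)) * t)
        return a_t, steady
    ```

    At t = 0 the estimate is 1 (the plant starts operational), and as t grows it decays to the steady-state availability lambda/(lambda + theta), matching the formula in the abstract term by term.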

  17. Higher percent body fat in young women with lower physical activity level and greater proportion Pacific Islander ancestry.

    Science.gov (United States)

    Black, Nate; Nabokov, Vanessa; Vijayadeva, Vinutha; Novotny, Rachel

    2011-11-01

    Samoan women exhibit high rates of obesity, which can possibly be attenuated through diet and physical activity. Obesity, and body fatness in particular, is associated with increased risk for chronic diseases. Ancestry, physical activity, and dietary patterns have been associated with body composition. Using a cross-sectional design, the relative importance of proportion of Pacific Islander (PI) ancestry, level of physical activity, and macronutrient intake was examined among healthy women in Honolulu, Hawai'i, aged 18 to 28 years. All data were collected between January 2003 and December 2004. Percent body fat (%BF) was determined by whole body dual energy x-ray absorptiometry (DXA). Nutrient data were derived from a three-day food record. Means and standard deviations were computed for all variables of interest. Bivariate correlation analysis was used to determine correlates of %BF. Multiple regression analysis was used to determine the relative contribution of variables significantly associated with %BF. Proportion of PI ancestry was significantly positively associated with %BF (P=0.0001). Physical activity level was significantly negatively associated with %BF (P=0.0006). Intervention to increase the physical activity level of young Samoan women may be effective in decreasing body fat and improving health. CRC-NIH grant: 0216.

  18. Carotid bifurcation calcium and correlation with percent stenosis of the internal carotid artery on CT angiography

    International Nuclear Information System (INIS)

    McKinney, Alexander M.; Casey, Sean O.; Teksam, Mehmet; Truwit, Charles L.; Kieffer, Stephen; Lucato, Leandro T.; Smith, Maurice

    2005-01-01

    The aim of this paper was to determine the correlation between calcium burden (expressed as a volume) and the extent of stenosis of the origin of the internal carotid artery (ICA) by CT angiography (CTA). Previous studies have shown that calcification in the coronary arteries correlates with significant vessel stenosis, and that severe calcification (measured by CT) in the carotid siphon correlates with significant stenosis (greater than 50%) as determined angiographically. Sixty-one patients (age range 50-85 years) underwent CT of the neck with intravenous administration of iodinated contrast for a variety of conditions. Images were obtained with a helical multidetector array CT scanner and reviewed on a three-dimensional workstation. A single observer manipulated window and level settings to segment calcified plaque from vascular enhancement, in order to quantify the vascular calcium volume (cc) in the region of the bifurcation of the common carotid artery/ICA origin and to measure the extent of ICA stenosis near the origin. A total of 117 common carotid artery bifurcations were reviewed. A "significant" stenosis was defined arbitrarily as >40% of luminal diameter on CTA (to detect lesions before they become hemodynamically significant), using NASCET-like criteria. All "significant" stenoses (21 of 117 carotid bifurcations) had measurable calcium. We found a relatively strong correlation between percent stenosis and calcium volume (Pearson's r = 0.65, P < 0.0001). We also found an even stronger correlation between the square root of the calcium volume and percent stenosis as measured by CTA (r = 0.77, P < 0.0001). Calcium volumes of 0.01, 0.03, 0.06, 0.09 and 0.12 cc were used as thresholds to evaluate for a "significant" stenosis. A receiver operating characteristic (ROC) curve demonstrated that thresholds of 0.06 cc (sensitivity 88%, specificity 87%) and 0.03 cc (sensitivity 94%, specificity 76%) generated the best combinations of sensitivity and
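    The threshold analysis described in the abstract reduces to counting true and false positives at a candidate calcium-volume cut-off. A minimal sketch of that computation, with hypothetical data and function names of our own choosing (the study's actual patient data are not reproduced here):

    ```python
    def sens_spec(calcium_cc, significant, threshold):
        """Sensitivity and specificity of the rule
        'calcium volume >= threshold implies significant (>40%) stenosis'.

        `calcium_cc` and `significant` are parallel lists: measured calcium
        volumes (cc) and whether each bifurcation had a significant stenosis.
        """
        tp = sum(c >= threshold and s for c, s in zip(calcium_cc, significant))
        fn = sum(c < threshold and s for c, s in zip(calcium_cc, significant))
        tn = sum(c < threshold and not s for c, s in zip(calcium_cc, significant))
        fp = sum(c >= threshold and not s for c, s in zip(calcium_cc, significant))
        return tp / (tp + fn), tn / (tn + fp)
    ```

    Sweeping `threshold` over candidate values and plotting sensitivity against (1 - specificity) yields the ROC curve from which the abstract's 0.03 cc and 0.06 cc operating points were chosen.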

  19. [Why is bread consumption decreasing?].

    Science.gov (United States)

    Rolland, M F; Chabert, C; Serville, Y

    1977-01-01

    In France, bread plays a very special and ambivalent role among foodstuffs because of the considerable drop in its consumption, its alleged harmful effects on health, and the respect in which it is traditionally held. More than half of the 1089 adults interviewed in this study say they have decreased their consumption of bread in the last 10 years. The reasons given vary according to age, body weight and urbanization level. The main reasons given for this restriction are the desire to prevent or reduce obesity, the decrease in physical activity, the general reduction in food consumption, and the possibility of diversifying foods even further. Moreover, the decreasing appeal of bread relative to other foods, as well as a modification in the structure of meals, in which bread becomes less useful as an accompaniment to other food, accentuates this loss of attraction. However, respect for bread as part of the staple diet remains very strong, as 95 percent of those interviewed express a reluctance to throw bread away, more for cultural than economic reasons. Mechanization and urbanization having brought about a decrease in energy needs, the most common dietary adaptation is general caloric restriction, in which carbohydrates, and especially bread, are curtailed.

  20. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral (PI) controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
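    The maximum-current-search idea can be caricatured as a hill climb on the converter duty factor: perturb the duty cycle and keep moving in the direction that raises the measured SPE-side current. The sketch below is our simplified stand-in for the paper's PI/PWM controller, not its implementation; `current_of_duty` plays the role of the measured current as a function of duty factor.

    ```python
    def track_maximum_current(current_of_duty, duty=0.5, step=0.01, iters=200):
        """Perturb-and-observe sketch of maximum-current-search MPPT.

        Repeatedly nudge the duty factor by `step`; if the measured current
        drops, reverse the perturbation direction. The duty factor ends up
        oscillating around the value that maximizes the output current.
        """
        direction = 1
        last = current_of_duty(duty)
        for _ in range(iters):
            duty = min(max(duty + direction * step, 0.0), 1.0)  # clamp to [0, 1]
            now = current_of_duty(duty)
            if now < last:            # overshot the peak: reverse direction
                direction = -direction
            last = now
        return duty
    ```

    With a toy unimodal current curve peaking at a duty factor of 0.7, the climb settles into a small oscillation around 0.7, which is the qualitative behavior a real PI-controlled tracker smooths out.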

  1. A fuzzy neural network model to forecast the percent cloud coverage and cloud top temperature maps

    Directory of Open Access Journals (Sweden)

    Y. Tulunay

    2008-12-01

    Full Text Available Atmospheric processes are highly nonlinear. A small group at the METU in Ankara has been working on a fuzzy data driven generic model of nonlinear processes. The model developed is called the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M). The METU-FNN-M consists of a Fuzzy Inference System (METU-FIS), a data driven Neural Network module (METU-FNN) of one hidden layer and several neurons, and a mapping module, which employs the Bezier Surface Mapping technique. In this paper, the percent cloud coverage (%CC) and cloud top temperatures (CTT) are forecast one month ahead of time at 96 grid locations. The probable influence of cosmic rays and sunspot numbers on cloudiness is considered by using the METU-FNN-M.

  2. Phased Acoustic Array Measurements of a 5.75 Percent Hybrid Wing Body Aircraft

    Science.gov (United States)

    Burnside, Nathan J.; Horne, William C.; Elmer, Kevin R.; Cheng, Rui; Brusniak, Leon

    2016-01-01

    Detailed acoustic measurements of the noise from the leading-edge Krueger flap of a 5.75 percent Hybrid Wing Body (HWB) aircraft model were recently acquired with a traversing phased microphone array in the AEDC NFAC (Arnold Engineering Development Complex, National Full Scale Aerodynamics Complex) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The spatial resolution of the array was sufficient to distinguish between individual support brackets over the full-scale frequency range of 100 to 2875 Hertz. For conditions representative of landing and take-off configurations, the noise from the brackets dominated other sources near the leading edge. Inclusion of flight-like brackets for select conditions highlights the importance of including the correct number of leading-edge high-lift device brackets with sufficient scale and fidelity. These measurements will support the development of new predictive models.

  3. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel

    2016-11-01

    The fine vertical distribution of phytoplankton groups within the deep chlorophyll maximum (DCM) was studied in the NE Atlantic during summer stratification. A simple but unconventional sampling strategy allowed examining the vertical structure with ca. 2 m resolution. The distribution of Prochlorococcus, Synechococcus, chlorophytes, pelagophytes, small prymnesiophytes, coccolithophores, diatoms, and dinoflagellates was investigated with a combination of pigment-markers, flow cytometry and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer. The more symmetrical distribution of chlorophyll than cells around the DCM peak was due to the increase of pigment per cell with depth. We found a vertical alignment of phytoplankton groups within the DCM layer indicating preferences for different ecological niches in a layer with strong gradients of light and nutrients. Prochlorococcus occupied the shallowest and diatoms the deepest layers. Dinoflagellates, Synechococcus and small prymnesiophytes preferred shallow DCM layers, and coccolithophores, chlorophytes and pelagophytes showed a preference for deep layers. Cell size within groups changed with depth in a pattern related to their mean size: the cell volume of the smallest group increased the most with depth while the cell volume of the largest group decreased the most. The vertical alignment of phytoplankton groups confirms that the DCM is not a homogeneous entity and indicates groups’ preferences for different ecological niches within this layer.

  4. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.

  5. Estimation of the volume and percent uptake of the liver and spleen by SPECT

    International Nuclear Information System (INIS)

    Yamagata, Atushi

    1988-01-01

    The volume and percent uptake of the liver and spleen were estimated with single photon emission computed tomography (SPECT) using 99mTc-phytate. The clinical usefulness of these parameters was evaluated by comparison with other liver function tests in 87 patients, including 25 normal controls, 24 with liver cirrhosis and 16 with other chronic liver diseases. SPECT images were obtained with a Maxi Camera 400T. The cut-off level for reconstruction of images and the relationship between counts and activity (mCi) were obtained from phantom studies. Volumes estimated using SPECT and computed tomography were compared in 16 patients. The results obtained were as follows. 1) The optimal cut-off level for volume measurement was 37% for the liver and 42% for the spleen. 2) Correlation between organ volumes estimated with CT and SPECT was good (r = 0.92 for the liver and r = 0.96 for the spleen), although volumes measured with SPECT were larger than those with CT. 3) Significant differences in percent uptake were observed between normal controls and liver cirrhosis. 4) Better correlation between spleen volume and uptake was recognized in cases without liver cirrhosis than in cases with liver cirrhosis; spleen uptake in liver cirrhosis was higher relative to volume than in the others. 5) The liver/spleen ratio of 99mTc-phytate uptake could most clearly differentiate liver cirrhosis from the other groups. 6) A negative correlation was observed between liver volume or uptake and ICG (R15). Estimation of the volume and uptake of the liver and spleen can be a useful procedure to assess liver function, probably related to effective hepatic blood flow in liver cirrhosis. (author)

  6. Severe geomagnetic storms and Forbush decreases: interplanetary relationships reexamined

    Directory of Open Access Journals (Sweden)

    R. P. Kane

    2010-02-01

    Full Text Available Severe storms (Dst) and Forbush decreases (FD) during cycle 23 showed that maximum negative Dst magnitudes usually occurred almost simultaneously with the maximum negative values of the Bz component of the interplanetary magnetic field B, but the maximum magnitudes of negative Dst and Bz were poorly correlated (+0.28). A parameter Bz(CP) was calculated (cumulative partial Bz) as the sum of the hourly negative values of Bz from the time of start to the maximum negative value. The correlation of the negative Dst maximum with Bz(CP) was higher (+0.59) than that of Dst with Bz alone (+0.28). When the product of Bz with the solar wind speed V (at the hour of the negative Bz maximum) was considered, the correlation of the negative Dst maximum with VBz was +0.59, and with VBz(CP), +0.71. Thus, including V improved the correlations. However, ground-based Dst values have a considerable contribution from magnetopause currents (several tens of nT, even exceeding 100 nT in very severe storms). When their contribution is subtracted from Dst (nT), the residue Dst*, representing the true ring current effect, is much better correlated with Bz and Bz(CP), but not with VBz or VBz(CP), indicating that these are unimportant parameters and that the effect of V is seen only through the solar wind ram pressure causing magnetopause currents. Maximum negative Dst (or Dst*) did not occur at the same hour as the maximum FD. The time evolutions of Dst and FD were very different. The correlations were almost zero. Basically, negative Dst (or Dst*) and FDs are uncorrelated, indicating altogether different mechanisms.

  7. The decrease in yield strength in NiAl due to hydrostatic pressure

    Science.gov (United States)

    Margevicius, R. W.; Lewandowski, J. J.; Locci, I.

    1992-01-01

    The decrease in yield strength in NiAl due to hydrostatic pressure is examined via a comparison of the tensile flow behavior in the low strain regime at 0.1 MPa for NiAl which was cast, extruded, annealed for 2 hr at 827 C in argon, and very slowly cooled to room temperature. Pressurization to 1.4 GPa produces a subsequent 40-percent reduction in the proportional limit at 0.1 MPa, as well as a 25-percent reduction in the 0.2-percent offset yield strength; pressurization at lower pressures produces similar reductions, although smaller in magnitude.

  8. Personal Best Time, Percent Body Fat, and Training Are Differently Associated with Race Time for Male and Female Ironman Triathletes

    Science.gov (United States)

    Knechtle, Beat; Wirth, Andrea; Baumann, Barbara; Knechtle, Patrizia; Rosemann, Thomas

    2010-01-01

    We studied male and female nonprofessional Ironman triathletes to determine whether percent body fat, training, and/or previous race experience were associated with race performance. We used simple linear regression analysis, with total race time as the dependent variable, to investigate the relationship among athletes' percent body fat, average…

  9. 26 CFR 1.46-9 - Requirements for taxpayers electing an extra one-half percent additional investment credit.

    Science.gov (United States)

    2010-04-01

    ... percent additional investment credit for property described in section 46(a)(2)(D). Paragraph (c) of this...-half percent additional investment credit. 1.46-9 Section 1.46-9 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY INCOME TAX INCOME TAXES Rules for Computing Credit for Investment in...

  10. The Use of Gas Chromatography and Mass Spectrometry to Introduce General Chemistry Students to Percent Mass and Atomic Mass Calculations

    Science.gov (United States)

    Pfennig, Brian W.; Schaefer, Amy K.

    2011-01-01

    A general chemistry laboratory experiment is described that introduces students to instrumental analysis using gas chromatography-mass spectrometry (GC-MS), while simultaneously reinforcing the concepts of mass percent and the calculation of atomic mass. Working in small groups, students use the GC to separate and quantify the percent composition…
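
    The mass-percent bookkeeping that the experiment reinforces can be expressed in a few lines. The atomic masses and the ethanol example below are illustrative stand-ins, not taken from the experiment itself:

```python
# Mass percent of each element in a compound, illustrated with
# ethanol (C2H5OH, i.e. C2 H6 O1). Atomic masses are standard
# values rounded to three decimals.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def mass_percent(formula_counts):
    """formula_counts maps element symbol -> number of atoms."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / total
            for el, n in formula_counts.items()}

ethanol = {"C": 2, "H": 6, "O": 1}
# prints C: 52.1%, H: 13.1%, O: 34.7%
for el, pct in mass_percent(ethanol).items():
    print(f"{el}: {pct:.1f}%")
```

    The same arithmetic, run in reverse on the measured percent composition from the GC-MS, is what lets students back out an unknown's empirical formula.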

  11. 40 CFR 63.5885 - How do I calculate percent reduction to demonstrate compliance for continuous lamination/casting...

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true How do I calculate percent reduction to... Pollutants: Reinforced Plastic Composites Production Testing and Initial Compliance Requirements § 63.5885 How do I calculate percent reduction to demonstrate compliance for continuous lamination/casting...

  12. SERIAL PERCENT-FREE PSA IN COMBINATION WITH PSA FOR POPULATION-BASED EARLY DETECTION OF PROSTATE CANCER

    Science.gov (United States)

    Ankerst, Donna Pauler; Gelfond, Jonathan; Goros, Martin; Herrera, Jesus; Strobl, Andreas; Thompson, Ian M.; Hernandez, Javier; Leach, Robin J.

    2016-01-01

    PURPOSE To characterize the diagnostic properties of serial percent-free prostate-specific antigen (PSA) in relation to PSA in a multi-ethnic, multi-racial cohort of healthy men. MATERIALS AND METHODS 6,982 percent-free PSA and PSA measures were obtained from participants in a 12 year+ Texas screening study comprising 1625 men who never underwent biopsy, 497 who underwent one or more biopsies negative for prostate cancer, and 61 diagnosed with prostate cancer. The area underneath the receiver-operating-characteristic curve (AUC) for percent-free PSA and the proportion of patients with fluctuating values across multiple visits were evaluated according to two thresholds (under 15% versus under 25%). The proportion of cancer cases where percent-free PSA indicated a positive test before PSA > 4 ng/mL did, and the number of negative biopsies that would have been spared by percent-free PSA testing negative, were computed. RESULTS Percent-free PSA fluctuated around its threshold across visits. It tested positive earlier than PSA in 71.4% (34.2%) of cancer cases, and among men with multiple negative biopsies and a PSA > 4 ng/mL, percent-free PSA would have tested negative in 31.6% (65.8%) of instances. CONCLUSIONS Percent-free PSA should accompany PSA testing in order to potentially spare unnecessary biopsies or detect cancer earlier. When near the threshold, both tests should be repeated due to commonly observed fluctuation. PMID:26979652

  13. 40 CFR 62.14815 - What are the emission limitations for air curtain incinerators that burn 100 percent wood wastes...

    Science.gov (United States)

    2010-07-01

    ... air curtain incinerators that burn 100 percent wood wastes, clean lumber and/or yard waste? 62.14815... Requirements for Commercial and Industrial Solid Waste Incineration Units That Commenced Construction On or Before November 30, 1999 Air Curtain Incinerators That Burn 100 Percent Wood Wastes, Clean Lumber And/or...

  14. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  15. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the daily maximum estimated mixing depth at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.

  16. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.

  17. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. The algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the proposed algorithm gives acceptable results for hyperspectral data clustering.
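
    The alternating scheme (labels and a separating function updated in turn) can be sketched in miniature. This is a two-class, one-dimensional illustration with a least-squares fit standing in for the SVM step, not the paper's algorithm:

```python
# Toy maximum margin clustering by alternating optimization:
# alternately fit a linear decision function to the current +/-1
# labels (least-squares stand-in for the SVM step), then relabel
# each point by the sign of the decision function, until stable.

def alternate_mmc(xs, iterations=20):
    # start from an arbitrary split: first half +1, second half -1
    labels = [1 if i < len(xs) // 2 else -1 for i in range(len(xs))]
    for _ in range(iterations):
        n = len(xs)
        mx = sum(xs) / n
        my = sum(labels) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, labels))
        var = sum((x - mx) ** 2 for x in xs)
        w = cov / var               # least-squares slope
        b = my - w * mx             # least-squares intercept
        new_labels = [1 if w * x + b >= 0 else -1 for x in xs]
        if new_labels == labels:    # converged: labels are stable
            break
        labels = new_labels
    return labels

data = [0.1, 0.3, 0.2, 5.0, 5.2, 4.9]
print(alternate_mmc(data))  # → [1, 1, 1, -1, -1, -1]
```

    Like the real MMC objective, this toy problem is non-convex: the result can depend on the initial labeling, which is why the relaxations and optimization strategies discussed in the abstract matter.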

  18. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  19. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.

  20. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the arrival time difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. This method has been proved in experiments, providing much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of the significant frequencies.
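
    The underlying arrival-time-difference idea can be illustrated with a plain cross-correlation peak search (without the maximum likelihood window itself, which additionally requires the signal and noise spectra); the synthetic pulse below is an illustrative stand-in for a leak signal:

```python
# Estimate the delay between two signals as the lag that maximizes
# their cross-correlation. The signals here are a synthetic pulse and
# a copy delayed by 7 samples; real leak signals are noisy, which is
# where frequency-domain windowing (e.g. maximum likelihood) helps.

def cross_correlation_delay(x, y, max_lag):
    """Return the lag (in samples) at which y best matches x."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(x[i] * y[i + lag]
                  for i in range(len(x))
                  if 0 <= i + lag < len(y))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

pulse = [0.0] * 32
for i, v in enumerate([1.0, 2.0, 3.0, 2.0, 1.0]):
    pulse[10 + i] = v
delayed = [0.0] * 32
for i, v in enumerate([1.0, 2.0, 3.0, 2.0, 1.0]):
    delayed[17 + i] = v          # same pulse, 7 samples later

print(cross_correlation_delay(pulse, delayed, max_lag=15))  # → 7
```

    Multiplying the estimated delay by the elastic wave speed and combining it with the sensor spacing gives the leak position along the pipe.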

  1. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
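
    Ancestral state inference under MP on a fully bifurcating tree can be illustrated with the Fitch algorithm, a standard MP routine (shown here as background, not the manuscript's proof machinery):

```python
# Fitch's algorithm on a fully bifurcating tree: the bottom-up pass
# assigns each internal node the intersection of its children's state
# sets if that intersection is non-empty, otherwise their union. The
# root set holds the MP estimates for the last common ancestor's
# state at this site; a singleton set means the estimate is
# unambiguous.

def fitch(tree):
    """tree is either a leaf state like 'a' or a pair (left, right)."""
    if isinstance(tree, str):
        return {tree}
    left, right = fitch(tree[0]), fitch(tree[1])
    return left & right if left & right else left | right

# Four species with states a, a, a, c at one site: three copies of
# 'a' suffice here for MP to return 'a' unambiguously at the root.
tree = (("a", "a"), ("a", "c"))
print(sorted(fitch(tree)))  # → ['a']
```

    The Charleston-Steel conjecture discussed above concerns exactly this question: how many leaves must carry state a before the root set collapses to {a} regardless of the tree shape.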

  2. Magical thinking decreases across adulthood.

    Science.gov (United States)

    Brashier, Nadia M; Multhaup, Kristi S

    2017-12-01

    Magical thinking, or illogical causal reasoning such as superstitions, decreases across childhood, but almost no data speak to whether this developmental trajectory continues across the life span. In four experiments, magical thinking decreased across adulthood. This pattern replicated across two judgment domains and could not be explained by age-related differences in tolerance of ambiguity, domain-specific knowledge, or search for meaning. These data complement and extend findings that experience, accumulated over decades, guides older adults' judgments so that they match, or even exceed, young adults' performance. They also counter participants' expectations, and cultural sayings (e.g., "old wives' tales"), that suggest that older adults are especially superstitious. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Does fertility decrease household consumption?

    OpenAIRE

    Jungho Kim; Henriette Engelhardt; Alexia Fürnkranz-Prskawetz; Arnstein Aassve

    2009-01-01

    This paper presents an empirical analysis of the relationship between fertility and a direct measure of poverty for Indonesia, a country that has experienced unprecedented economic growth and sharp fertility declines over recent decades. It focuses on illustrating the sensitivity of the effect of fertility on household consumption with respect to the equivalence scale by applying the propensity score matching method. The analysis suggests that a newborn child decreases household consumption...

  4. A discussion about maximum uranium concentration in digestion solution of U3O8 type uranium ore concentrate

    International Nuclear Information System (INIS)

    Xia Dechang; Liu Chao

    2012-01-01

    On the basis of a single-factor analysis of the maximum uranium concentration in the digestion solution, the influence of factors such as the U content, the H2O content, and the P/U mass ratio was compared and analyzed. The results indicate that the maximum uranium concentration in the digestion solution is directly proportional to the U content: when the U content increases by 1%, the maximum uranium concentration increases by 4.8%-5.7%. It is inversely related to the H2O content: the maximum uranium concentration decreases by 46.1-55.2 g/L when the H2O content increases by 1%. It is likewise inversely related to the P/U mass ratio: the maximum uranium concentration decreases by 116.0-181.0 g/L when the P/U mass ratio increases by 0.1%. When the U content equals 62.5% and the influence of the P/U mass ratio is not considered, the maximum uranium concentration in the digestion solution equals 1 578 g/L; when the P/U mass ratio equals 0.35%, the maximum uranium concentration decreases to 716 g/L, a decrease of 54.6%. The P/U mass ratio in U3O8 type uranium ore concentrate is therefore the main controlling factor. (authors)
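
    The 54.6% figure follows directly from the two concentrations quoted in the abstract; a one-line percent-decrease check:

```python
# Percent decrease from 1578 g/L (P/U mass ratio neglected) to
# 716 g/L (P/U mass ratio 0.35%), matching the 54.6% quoted above.

def percent_decrease(before, after):
    return 100.0 * (before - after) / before

print(round(percent_decrease(1578.0, 716.0), 1))  # → 54.6
```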

  5. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  6. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  7. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  8. Percent wall thickness evaluated by Gd-DTPA enhanced cine MRI as an indicator of local parietal movement in hypertrophic cardiomyopathy

    International Nuclear Information System (INIS)

    Hirano, Masaharu

    1998-01-01

    Hypertrophic cardiomyopathy (HCM) is a cardiac disease, the basic pathology of which consists of a decrease in left ventricular dilation compliance due to uneven hypertrophy of the left ventricular wall. Magnetic resonance imaging (MRI) is useful in monitoring uneven parietal hypertrophy and kinetics in HCM patients. The present study was undertaken in 47 HCM patients who showed asymmetrical septal hypertrophy to determine if percent thickness can be an indicator of left ventricular local movement using cine MRI. Longest and shortest axis images were acquired by the ECG synchronization method using a 1.5 T MR imager. Cardiac function was analyzed based on longest axis cine images, and telediastolic and telesystolic parietal thickness were measured based on shorter axis cine images at the papillary muscle level. Parietal movement index and percent thickness were used as indicators of local parietal movement. The correlation between these indicators and parietal thickness was evaluated. The percent thickness changed at an earlier stage of hypertrophy than the parietal movement index, thus it is thought to be useful in detecting left ventricular parietal movement disorders at an early stage of HCM. (author)

  9. Percent wall thickness evaluated by Gd-DTPA enhanced cine MRI as an indicator of local parietal movement in hypertrophic cardiomyopathy

    Energy Technology Data Exchange (ETDEWEB)

    Hirano, Masaharu [Tokyo Medical Coll. (Japan)

    1998-11-01

    Hypertrophic cardiomyopathy (HCM) is a cardiac disease whose basic pathology is a decrease in left ventricular diastolic compliance due to uneven hypertrophy of the left ventricular wall. Magnetic resonance imaging (MRI) is useful for monitoring uneven wall hypertrophy and wall motion in HCM patients. The present study was undertaken in 47 HCM patients with asymmetrical septal hypertrophy to determine whether percent thickness can serve as an indicator of left ventricular regional wall motion using cine MRI. Long-axis and short-axis images were acquired with ECG gating on a 1.5 T MR imager. Cardiac function was analyzed from the long-axis cine images, and end-diastolic and end-systolic wall thickness were measured on the short-axis cine images at the papillary muscle level. A wall motion index and percent thickness were used as indicators of regional wall motion, and the correlation between these indicators and wall thickness was evaluated. Percent thickness changed at an earlier stage of hypertrophy than the wall motion index, and is therefore thought to be useful for detecting left ventricular regional wall motion disorders at an early stage of HCM. (author)
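    The percent-thickness measure used in studies like this one is conventionally computed as systolic wall thickening relative to the end-diastolic thickness; a minimal sketch (the formula is the standard one, and the sample values are illustrative assumptions, not measurements from the study):

```python
def percent_wall_thickness(ed_mm: float, es_mm: float) -> float:
    """Percent systolic wall thickening: the end-systolic minus end-diastolic
    thickness, expressed as a percentage of the end-diastolic thickness."""
    return 100.0 * (es_mm - ed_mm) / ed_mm

# Hypothetical short-axis measurements at the papillary muscle level (mm).
print(percent_wall_thickness(ed_mm=10.0, es_mm=14.0))  # 40.0
```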

  10. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  11. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  12. Changes in atmospheric circulation between solar maximum and minimum conditions in winter and summer

    Science.gov (United States)

    Lee, Jae Nyung

    2008-10-01

    variability over the Asian monsoon region. The corresponding EOF in ModelE has a qualitatively similar structure but with less variability in the Asian monsoon region which is displaced eastward of its observed position. In both the NCEP/NCAR reanalysis and the GISS GCM, the negative anomalies associated with the NAM in the Euro-Atlantic and Aleutian island regions are enhanced in the solar minimum conditions, though the results are not statistically significant. The difference of the downward propagation of NAM between solar maximum and solar minimum is shown with the NCEP/NCAR reanalysis. For the winter NAM, a much greater fraction of stratospheric circulation perturbations penetrate to the surface in solar maximum conditions than in minimum conditions. This difference is more striking when the zonal wind direction in the tropics is from the west: when equatorial 50 hPa winds are from the west, no stratospheric signals reach the surface under solar minimum conditions, while over 50 percent reach the surface under solar maximum conditions. This work also studies the response of the tropical circulation to the solar forcing in combination with different atmospheric compositions and with different ocean modules. Four model experiments have been designed to investigate the role of solar forcing in the tropical circulation: one with the present day (PD) greenhouse gases and aerosol conditions, one with the preindustrial (PI) conditions, one with the doubled minimum solar forcing, and finally one with the hybrid-isopycnic ocean model (HYCOM). The response patterns in the tropical humidity and in the vertical motion due to solar forcing are season dependent and spatially heterogeneous. The tropical humidity response from the model experiments are compared with the corresponding differences obtained from the NCEP/NCAR reanalysis with all years and with non-ENSO years. 
Both the model and the reanalysis consistently show that the specific humidity is significantly greater in the

  13. Decreasing Fires in Mediterranean Europe.

    Directory of Open Access Journals (Sweden)

    Marco Turco

    Forest fires are a serious environmental hazard in southern Europe. Quantitative assessment of recent trends in fire statistics is important for assessing the possible shifts induced by climate and other environmental/socioeconomic changes in this area. Here we analyse recent fire trends in Portugal, Spain, southern France, Italy and Greece, building on a homogenized fire database integrating official fire statistics provided by several national/EU agencies. During the period 1985-2011, the total annual burned area (BA) displayed a general decreasing trend, with the exception of Portugal, where a heterogeneous signal was found. Considering all countries globally, we found that BA decreased by about 3020 km² over the 27-year-long study period (i.e. about -66% of the mean historical value). These results are consistent with those obtained on longer time scales when data were available, also yielding predominantly negative trends in Spain and France (1974-2011) and a mixed trend in Portugal (1980-2011). Similar overall results were found for the annual number of fires (NF), which globally decreased by about 12600 in the study period (about -59%), except for Spain where, excluding the provinces along the Mediterranean coast, an upward trend was found for the longer period. We argue that the negative trends can be explained, at least in part, by an increased effort in fire management and prevention after the big fires of the 1980s, while positive trends may be related to recent socioeconomic transformations leading to more hazardous landscape configurations, as well as to the observed warming of recent decades. We stress the importance of fire data homogenization prior to analysis, in order to alleviate spurious effects associated with non-stationarities in the data due to temporal variations in fire detection efforts.

  14. Decreasing Fires in Mediterranean Europe.

    Science.gov (United States)

    Turco, Marco; Bedia, Joaquín; Di Liberto, Fabrizio; Fiorucci, Paolo; von Hardenberg, Jost; Koutsias, Nikos; Llasat, Maria-Carmen; Xystrakis, Fotios; Provenzale, Antonello

    2016-01-01

    Forest fires are a serious environmental hazard in southern Europe. Quantitative assessment of recent trends in fire statistics is important for assessing the possible shifts induced by climate and other environmental/socioeconomic changes in this area. Here we analyse recent fire trends in Portugal, Spain, southern France, Italy and Greece, building on a homogenized fire database integrating official fire statistics provided by several national/EU agencies. During the period 1985-2011, the total annual burned area (BA) displayed a general decreasing trend, with the exception of Portugal, where a heterogeneous signal was found. Considering all countries globally, we found that BA decreased by about 3020 km² over the 27-year-long study period (i.e. about -66% of the mean historical value). These results are consistent with those obtained on longer time scales when data were available, also yielding predominantly negative trends in Spain and France (1974-2011) and a mixed trend in Portugal (1980-2011). Similar overall results were found for the annual number of fires (NF), which globally decreased by about 12600 in the study period (about -59%), except for Spain where, excluding the provinces along the Mediterranean coast, an upward trend was found for the longer period. We argue that the negative trends can be explained, at least in part, by an increased effort in fire management and prevention after the big fires of the 1980s, while positive trends may be related to recent socioeconomic transformations leading to more hazardous landscape configurations, as well as to the observed warming of recent decades. We stress the importance of fire data homogenization prior to analysis, in order to alleviate spurious effects associated with non-stationarities in the data due to temporal variations in fire detection efforts.

  15. Immigrants in the one percent: The national origin of top wealth owners.

    Science.gov (United States)

    Keister, Lisa A; Aronson, Brian

    2017-01-01

    Economic inequality in the United States is extreme, but little is known about the national origin of affluent households. Households in the top one percent by total wealth own vastly disproportionate quantities of household assets and have correspondingly high levels of economic, social, and political influence. The overrepresentation of white natives (i.e., those born in the U.S.) among high-wealth households is well-documented, but changing migration dynamics suggest that a growing portion of top households may be immigrants. Because no single survey dataset contains top wealth holders and data about country of origin, this paper uses two publicly-available data sets: the Survey of Consumer Finances (SCF) and the Survey of Income and Program Participation (SIPP). Multiple imputation is used to impute country of birth from the SIPP into the SCF. Descriptive statistics are used to demonstrate reliability of the method, to estimate the prevalence of immigrants among top wealth holders, and to document patterns of asset ownership among affluent immigrants. Significant numbers of top wealth holders who are usually classified as white natives may be immigrants. Many top wealth holders appear to be European and Canadian immigrants, and increasing numbers of top wealth holders are likely from Asia and Latin America as well. Results suggest that of those in the top one percent of wealth holders, approximately 3% are European and Canadian immigrants, 0.5% are from Mexico or Cuba, and 1.7% are from Asia (especially Hong Kong, Taiwan, Mainland China, and India). Ownership of key assets varies considerably across affluent immigrant groups. Although the percentage of top wealth holders who are immigrants is relatively small, these percentages represent large numbers of households with considerable resources and corresponding social and political influence. 
Evidence that the propensity to allocate wealth to real and financial assets varies across immigrant groups suggests that

  16. Technologies for Decreasing Mining Losses

    Science.gov (United States)

    Valgma, Ingo; Väizene, Vivika; Kolats, Margit; Saarnak, Martin

    2013-12-01

    In case of stratified deposits like oil shale deposit in Estonia, mining losses depend on mining technologies. Current research focuses on extraction and separation possibilities of mineral resources. Selective mining, selective crushing and separation tests have been performed, showing possibilities of decreasing mining losses. Rock crushing and screening process simulations were used for optimizing rock fractions. In addition mine backfilling, fine separation, and optimized drilling and blasting have been analyzed. All tested methods show potential and depend on mineral usage. Usage in addition depends on the utilization technology. The questions like stability of the material flow and influences of the quality fluctuations to the final yield are raised.

  17. Maximum total organic carbon limits at different DWPF melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1996-01-01

    This document presents the maximum total organic carbon (TOC) limits that are allowable in the DWPF melter feed without forming a potentially flammable vapor in the off-gas system; these limits were determined at feed rates varying from 0.7 to 1.5 GPM. At the maximum TOC levels predicted, the peak concentration of combustible gases in the quenched off-gas will not exceed 60 percent of the lower flammable limit during a 3X off-gas surge, provided that the indicated melter vapor space temperature and the total air supply to the melter are maintained. All the necessary calculations for this study were made using the 4-stage cold cap model and the melter off-gas dynamics model. A high degree of conservatism was included in the calculational bases and assumptions. As a result, the proposed correlations are believed to be conservative enough to be used for melter off-gas flammability control purposes.

  18. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
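    For jointly normal, correlated estimates of a single eigenvalue, the maximum likelihood estimate described above reduces to a generalized least squares weighted mean whose weights come from the inverse covariance matrix; a minimal sketch (the eigenvalue estimates and covariance entries are illustrative assumptions, not values from the benchmark calculations):

```python
import numpy as np

def ml_combine(estimates: np.ndarray, cov: np.ndarray) -> tuple[float, float]:
    """Maximum-likelihood (generalized least squares) combination of correlated
    estimates of one quantity under a multivariate normal model."""
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)   # inv(cov) @ 1: unnormalized ML weights
    var = 1.0 / (ones @ w)           # variance of the combined estimate
    mean = var * (w @ estimates)
    return mean, var

# Two correlated Monte Carlo eigenvalue estimates (hypothetical values).
k = np.array([1.0012, 0.9985])
cov = np.array([[4.0e-6, 1.0e-6],
                [1.0e-6, 9.0e-6]])
mean, var = ml_combine(k, cov)
```

    With these numbers the combined variance is smaller than that of either individual estimate, mirroring the variance reductions reported above.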

  19. EFFECT OF CAFFEINE ON OXIDATIVE STRESS DURING MAXIMUM INCREMENTAL EXERCISE

    Directory of Open Access Journals (Sweden)

    Guillermo J. Olcina

    2006-12-01

    Caffeine (1,3,7-trimethylxanthine) is a habitual substance present in a wide variety of beverages and chocolate-based foods, and is also used as an adjuvant in some drugs. The antioxidant ability of caffeine has been reported, in contrast with the pro-oxidant effects derived from its mechanism of action, such as the systemic release of catecholamines. The aim of this work was to evaluate the effect of caffeine on exercise-induced oxidative stress, measuring plasma vitamins A, E and C and malondialdehyde (MDA) as markers of non-enzymatic antioxidant status and lipid peroxidation, respectively. Twenty young males participated in a double-blind (caffeine 5 mg·kg⁻¹ body weight or placebo) cycling test until exhaustion. When caffeine was ingested prior to the test, exercise time to exhaustion, maximum heart rate, and oxygen uptake significantly increased, whereas the respiratory exchange ratio (RER) decreased. Vitamins A and E decreased with exercise, and vitamin C and MDA increased after both the caffeine and placebo tests, but for these particular variables there were no significant differences between the two test conditions. The results support the conclusion that this dose of caffeine enhances the ergospirometric response to cycling and has no effect on lipid peroxidation or on the antioxidant vitamins A, E and C.

  20. Decreasing incidence rates of bacteremia

    DEFF Research Database (Denmark)

    Nielsen, Stig Lønberg; Pedersen, C; Jensen, T G

    2014-01-01

    BACKGROUND: Numerous studies have shown that the incidence rate of bacteremia has been increasing over time. However, few studies have distinguished between community-acquired, healthcare-associated and nosocomial bacteremia. METHODS: We conducted a population-based study among adults with first-time bacteremia in Funen County, Denmark, during 2000-2008 (N = 7786). We reported mean and annual incidence rates (per 100,000 person-years), overall and by place of acquisition. Trends were estimated using a Poisson regression model. RESULTS: The overall incidence rate was 215.7, including 99.0 for community-acquired, 50.0 for healthcare-associated and 66.7 for nosocomial bacteremia. During 2000-2008, the overall incidence rate decreased by 23.3% from 254.1 to 198.8 (3.3% annually, p incidence rate of community-acquired bacteremia decreased by 25.6% from 119.0 to 93.8 (3.7% annually, p
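    Incidence-rate trends of the kind reported above are typically estimated with a log-linear Poisson model using person-years as an offset, so that exp(slope) is the annual incidence-rate ratio; a minimal sketch fitted by Newton-Raphson (the rates and person-year figures are illustrative assumptions, not the study's data):

```python
import numpy as np

def poisson_trend(years, counts, person_years, iters=50):
    """Fit log(rate) = a + b*year by Poisson maximum likelihood
    (Newton-Raphson with the canonical log link); exp(b) is the
    annual incidence-rate ratio."""
    X = np.column_stack([np.ones_like(years, dtype=float), years])
    offset = np.log(person_years)
    beta = np.zeros(2)
    for _ in range(iters):
        mu = np.exp(X @ beta + offset)       # expected counts
        score = X.T @ (counts - mu)          # gradient of log-likelihood
        hessian = X.T @ (X * mu[:, None])    # Fisher information
        beta += np.linalg.solve(hessian, score)
    return beta

# Hypothetical data: rate falling about 3.3% per year, constant person-years.
years = np.arange(9, dtype=float)
py = np.full(9, 465_000.0)
rate = 254.1e-5 * np.exp(-0.033 * years)   # incidence per person-year
counts = rate * py                         # noise-free expected counts
a, b = poisson_trend(years, counts, py)
print(np.exp(b))  # annual rate ratio, about 0.9675
```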

  1. Price of forest chips decreasing

    International Nuclear Information System (INIS)

    Hakkila, P.

    2001-01-01

    Use of forest chips was studied in 1999 in the national Puuenergia (Wood Energy) research program. Wood-combusting heating plants were asked about the main reasons restricting increased use of forest chips. Heating plants that did not use forest chips at all, or used less than 250 m³ (625 bulk m³) in 1999, were excluded. The main restrictions on additional use of forest chips were: the too-high price of forest chips; lack of suppliers and/or uncertainty of deliveries; technical problems in the reception and processing of forest chips; insufficient boiler output, especially in winter; and unsatisfactory chip quality. The price of forest chips becomes relatively high because the wood biomass used for their production has to be collected from a wide area. Heavy equipment has to be used even though small fragments of wood are processed, which increases the price of the chips. It is essential that the costs of forest chips be kept down, because competition with fossil fuels, peat and industrial wood residues is hard. A low market price leads to a situation in which the forest owner receives nothing for the raw material, the entrepreneurs operate at the limit of profitability and renewal of machinery is difficult, and forest chip suppliers have to sell the chips at prime cost. The price of forest chips has decreased significantly during the past decade. The nominal price of forest chips is now lower than two decades ago. The real price of chips has decreased even more than the nominal price: 35% during the past decade and 20% during the last five years. Chips made of small-diameter wood are expensive because the price includes felling costs and harvesting is carried out at thinning lots. The price is especially high if chips are made of delimbed small-diameter wood, due to the increased work and reduced amount of chips. The price of logging-residue chips is the most favourable because cutting does not cause additional costs. Recovery of chips is

  2. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  3. Role of percent tissue altered on ectasia after LASIK in eyes with suspicious topography.

    Science.gov (United States)

    Santhiago, Marcony R; Smadja, David; Wilson, Steven E; Krueger, Ronald R; Monteiro, Mario L R; Randleman, J Bradley

    2015-04-01

    To investigate the association of the percent tissue altered (PTA) with the occurrence of ectasia after LASIK in eyes with suspicious preoperative corneal topography. This retrospective comparative case-control study compared associations of reported ectasia risk factors in 129 eyes, including 57 eyes with suspicious preoperative Placido-based corneal topography that developed ectasia after LASIK (suspect ectasia group), 32 eyes with suspicious topography that remained stable for at least 3 years after LASIK (suspect control group), and 30 eyes that developed ectasia with bilateral normal topography (normal topography ectasia group). Groups were subdivided based on topographic asymmetry into high- or low-suspect groups. The PTA, preoperative central corneal thickness (CCT), residual stromal bed (RSB), and age (years) were evaluated in univariate and multivariate analyses. Average PTA values for normal topography ectasia (45), low-suspect ectasia (39), high-suspect ectasia (36), low-suspect control (32), and high-suspect control (29) were significantly different from one another in all comparisons (P topography ectasia groups, and CCT was not significantly different between any groups. Stepwise logistic regression revealed the PTA as the most significant independent variable (P topography. Less tissue alteration, or a lower PTA value, was necessary to induce ectasia in eyes with more remarkable signs of topographic abnormality, and PTA provided better discriminative capabilities than RSB for all study populations. Copyright 2015, SLACK Incorporated.
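    The PTA metric in studies of this kind is commonly defined as the flap thickness plus the ablation depth, divided by the preoperative central corneal thickness; a minimal sketch (the definition is the commonly cited one, and the sample values are illustrative assumptions, not cases from the study):

```python
def percent_tissue_altered(flap_um: float, ablation_um: float,
                           cct_um: float) -> float:
    """PTA: (flap thickness + ablation depth) / preoperative central
    corneal thickness, expressed as a percentage (all inputs in microns)."""
    return 100.0 * (flap_um + ablation_um) / cct_um

# Hypothetical LASIK case: 110 um flap, 75 um ablation, 520 um cornea.
pta = percent_tissue_altered(110.0, 75.0, 520.0)
print(round(pta, 1))  # 35.6
```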

  4. Extrusion of the uranium-0.75 weight percent titanium alloy

    International Nuclear Information System (INIS)

    Jackson, R.J.; Lundberg, M.R.; Boland, J.F.

    1975-01-01

    Procedures are described for extruding the U--0.75 wt percent Ti alloy in the high alpha region (600 to 640 °C), and in the upper gamma region (900 to 1000 °C). The casting of sound extrusion billets has importance in the production of sound extrusions, and procedures are given for casting sound billets up to 1,100 kilograms. Also important in producing sound extrusions is the use of glass lubricants. Reduction ratios of greater than 50 to 1 were achieved on reasonably sized billets. Extrusion constants of 48,000 pounds per square inch (psi) [296 megapascals (MPa)] for alpha phase (630 °C) and 8,000 psi (56 MPa) for gamma phase (950 °C) were achieved. Gamma-phase extrusion has preference over alpha-phase extrusion in that larger billets can be used and temperature control is not as critical. However, alpha-phase extrusion offers better surface finish, less die wear, and fewer oxidation problems. Billets up to 14 inches in diameter have been successfully gamma-extruded and plans exist for extruding billets up to 20 inches (508 millimetres) in diameter. (U.S.)

  5. Percent body fat is a better predictor of cardiovascular risk factors than body mass index

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Qiang; Dong, Sheng-Yong; Sun, Xiao-Nan; Xie, Jing; Cui, Yi [International Medical Center, Chinese PLA General Hospital, Beijing (China)

    2012-04-20

    The objective of the present study was to evaluate the predictive values of percent body fat (PBF) and body mass index (BMI) for cardiovascular risk factors, especially when PBF and BMI are conflicting. BMI was calculated by the standard formula and PBF was determined by bioelectrical impedance analysis. A total of 3859 ambulatory adult Han Chinese subjects (2173 males and 1686 females, age range: 18-85 years) without a history of cardiovascular diseases were recruited from February to September 2009. Based on BMI and PBF, they were classified into group 1 (normal BMI and PBF, N = 1961), group 2 (normal BMI, but abnormal PBF, N = 381), group 3 (abnormal BMI, but normal PBF, N = 681), and group 4 (abnormal BMI and PBF, N = 836). When age, gender, lifestyle, and family history of obesity were adjusted, PBF, but not BMI, was correlated with blood glucose and lipid levels. The odds ratio (OR) and 95% confidence interval (CI) for cardiovascular risk factors in groups 2 and 4 were 1.88 (1.45-2.45) and 2.06 (1.26-3.35) times those in group 1, respectively, but remained unchanged in group 3 (OR = 1.32, 95%CI = 0.92-1.89). Logistic regression models also demonstrated that PBF, rather than BMI, was independently associated with cardiovascular risk factors. In conclusion, PBF, and not BMI, is independently associated with cardiovascular risk factors, indicating that PBF is a better predictor.
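    The four study groups above cross a BMI cutoff with a PBF cutoff; a minimal sketch using the standard BMI formula (the cutoff values are hypothetical assumptions; the abstract does not state the thresholds used):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index by the standard formula: weight / height squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float, pbf: float,
             bmi_cut: float = 24.0, pbf_cut: float = 25.0) -> int:
    """Group 1: normal BMI and PBF; 2: normal BMI, abnormal PBF;
    3: abnormal BMI, normal PBF; 4: both abnormal. Cutoffs are assumptions."""
    abnormal_bmi = bmi_value >= bmi_cut
    abnormal_pbf = pbf >= pbf_cut
    return 1 + abnormal_pbf + 2 * abnormal_bmi

# Example: 70 kg, 1.75 m gives BMI 22.9; PBF 28% puts this subject in
# group 2 (normal BMI but abnormal PBF), the discordant case of interest.
print(classify(bmi(70, 1.75), 28.0))  # 2
```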

  6. EPIC Studies: Governments Finance, On Average, More Than 50 Percent Of Immunization Expenses, 2010-11.

    Science.gov (United States)

    Brenzel, Logan; Schütte, Carl; Goguadze, Keti; Valdez, Werner; Le Gargasson, Jean-Bernard; Guthrie, Teresa

    2016-02-01

    Governments in resource-poor settings have traditionally relied on external donor support for immunization. Under the Global Vaccine Action Plan, adopted in 2014, countries have committed to mobilizing additional domestic resources for immunization. Data gaps make it difficult to map how well countries have done in spending government resources on immunization to demonstrate greater ownership of programs. This article presents findings of an innovative approach for financial mapping of routine immunization applied in Benin, Ghana, Honduras, Moldova, Uganda, and Zambia. This approach uses modified System of Health Accounts coding to evaluate data collected from national and subnational levels and from donor agencies. We found that government sources accounted for 27-95 percent of routine immunization financing in 2011, with countries that have higher gross national product per capita better able to finance requirements. Most financing is channeled through government agencies and used at the primary care level. Sustainable immunization programs will depend upon whether governments have the fiscal space to allocate additional resources. Ongoing robust analysis of routine immunization should be instituted within the context of total health expenditure tracking. Project HOPE—The People-to-People Health Foundation, Inc.

  7. Percent body fat is a better predictor of cardiovascular risk factors than body mass index

    International Nuclear Information System (INIS)

    Zeng, Qiang; Dong, Sheng-Yong; Sun, Xiao-Nan; Xie, Jing; Cui, Yi

    2012-01-01

    The objective of the present study was to evaluate the predictive values of percent body fat (PBF) and body mass index (BMI) for cardiovascular risk factors, especially when PBF and BMI are conflicting. BMI was calculated by the standard formula and PBF was determined by bioelectrical impedance analysis. A total of 3859 ambulatory adult Han Chinese subjects (2173 males and 1686 females, age range: 18-85 years) without a history of cardiovascular diseases were recruited from February to September 2009. Based on BMI and PBF, they were classified into group 1 (normal BMI and PBF, N = 1961), group 2 (normal BMI, but abnormal PBF, N = 381), group 3 (abnormal BMI, but normal PBF, N = 681), and group 4 (abnormal BMI and PBF, N = 836). When age, gender, lifestyle, and family history of obesity were adjusted, PBF, but not BMI, was correlated with blood glucose and lipid levels. The odds ratio (OR) and 95% confidence interval (CI) for cardiovascular risk factors in groups 2 and 4 were 1.88 (1.45-2.45) and 2.06 (1.26-3.35) times those in group 1, respectively, but remained unchanged in group 3 (OR = 1.32, 95%CI = 0.92-1.89). Logistic regression models also demonstrated that PBF, rather than BMI, was independently associated with cardiovascular risk factors. In conclusion, PBF, and not BMI, is independently associated with cardiovascular risk factors, indicating that PBF is a better predictor

  8. The BDGP gene disruption project: Single transposon insertions associated with 40 percent of Drosophila genes

    Energy Technology Data Exchange (ETDEWEB)

    Bellen, Hugo J.; Levis, Robert W.; Liao, Guochun; He, Yuchun; Carlson, Joseph W.; Tsang, Garson; Evans-Holm, Martha; Hiesinger, P. Robin; Schulze, Karen L.; Rubin, Gerald M.; Hoskins, Roger A.; Spradling, Allan C.

    2004-01-13

    The Berkeley Drosophila Genome Project (BDGP) strives to disrupt each Drosophila gene by the insertion of a single transposable element. As part of this effort, transposons in more than 30,000 fly strains were localized and analyzed relative to predicted Drosophila gene structures. Approximately 6,300 lines that maximize genomic coverage were selected to be sent to the Bloomington Stock Center for public distribution, bringing the size of the BDGP gene disruption collection to 7,140 lines. It now includes individual lines predicted to disrupt 5,362 of the 13,666 currently annotated Drosophila genes (39 percent). Other lines contain an insertion at least 2 kb from others in the collection and likely mutate additional incompletely annotated or uncharacterized genes and chromosomal regulatory elements. The remaining strains contain insertions likely to disrupt alternative gene promoters or to allow gene mis-expression. The expanded BDGP gene disruption collection provides a public resource that will facilitate the application of Drosophila genetics to diverse biological problems. Finally, the project reveals new insight into how transposons interact with a eukaryotic genome and helps define optimal strategies for using insertional mutagenesis as a genomic tool.

  9. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can be a significant aid to data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
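
    The limiting-depth idea can be sketched numerically. The toy below (not the authors' code) estimates a Bott-Smith-style depth bound from the fmax/(∂f/∂x)max ratio mentioned in the abstract; the proportionality constant K depends on the source class and the value K = 0.86 used here is an assumed placeholder, not one taken from the paper.

```python
import numpy as np

# A limiting-depth estimate of the Bott-Smith kind needs only the field maximum
# and the maximum horizontal gradient, so no density contrast is required.
def max_depth_estimate(x, f, K=0.86):
    """Depth bound from a sampled potential-field profile f(x); K is assumed."""
    f_max = np.max(np.abs(f))
    grad_max = np.max(np.abs(np.gradient(f, x)))  # (df/dx)_max, numerically
    return K * f_max / grad_max

# Synthetic profile over a 2D line source at depth z = 2 km: f(x) = z/(x^2 + z^2).
x = np.linspace(-20.0, 20.0, 2001)
z_true = 2.0
f = z_true / (x**2 + z_true**2)
print(max_depth_estimate(x, f))  # a bound somewhat above z_true
```

    For this profile the ratio fmax/(∂f/∂x)max evaluates to about 3z/2, so the bound safely exceeds the true depth.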

  10. Study design and percent recoveries of anthropogenic organic compounds with and without the addition of ascorbic acid to preserve water samples containing free chlorine, 2004-06

    Science.gov (United States)

    Valder, Joshua F.; Delzer, Gregory C.; Price, Curtis V.; Sandstrom, Mark W.

    2008-01-01

    recoveries from the quenched reagent spiked samples that were analyzed at two different times (day 0 and day 7 or 14) can be used to determine the stability of the quenched samples held for an amount of time representative of the normal amount of time between sample collection and analysis. The comparison between the quenched reagent spiked samples and the LRSs can be used to determine if quenching samples adversely affects the analytical performance under controlled conditions. The field study began in 2004 and is continuing today (February 2008) to characterize the effect of quenching on field-matrix spike recoveries and to better understand the potential oxidation and transformation of 277 AOCs. Three types of samples were collected from 11 NAWQA Study Units across the Nation: (1) quenched finished-water samples (not spiked), (2) quenched finished-water spiked samples, and (3) nonquenched finished-water spiked samples. Percent recoveries of AOCs in quenched and nonquenched finished-water spiked samples collected during 2004-06 are presented. Comparisons of percent recoveries between quenched and nonquenched spiked samples can be used to show how quenching affects finished-water samples. A maximum of 6 surface-water and 7 ground-water quenched finished-water spiked samples paired with nonquenched finished-water spiked samples were analyzed. Analytical results for the field study are presented in two ways: (1) by surface-water supplies or ground-water supplies, and (2) by use (or source) group category for surface-water and ground-water supplies. Graphical representations of percent recoveries for the quenched and nonquenched finished-water spiked samples also are presented.

  11. Examination of temperature-induced shape memory of uranium--5.3-to 6.9 weight percent niobium alloys

    International Nuclear Information System (INIS)

    Hemperly, V.C.

    1976-01-01

    The uranium-niobium alloy system was examined in the range of 5.3 to 6.9 weight percent niobium with respect to shape memory, mechanical properties, metallography, coefficients of linear thermal expansion, and differential thermal analysis. Shape memory increased with increasing niobium levels in the study range. No useful correlations were found between shape memory and the other tests. Thermal expansion tests of as-quenched 5.8 and 6.2 weight percent niobium specimens, but not of 5.3 and 6.9 weight percent niobium specimens, showed a contraction component on heating, but the phenomenon did not contribute to shape memory.

  12. Rigidity spectrum of Forbush decrease

    International Nuclear Information System (INIS)

    Sakakibara, S.; Munakata, K.; Nagashima, K.

    1985-01-01

    Using data from neutron monitors and muon telescopes at surface and underground stations, the average rigidity spectrum of Forbush decreases (Fds) during the period 1978-1982 was obtained. Thirty-eight Fd events are classified into two groups, Hard Fd and Soft Fd, according to the size of the Fd at the Sakashita station. It is found that a spectral form of fractional-power type, P^(-γ1)(P + Pc)^(-γ2), is more suitable than a power-exponential type or a power type with an upper limiting rigidity. The best-fitted spectrum of the fractional-power type is expressed by γ1 = 0.37, γ2 = 0.89, and Pc = 10 GV for Hard Fd, and γ1 = 0.77, γ2 = 1.02, and Pc = 14 GV for Soft Fd.
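
    The fractional-power spectral form and the best-fit parameters quoted in the abstract can be evaluated directly; the short sketch below compares the Hard and Soft spectra (normalization is arbitrary here).

```python
import numpy as np

# Fractional-power rigidity spectrum: f(P) = P^(-g1) * (P + Pc)^(-g2),
# with the best-fit parameters reported for Hard and Soft Forbush decreases.
def fractional_power_spectrum(P, g1, g2, Pc):
    return P**(-g1) * (P + Pc)**(-g2)

P = np.logspace(0, 3, 200)  # rigidity, GV
hard = fractional_power_spectrum(P, 0.37, 0.89, 10.0)
soft = fractional_power_spectrum(P, 0.77, 1.02, 14.0)

# At high rigidity the Soft spectrum falls off faster (g1 + g2 = 1.79 vs 1.26).
print(hard[-1] / hard[0], soft[-1] / soft[0])
```

    The steeper high-rigidity roll-off of the Soft class is what the γ1 + γ2 sums imply.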

  13. Method of decreasing nuclear power

    International Nuclear Information System (INIS)

    Masuda, Hiromi

    1987-01-01

    Purpose: To easily attain power decrease in a HWLWR type reactor and improve reactor safety. Method: The method is applied to a nuclear reactor in which the reactor reactivity is controlled by control rods and liquid poisons dissolved in the moderator. Means for forecasting the control rod operation amount required for reactor power down and means for removing liquid poisons from the moderator are provided. The control rod operation amount required for the power down is forecast before the power down, and the liquid poisons in the moderator are removed. Then, the control rods are inserted to a deep insertion position to reduce the reactor power. This invention facilitates easy power down, as well as improving controllability in usual operation and avoiding abrupt power down, which leads to improved availability. (Kamimura, M.)

  14. Near fifty percent sodium substituted lanthanum manganites—A potential magnetic refrigerant for room temperature applications

    Energy Technology Data Exchange (ETDEWEB)

    Sethulakshmi, N.; Anantharaman, M. R., E-mail: mraiyer@yahoo.com [Department of Physics, Cochin University of Science and Technology, Cochin 682022, Kerala (India); Al-Omari, I. A. [Department of Physics, Sultan Qaboos University, PC 123 Muscat, Sultanate of Oman (Oman); Suresh, K. G. [Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076 (India)

    2014-03-03

    Nearly half of the lanthanum sites in lanthanum manganites were substituted with the monovalent ion sodium, and the compound possessed a distorted orthorhombic structure. Ferromagnetic ordering at 300 K and the magnetic isotherms at different temperature ranges were analyzed for estimating the magnetic entropy variation. A magnetic entropy change of 1.5 J·kg{sup −1}·K{sup −1} was observed near 300 K. An appreciable magnetocaloric effect was also observed over a wide range of temperatures near 300 K for a small magnetic field variation. Heat capacity was measured for temperatures lower than 300 K, and the adiabatic temperature change increases with increasing temperature, with a maximum of 0.62 K at 280 K.

  15. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  16. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  17. Assessment of maximum available work of a hydrogen fueled compression ignition engine using exergy analysis

    International Nuclear Information System (INIS)

    Chintala, Venkateswarlu; Subramanian, K.A.

    2014-01-01

    This work is aimed at study of the maximum available work and irreversibility (mixing, combustion, unburned, and friction) of a dual-fuel diesel engine (hydrogen (H2)-diesel) using exergy analysis. The maximum available work increased with H2 addition due to reduction in combustion irreversibility because of less entropy generation. The irreversibility of unburned fuel also decreased with H2 fuel due to engine combustion at higher temperature, whereas there is no effect of H2 on mixing and friction irreversibility. The maximum available work of the diesel engine at rated load increased from 29% with conventional base mode (without H2) to 31.7% with dual-fuel mode (18% H2 energy share), whereas total irreversibility of the engine decreased from 41.2% to 39.3%. The energy efficiency of the engine with H2 increased about 10%, with a 36% reduction in CO2 emission. The developed methodology could also be applied to find the effect and scope of different technologies, including exhaust gas recirculation and turbocharging, on the maximum available work and energy efficiency of diesel engines. - Highlights: • Energy efficiency of diesel engine increases with hydrogen under dual-fuel mode. • Maximum available work of the engine increases significantly with hydrogen. • Combustion and unburned fuel irreversibility decrease with hydrogen. • No significant effect of hydrogen on mixing and friction irreversibility. • Reduction in CO2 emission along with HC, CO and smoke emissions

  18. Winter ecology of the Porcupine caribou herd, Yukon: Part III, Role of day length in determining activity pattern and estimating percent lying

    Directory of Open Access Journals (Sweden)

    D. E. Russell

    1986-06-01

    Data on the activity pattern, the proportion of time spent lying, and the length of active and lying periods in winter are presented from a 3-year study of the Porcupine caribou herd. Animals were most active at sunrise and sunset, resulting in from one (late fall, early and mid winter) to two (early fall and late winter) to three (spring) intervening lying periods. Mean active/lying cycle length decreased from late fall (298 min) to early winter (238 min), increased to a peak in mid winter (340 min), then declined in late winter (305 min) and again in spring (240 min). Mean length of the lying period increased throughout the 3 winter months, from 56 min in early winter to 114 min in mid winter and 153 min in late winter. The percent of the day animals spent lying decreased from fall to early winter, increased throughout the winter, and declined in spring. This pattern was related, in part, to day length and was used to compare percent lying among herds. The relationship is suggested as a means of comparing the quality of winter ranges.

  19. Percent relative cumulative frequency analysis in indirect calorimetry: application to studies of transgenic mice.

    Science.gov (United States)

    Riachi, Marc; Himms-Hagen, Jean; Harper, Mary-Ellen

    2004-12-01

    Indirect calorimetry is commonly used in research and clinical settings to assess characteristics of energy expenditure. Respiration chambers in indirect calorimetry allow measurements over long periods of time (e.g., hours to days) and thus the collection of large sets of data. Current methods of data analysis usually involve the extraction of only a selected small proportion of data, most commonly the data that reflect resting metabolic rate. Here, we describe a simple quantitative approach for the analysis of large data sets that is capable of detecting small differences in energy metabolism. We refer to it as the percent relative cumulative frequency (PRCF) approach and have applied it to the study of uncoupling protein-1 (UCP1)-deficient and control mice. The approach involves sorting data in ascending order, calculating their cumulative frequency, and expressing the frequencies in the form of percentile curves. Results demonstrate the sensitivity of the PRCF approach for analyses of oxygen consumption (VO2) as well as respiratory exchange ratio data. Statistical comparisons of PRCF curves are based on the 50th percentile values and curve slopes (H values). The application of the PRCF approach revealed that energy expenditure in UCP1-deficient mice housed and studied at room temperature (24 degrees C) is on average 10% lower than in controls. At a lower environmental temperature, there were no differences in VO2 between groups. The latter is likely due to augmented shivering thermogenesis in UCP1-deficient mice compared with controls. With the increased availability of murine models of metabolic disease, indirect calorimetry is increasingly used, and the PRCF approach provides a novel and powerful means for data analysis.
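
    The PRCF construction described above (sort ascending, accumulate, express as percentiles, compare at the 50th percentile) can be sketched in a few lines. The VO2 values below are synthetic illustrations, not data from the study.

```python
import numpy as np

# Percent relative cumulative frequency: sort the samples in ascending order
# and assign each the percentage of samples at or below it.
def prcf(values):
    """Return sorted values and their percent relative cumulative frequency."""
    v = np.sort(np.asarray(values, dtype=float))
    pct = 100.0 * np.arange(1, v.size + 1) / v.size
    return v, pct

rng = np.random.default_rng(0)
vo2_control = rng.normal(100.0, 10.0, 1000)  # arbitrary units, simulated
vo2_ucp1ko = rng.normal(90.0, 10.0, 1000)    # ~10% lower, as in the abstract

v_c, p_c = prcf(vo2_control)
v_k, p_k = prcf(vo2_ucp1ko)

# The 50th-percentile value of the PRCF curve is one of the two statistics
# the authors compare between groups (the other being the curve slope).
median_c = v_c[np.searchsorted(p_c, 50.0)]
median_k = v_k[np.searchsorted(p_k, 50.0)]
print(median_c, median_k)
```

    Because the whole distribution enters the curve, a shift of a few percent that would be lost in a resting-rate snapshot shows up as a horizontal offset between the two PRCF curves.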

  20. Increased circulating fibrocytes are associated with higher reticulocyte percent in children with sickle cell anemia.

    Science.gov (United States)

    Karafin, Matthew S; Dogra, Shibani; Rodeghier, Mark; Burdick, Marie; Mehrad, Borna; Rose, C Edward; Strieter, Robert M; DeBaun, Michael R; Strunk, Robert C; Field, Joshua J

    2016-03-01

    Interstitial lung disease is common in patients with sickle cell anemia (SCA). Fibrocytes are circulating cells implicated in the pathogenesis of pulmonary fibrosis and airway remodeling in asthma. In this study, we tested the hypotheses that fibrocyte levels are: (1) increased in children with SCA compared to healthy controls, and (2) associated with pulmonary disease. Cross-sectional cohort study of children with SCA who participated in the Sleep Asthma Cohort Study. Fibrocyte levels were obtained from 45 children with SCA and 24 controls. Mean age of SCA cases was 14 years and 53% were female. In children with SCA, levels of circulating fibrocytes were greater than controls (P < 0.01). The fibrocytes expressed a hierarchy of chemokine receptors, with CXCR4 expressed on the majority of cells and CCR2 and CCR7 expressed on a smaller subset. Almost half of fibrocytes demonstrated α-smooth muscle actin activation. Increased fibrocyte levels were associated with a higher reticulocyte count (P = 0.03) and older age (P = 0.048) in children with SCA. However, children with increased levels of fibrocytes were not more likely to have asthma or lower percent predicted forced expiratory volume in 1 sec/forced vital capacity (FEV1 /FVC) or FEV1 than those with lower fibrocyte levels. Higher levels of fibrocytes in children with SCA compared to controls may be due to hemolysis. Longitudinal studies may be able to better assess the relationship between fibrocyte level and pulmonary dysfunction. © 2015 Wiley Periodicals, Inc.

  1. Comparison of breast percent density estimation from raw versus processed digital mammograms

    Science.gov (United States)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
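
    The statistical comparison described above (Pearson correlation plus a paired test on per-study differences) can be sketched as follows. The paired readings are simulated with a small systematic offset mimicking the reported 1.2% difference; they are not the study's data.

```python
import numpy as np

# Simulate 81 paired percent-density readings (raw vs post-processed) with a
# systematic offset and some reader noise. All numbers here are illustrative.
rng = np.random.default_rng(42)
pd_raw = rng.uniform(5.0, 60.0, 81)                      # 81 studies, as in the text
pd_proc = pd_raw + 1.2 + rng.normal(0.0, 1.5, 81)        # offset + noise

# Pearson correlation between the two sets of readings.
r = np.corrcoef(pd_raw, pd_proc)[0, 1]

# Paired t statistic on the per-study differences.
d = pd_proc - pd_raw
t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
print(round(r, 3), round(d.mean(), 2), round(t, 1))
```

    This reproduces the qualitative pattern in the abstract: a very high correlation can coexist with a small but statistically significant paired difference, because the paired test is sensitive to a consistent offset however small.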

  2. Does One Know the Properties of a MICE Solid or Liquid Absorber to Better than 0.3 Percent?

    International Nuclear Information System (INIS)

    Green, Michael A.; Yang, Stephanie Q.

    2006-01-01

    This report discusses whether the MICE absorbers can be characterized to ±0.3 percent, so that one can predict ionization cooling within the absorber. It shows that most solid absorbers can be characterized to much better than ±0.3 percent. The two issues that dominate the characterization of the liquid cryogen absorbers are the dimensions of the liquid in the vessel and the density of the cryogenic liquid. The thickness of the window also plays a role. This report will show that a liquid hydrogen absorber can be characterized to better than ±0.3 percent, but a liquid helium absorber cannot be characterized to better than ±1 percent.

  3. Map of percent scleractinian coral cover along camera tows and ROV tracks in the Auau Channel, Island of Maui, Hawaii

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral overlaid on bathymetry and landsat imagery. Optical data were...

  4. EnviroAtlas - Percent of Each 12-Digit HUC in the Contiguous U.S. with Potentially Restorable Wetlands

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the percent of each 12-digit Hydrologic Unit (HUC) subwatershed in the contiguous U.S. with potentially restorable wetlands. Beginning...

  5. EnviroAtlas - Percent Land Cover with Potentially Restorable Wetlands on Agricultural Land per 12-Digit HUC - Contiguous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the percent land cover with potentially restorable wetlands on agricultural land for each 12-digit Hydrologic Unit (HUC) watershed in...

  6. Map of percent scleractinian coral cover and sand along camera tows and ROV tracks of West Maui, Hawaii

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This map displays optical validation observation locations and percent coverage of scleractinian coral and sand overlaid on bathymetry and landsat imagery. Optical...

  7. FINANCIAL PROBLEMS OF BOTTOM 40 PERCENT BUMIPUTERA IN MALAYSIA: A POSSIBLE SOLUTION THROUGH WAQF-BASED CROWDFUNDING

    OpenAIRE

    Abidullah ABID; Muhammad Hakimi Mohd SHAFIAI

    2017-01-01

    Low savings by the bottom 40 percent Bumiputera triggered low wealth accumulation and greater wealth inequality. The issue behind the low savings is the increase in food prices, taxes, and interest rates for borrowers. In response to these problems, the Malaysian government provides the cash transfer BRIM as one of its redistributive measures. However, it is still not enough, as many of the bottom 40 percent are unable to avail themselves of the facility. In such circumstances, the role of community-based ca...

  8. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  9. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  10. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  11. Forbush decreases observed 40 mwe underground in 1978

    International Nuclear Information System (INIS)

    Benko, G.; Kecskemety, K.; Neuprandt, G.; Somogyi, A.J.

    1982-01-01

    Forbush decreases observed 40 mwe underground at Budapest in the first half of 1978 have been analysed together with the data of several neutron monitor stations in Europe. Assuming a power-exponential form for the variation spectrum in space as a function of rigidity, the best-fitting values of the power and the upper cut-off rigidity have been calculated from the maximum decrement by means of the weighted least squares method.

  12. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  13. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  14. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  15. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  16. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  17. High-strength uranium-0.8 weight percent titanium alloy penetrators

    International Nuclear Information System (INIS)

    Northcutt, W.G.

    1978-09-01

    Long-rod kinetic-energy penetrators, produced from a uranium-0.8 titanium (U-0.8 Ti) alloy, are normally water quenched from the gamma phase (approximately 800°C) and aged to the desired hardness and strength levels. High cooling rates from 800°C in U-0.8 Ti alloy cylindrical bodies larger than about 13 mm in diameter cause internal voids, while slower rates of cooling can produce material that is unresponsive to aging. For the present study, elimination of quenching voids was of paramount importance; therefore, a process including the quenching of plate was explored. Vacuum-induction-cast ingots were forged and rolled into plate and cut into blanks from which the penetrators were obtained. Quenched U-0.8 Ti alloy blanks were aged at 350 to 500°C to determine the treatment that would provide maximum tensile and impact strengths. Both tensile and impact strengths were maximized by aging in vacuum for six hours at 450°C.

  18. Forbush decreases and particle acceleration in the outer heliosphere

    International Nuclear Information System (INIS)

    Van Allen, J.A.; Mihalov, J.D.

    1990-01-01

    Major solar flare activity in 1989 has provided examples of the local acceleration of protons at 28 AU (Pioneer 11) and of the propagation of Forbush decreases in galactic cosmic ray intensity to a heliocentric radial distance of 47 AU (Pioneer 10). The combination of these and previous data at lesser distances shows (a) that Forbush decreases propagate with essentially constant magnitude to (at least) 47 AU and with similar magnitude at widely different ecliptic longitudes and (b) that the times for recovery from such decreases become progressively greater as the radial distance increases, being of the order of months in the outer heliosphere. A phenomenological scheme for (b) is proposed and fresh support is given to the hypothesis that the solar cycle modulation of the galactic cosmic ray intensity is attributable primarily to overlapping Forbush decreases which are more frequent and of greater magnitude near times of maximum solar activity than at times of lesser activity

  19. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

    Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating the percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.
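
    The develop-then-cross-validate workflow described above can be sketched with a least-squares fit. Everything below is synthetic: the generating coefficients are invented for illustration (the KNHANES data are not reproduced here), and only the procedure mirrors the abstract.

```python
import numpy as np

# Synthetic cohort: percent body fat generated from BMI, gender, and age with
# assumed (hypothetical) coefficients plus noise.
rng = np.random.default_rng(1)
n = 2000
bmi = rng.normal(24.0, 3.5, n)
female = rng.integers(0, 2, n).astype(float)
age = rng.uniform(20.0, 70.0, n)
pbf = 1.3 * bmi + 10.0 * female + 0.10 * age - 9.0 + rng.normal(0.0, 4.0, n)

# Fit on a "validation" half, evaluate on a "cross-validation" half,
# as the study does with its 2008-2009 and 2010-2011 samples.
X = np.column_stack([np.ones(n), bmi, female, age])
fit, hold = slice(0, n // 2), slice(n // 2, n)
beta, *_ = np.linalg.lstsq(X[fit], pbf[fit], rcond=None)

pred = X[hold] @ beta
resid = pbf[hold] - pred
r2 = 1.0 - resid.var() / pbf[hold].var()                      # cross-validated R²
see = np.sqrt(np.sum(resid**2) / (resid.size - X.shape[1]))   # standard error of estimate
print(round(r2, 3), round(see, 2))
```

    Reporting R² and SEE on the held-out half, rather than on the fitting half, is what guards against the equation looking better than it will perform on new subjects.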

  20. Savannah River Site Tank Cleaning: Corrosion Rate For One Versus Eight Percent Oxalic Acid Solution

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2011-01-01

    Until recently, the use of oxalic acid for chemically cleaning the Savannah River Site (SRS) radioactive waste tanks focused on using concentrated 4 and 8-wt% solutions. Recent testing and research on applicable dissolution mechanisms have concluded that under appropriate conditions, dilute solutions of oxalic acid (i.e., 1-wt%) may be more effective. Based on the need to maximize cleaning effectiveness, coupled with the need to minimize downstream impacts, SRS is now developing plans for using a 1-wt% oxalic acid solution. A technology gap associated with using a 1-wt% oxalic acid solution was a dearth of suitable corrosion data. Assuming oxalic acid's passivation of carbon steel is proportional to the free oxalate concentration, the general corrosion rate (CR) from a 1-wt% solution may not be bounded by that from 8-wt%. Therefore, after developing the test strategy and plan, corrosion testing was performed. Starting with the envisioned process-specific baseline solvent, a 1-wt% oxalic acid solution with sludge (limited to Purex-type sludge simulant for this initial effort) at 75 °C and agitated, the corrosion rate was determined from the measured weight loss of the exposed coupon. Environmental variations tested were: (a) inclusion of sludge in the test vessel or assuming a pure oxalic acid solution; (b) acid solution temperature maintained at 75 or 45 °C; and (c) agitation of the acid solution or stagnant conditions. Application of select electrochemical testing (EC) explored the impact of each variation on the passivation mechanisms and confirmed the CR. The 1-wt% results were then compared to those from the 8-wt%. The immersion coupons showed that the maximum time-averaged CR for a 1-wt% solution with sludge was less than 25 mils/yr for all conditions. For an agitated 8-wt% solution with sludge, the maximum time-averaged CR was about 30 mils/yr at 50 °C, and 86 mils/yr at 75 °C.
Both the 1-wt% and the 8-wt% testing demonstrated that if the sludge was removed from

  1. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  2. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    The objective of this study was to verify the validity of the Katch and McArdle equation (1973), which uses the circumferences of the arm, forearm and abdomen to estimate body density, and the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percent (%F), in military men. Data from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS, were collected. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The body density measured (Dm) by underwater weighing was used as the criterion; its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using the Goldman and Becklake equation (1959). The %F was obtained with the Siri equation (1961); its mean value was 12.70 ± 4.71%. The validation criterion suggested by Lohman (1992) was followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity to estimate %F in military men or in other samples with similar characteristics, with a standard error of estimate of 3.45%.

  3. Modeling Mediterranean Ocean climate of the Last Glacial Maximum

    Directory of Open Access Journals (Sweden)

    U. Mikolajewicz

    2011-03-01

    A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse-resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies, with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.

  4. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  5. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  6. Effect of glycine, DL-alanine and DL-2-aminobutyric acid on the temperature of maximum density of water

    International Nuclear Information System (INIS)

    Romero, Carmen M.; Torres, Andres Felipe

    2015-01-01

    Highlights: • Effect of α-amino acids on the temperature of maximum density of water is presented. • The addition of α-amino acids decreases the temperature of maximum density of water. • Despretz constants suggest that the amino acids behave as water structure breakers. • Despretz constants decrease as the number of CH2 groups of the amino acid increases. • Solute disrupting effect becomes smaller as its hydrophobic character increases. - Abstract: The effect of glycine, DL-alanine and DL-2-aminobutyric acid on the temperature of maximum density of water was determined from density measurements using a magnetic float densimeter. Densities of aqueous solutions were measured within the temperature range from T = (275.65 to 278.65) K at intervals of T = 0.50 K over the concentration range between (0.0300 and 0.1000) mol·kg−1. A linear relationship between density and concentration was obtained for all the systems in the temperature range considered. The temperature of maximum density was determined from the experimental results. The effect of the three amino acids is to decrease the temperature of maximum density of water, and the decrease is proportional to molality according to the Despretz equation. The effect of the amino acids on the temperature of maximum density decreases as the number of methylene groups of the alkyl chain becomes larger. The results are discussed in terms of (solute + water) interactions and the effect of amino acids on water structure.
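    The Despretz relation referenced above states that the lowering of the temperature of maximum density is proportional to molality, Δθ = K_m · m. A minimal sketch of extracting the Despretz constant from such data; the molalities and temperature shifts below are invented for illustration, not the paper's measurements:

    ```python
    # Hypothetical data: molality m (mol/kg) vs. lowering of the temperature of
    # maximum density, delta_theta = T_md(water) - T_md(solution), in K.
    m = [0.03, 0.05, 0.07, 0.10]
    dtheta = [0.008, 0.013, 0.018, 0.026]  # invented, roughly proportional to m

    # Least-squares fit through the origin gives the Despretz constant K_m:
    # delta_theta = K_m * m  =>  K_m = sum(m_i * d_i) / sum(m_i^2)
    k_m = sum(mi * di for mi, di in zip(m, dtheta)) / sum(mi * mi for mi in m)

    # A positive K_m means the solute lowers the temperature of maximum density,
    # i.e., it acts as a water structure breaker in the abstract's terms.
    ```

    Comparing K_m across a homologous series (glycine, alanine, aminobutyric acid) is then what shows the constant shrinking as the alkyl chain grows.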

  7. Standard values of maximum tongue pressure taken using newly developed disposable tongue pressure measurement device.

    Science.gov (United States)

    Utanohara, Yuri; Hayashi, Ryo; Yoshikawa, Mineka; Yoshida, Mitsuyoshi; Tsuga, Kazuhiro; Akagawa, Yasumasa

    2008-09-01

    It is clinically important to evaluate tongue function in terms of rehabilitation of swallowing and eating ability. We have developed a disposable tongue pressure measurement device designed for clinical use. In this study we used this device to determine standard values of maximum tongue pressure in adult Japanese. Eight hundred fifty-three subjects (408 male, 445 female; 20-79 years) were selected for this study. All participants had no history of dysphagia and maintained occlusal contact in the premolar and molar regions with their own teeth. A balloon-type disposable oral probe was used to measure tongue pressure by asking subjects to compress it onto the palate for 7 s with maximum voluntary effort. Values were recorded three times for each subject, and the mean values were defined as maximum tongue pressure. Although maximum tongue pressure was higher for males than for females in the 20-49-year age groups, there was no significant difference between males and females in the 50-79-year age groups. The maximum tongue pressure of the seventies age group was significantly lower than that of the twenties to fifties age groups. It may be concluded that maximum tongue pressures were reduced with primary aging. Males may become weaker with age at a faster rate than females; however, further decreases in strength were in parallel for male and female subjects.

  8. Comparison of maximum viscosity and viscometric methods for identification of irradiated sweet potato starch

    International Nuclear Information System (INIS)

    Yi, Sang Duk; Yang, Jae Seung

    2000-01-01

    A study was carried out to compare the viscometric and maximum viscosity methods for the detection of irradiated sweet potato starch. The viscosity of all samples decreased with increasing stirring speed and irradiation dose. This trend was similar for maximum viscosity. Regression coefficients and expressions of viscosity and maximum viscosity with increasing irradiation dose were 0.9823 (y = 335.02e−0.3366x) at 120 rpm and 0.9939 (y = −42.544x + 730.26). This trend in viscosity was similar for all stirring speeds. Parameter A, B and C values showed a dose-dependent relation and were a better parameter for detecting irradiation treatment than maximum viscosity or the viscosity value itself. These results suggest that the detection of irradiated sweet potato starch is possible by both the viscometric and maximum viscosity methods. Therefore, the authors propose the maximum viscosity method as one of the new methods to detect irradiation treatment of sweet potato starch.
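    The two dose-response fits quoted in the abstract (an exponential for viscosity at 120 rpm and a line for maximum viscosity) can be evaluated directly. A sketch using the regression coefficients as printed; the dose unit is an assumption, since the abstract does not state it:

    ```python
    import math

    def viscosity_120rpm(dose):
        """Abstract's exponential fit at 120 rpm: y = 335.02 * exp(-0.3366 * x)."""
        return 335.02 * math.exp(-0.3366 * dose)

    def max_viscosity(dose):
        """Abstract's linear fit for maximum viscosity: y = -42.544 * x + 730.26."""
        return -42.544 * dose + 730.26

    # Both fits predict a monotone decrease with irradiation dose.
    doses = [0, 2, 4, 8]
    visc = [viscosity_120rpm(d) for d in doses]
    mvisc = [max_viscosity(d) for d in doses]
    assert all(a > b for a, b in zip(visc, visc[1:]))
    assert all(a > b for a, b in zip(mvisc, mvisc[1:]))
    ```

    The intercepts (335.02 and 730.26) are the fitted unirradiated values, so either curve can in principle be inverted to flag an irradiated sample from its measured viscosity.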

  9. A conductance maximum observed in an inward-rectifier potassium channel

    OpenAIRE

    1994-01-01

    One prediction of a multi-ion pore is that its conductance should reach a maximum and then begin to decrease as the concentration of permeant ion is raised equally on both sides of the membrane. A conductance maximum has been observed at the single-channel level in gramicidin and in a Ca(2+)-activated K+ channel at extremely high ion concentration (> 1,000 mM) (Hladky, S. B., and D. A. Haydon. 1972. Biochimica et Biophysica Acta. 274:294-312; Eisenman, G., J. Sandblom, and E. Neher. 1977. In ...

  10. Anomalous maximum and minimum for the dissociation of a geminate pair in energetically disordered media

    Science.gov (United States)

    Govatski, J. A.; da Luz, M. G. E.; Koehler, M.

    2015-01-01

    We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how an energetic disorder spatial variation may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.

  11. Maximum Smoke Temperature in Non-Smoke Model Evacuation Region for Semi-Transverse Tunnel Fire

    OpenAIRE

    B. Lou; Y. Qiu; X. Long

    2017-01-01

    Smoke temperature distributions in the non-smoke evacuation region under different mechanical smoke exhaust rates in a semi-transverse tunnel fire were studied by FDS numerical simulation in this paper. The effects of fire heat release rate (10 MW, 20 MW and 30 MW) and exhaust rate (from 0 to 160 m3/s) on the maximum smoke temperature in the non-smoke evacuation region were discussed. Results show that the maximum smoke temperature in the non-smoke evacuation region decreased with smoke exhaust rate. Plug-holing was obse...

  12. Instability of Reference Diameter in the Evaluation of Stenosis After Coronary Angioplasty: Percent Diameter Stenosis Overestimates Dilative Effects Due to Reference Diameter Reduction

    International Nuclear Information System (INIS)

    Hirami, Ryouichi; Iwasaki, Kohichiro; Kusachi, Shozo; Murakami, Takashi; Hina, Kazuyoshi; Matano, Shigeru; Murakami, Masaaki; Kita, Toshimasa; Sakakibara, Noburu; Tsuji, Takao

    2000-01-01

    Purpose: To examine changes in the reference segment luminal diameter after coronary angioplasty. Methods: Sixty-one patients with stable angina pectoris or old myocardial infarction were examined. Coronary angiograms were recorded before coronary angioplasty (pre-angioplasty) and immediately after (post-angioplasty), as well as 3 months after. Artery diameters were measured on cine-film using quantitative coronary angiographic analysis. Results: The diameters of the proximal segment not involved in the balloon inflation and segments in the other artery did not change significantly after angioplasty, but the reference segment diameter significantly decreased (4.7%). More than 10% luminal reduction was observed in seven patients (11%) and more than 5% reduction was observed in 25 patients (41%). More than 5% underestimation of the stenosis was observed in 22 patients (36%) when the post-angioplasty reference diameter was used as the reference diameter, compared with when the pre-angioplasty measurement was used, and more than 10% underestimation was observed in five patients (8%). Conclusion: This study indicated that evaluation by percent diameter stenosis, with the reference diameter from immediately after angioplasty, overestimates the dilative effects of coronary angioplasty, and that it is thus better to evaluate the efficacy of angioplasty using the absolute diameter in addition to percent luminal stenosis.
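    The study's point about reference-diameter shrinkage can be made concrete with a toy calculation. The vessel diameters below are invented for illustration; only the 4.7% mean reference reduction comes from the abstract:

    ```python
    def percent_stenosis(mld, ref):
        """Percent diameter stenosis from minimal lumen diameter and reference diameter."""
        return (1 - mld / ref) * 100

    # Invented example vessel: pre-angioplasty reference 3.0 mm, residual MLD 2.4 mm.
    ref_pre = 3.0
    mld_post = 2.4

    # Study's mean finding: the reference segment itself shrinks by 4.7% after angioplasty.
    ref_post = ref_pre * (1 - 0.047)

    with_pre_ref = percent_stenosis(mld_post, ref_pre)    # 20.0% residual stenosis
    with_post_ref = percent_stenosis(mld_post, ref_post)  # ~16.1% residual stenosis

    # Using the shrunken post-angioplasty reference makes the residual stenosis look
    # smaller, i.e., it overestimates the dilative effect of the procedure.
    assert with_post_ref < with_pre_ref
    ```

    This is why the abstract recommends reporting the absolute minimal lumen diameter alongside percent stenosis: the absolute value does not depend on an unstable reference segment.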

  13. A model for growth of beta-phase particles in zirconium-2.5 wt percent niobium

    International Nuclear Information System (INIS)

    Chow, C.K.; Liner, Y.; Rigby, G.L.

    1984-08-01

    The kinetics of the α → β phase change in Zr-2.5 percent Nb pressure-tube material at constant temperature have been studied. The volume-fraction change of the β phase due to diffusion in an infinite α-phase matrix was considered, and a mathematical model with a numerical solution was developed to predict the transient spherical growth of the β-phase region. This model has been applied to Zr-2.5 wt percent Nb, and the calculated results compared to experiment.

  14. Study of extraterrestrial disposal of radioactive wastes. Part 3: Preliminary feasibility screening study of space disposal of the actinide radioactive wastes with 1 percent and 0.1 percent fission product contamination

    Science.gov (United States)

    Hyland, R. E.; Wohl, M. L.; Finnegan, P. M.

    1973-01-01

    A preliminary study was conducted of the feasibility of space disposal of the actinide class of radioactive waste material. This waste was assumed to contain 1 and 0.1 percent residual fission products, since it may not be feasible to completely separate the actinides. The actinides are a small fraction of the total waste but they remain radioactive much longer than the other wastes and must be isolated from human encounter for tens of thousands of years. Results indicate that space disposal is promising but more study is required, particularly in the area of safety. The minimum cost of space transportation would increase the consumer electric utility bill by the order of 1 percent for earth escape and 3 percent for solar escape. The waste package in this phase of the study was designed for normal operating conditions only; the design of next phase of the study will include provisions for accident safety. The number of shuttle launches per year required to dispose of all U.S. generated actinide waste with 0.1 percent residual fission products varies between 3 and 15 in 1985 and between 25 and 110 by 2000. The lower values assume earth escape (solar orbit) and the higher values are for escape from the solar system.

  15. Microstructural Evolution of Al-1Fe (Weight Percent) Alloy During Accumulative Continuous Extrusion Forming

    Science.gov (United States)

    Wang, Xiang; Guan, Ren-Guo; Tie, Di; Shang, Ying-Qiu; Jin, Hong-Mei; Li, Hong-Chao

    2018-04-01

    As a new microstructure refining method, accumulative continuous extrusion forming (ACEF) can not only refine the metal matrix but also the phases that exist in it. In order to observe the refinement of grains and second phases during the process, Al-1Fe (wt pct) alloy was processed by ACEF, and the microstructural evolution was analyzed by electron backscatter diffraction (EBSD) and transmission electron microscopy (TEM). Results revealed that the average grain size of Al-1Fe (wt pct) alloy decreased from 13 to 1.2 μm, and blocky Al3Fe phase with an average length of 300 nm was granulated to Al3Fe particles with an average diameter of 200 nm, after one pass of ACEF. Refinement of grain was attributed to continuous dynamic recrystallization (CDRX), and the granulation of Al3Fe phase included spheroidization resulting from deformation heat and fragmentation caused by the coupling effects of strain and heat. The spheroidization worked in almost the entire deformation process, while the fragmentation required strain accumulation; however, fragmentation contributed more than spheroidization. Al3Fe particles stimulated the formation of substructure and retarded the migration of recrystallized grain boundaries, but the effect of Al3Fe phase on the refinement of grain could only be determined by the contrastive investigation of Al-1Fe (wt pct) alloy and pure Al.

  16. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor-controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks improvement of the obtained power by varying the duty cycle.
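    The seek-and-adjust loop described above resembles a perturb-and-observe tracker. The abstract does not spell out its algorithm, so the sketch below is a generic P&O loop run against a made-up power-versus-duty-cycle curve, not the paper's implementation:

    ```python
    def pv_power(duty):
        """Hypothetical PV power curve vs. converter duty cycle, peaking at 0.6."""
        return 100.0 - 250.0 * (duty - 0.6) ** 2

    # Perturb and observe: nudge the duty cycle, keep the direction while power
    # improves, reverse it when power drops.
    duty, step, direction = 0.30, 0.01, +1
    prev_power = pv_power(duty)
    for _ in range(200):
        duty = min(1.0, max(0.0, duty + direction * step))
        power = pv_power(duty)
        if power < prev_power:
            direction = -direction  # overshot the peak, turn around
        prev_power = power

    # The duty cycle ends up oscillating within a step or two of the maximum.
    assert abs(duty - 0.6) < 3 * step
    ```

    The steady-state oscillation around the peak is the known trade-off of P&O: a smaller step reduces the ripple but slows convergence.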

  17. Asymptomatic endemic Chlamydia pecorum infections reduce growth rates in calves by up to 48 percent.

    Directory of Open Access Journals (Sweden)

    Anil Poudel

    Intracellular Chlamydia (C.) bacteria cause some acute but rare diseases in cattle, such as abortion, sporadic bovine encephalomyelitis, kerato-conjunctivitis, pneumonia, enteritis and polyarthritis. More frequent, and essentially ubiquitous worldwide, are low-level, asymptomatic chlamydial infections in cattle. We investigated the impact of these naturally acquired infections in a cohort of 51 female Holstein and Jersey calves from birth to 15 weeks of age. In biweekly sampling, we measured blood/plasma markers of health and infection and analyzed their association with clinical appearance and growth in dependence of chlamydial infection intensity, as determined by mucosal chlamydial burden or contemporaneous anti-chlamydial plasma IgM. Chlamydia 23S rRNA gene PCR and ompA genotyping identified only C. pecorum (strains 1710S, Maeda, and novel strain Smith3v8) in conjunctival and vaginal swabs. All calves acquired the infection but remained clinically asymptomatic. High chlamydial infection was associated with a reduction of body weight gains by up to 48% and increased conjunctival reddening (P < 10−4). Simultaneously decreased plasma albumin and increased globulin (P < 10−4) suggested liver injury by inflammatory mediators as a mechanism for the growth inhibition. This was confirmed by the reduction of plasma insulin-like growth factor-1 at high chlamydial infection intensity (P < 10−4). High anti-C. pecorum IgM was associated eight weeks later with 66% increased growth (P = 0.027), indicating a potential for immune protection from C. pecorum-mediated growth depression. The worldwide prevalence of chlamydiae in livestock and their high susceptibility to common feed-additive antibiotics suggest the possibility that suppression of chlamydial infections may be a major contributor to the growth-promoting effect of feed-additive antibiotics.

  18. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  19. Troubleshooting at Reverse Osmosis performance decrease

    Energy Technology Data Exchange (ETDEWEB)

    Soons, Jan [KEMA (Netherlands)]

    2011-07-01

    There are several causes for a decrease in Reverse Osmosis (RO) membrane performance, each of which requires actions to tackle the possible cause. Two of the main factors affecting the performance of the system are the feed quality (poor feed quality can lead to fouling of the membranes) and the operational conditions (including the maximum allowed pressure, minimum cleaning frequencies and types, recovery rate, etc., which should be according to the design conditions). If necessary, pre-treatment will be applied in order to remove the fouling agents from the influent, reduce scaling (through the addition of anti-scalants) and protect the membranes (for example, sodium metabisulphite addition for the removal of residual chlorine, which can harm the membranes). Fouling is not strictly limited to the use of surface water as feed water; relatively clean water sources will also, over time, lead to organic and inorganic fouling when cleaning is not optimum. When fouling occurs, the TransMembrane Pressure (TMP) increases and more energy is needed to produce the same amount of product water. Also, the cleaning rate will increase, reducing the production rate and increasing the chemical consumption and the produced waste streams. Furthermore, the quality of the effluent will decrease (lower rejection rates at higher pressures) and the lifetime of the membranes will decrease. Depending on the type of fouling, different cleaning regimes will have to be applied: acidic treatment for inorganic fouling, the addition of bases against organic fouling. Therefore, it is very important to have a clear view of the type of fouling that is occurring, in order to apply the correct treatment methods. Another important aspect to be kept in mind is that the chemistry of the water - in the first place ruled by the feed water composition - can change during passage of the modules, in particular in cases where the RO system consists of two or more RO trains, and where the
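    One simple way to watch for the fouling-driven TMP rise described above is to track normalized permeability (flux divided by TMP) against a clean baseline. A minimal sketch with invented operating data and an assumed 85% alert threshold; real plants normalize further for temperature and salinity:

    ```python
    # Invented daily readings: permeate flux (L/m2/h) and transmembrane pressure (bar).
    readings = [
        (20.0, 10.0),  # clean baseline
        (20.0, 10.5),
        (20.0, 11.5),
        (20.0, 13.0),  # same flux now needs much more pressure: fouling
    ]

    baseline_perm = readings[0][0] / readings[0][1]  # flux / TMP of the clean membrane

    def needs_cleaning(flux, tmp, threshold=0.85):
        """Flag when permeability has dropped below a fraction of the clean baseline."""
        return (flux / tmp) < threshold * baseline_perm

    flags = [needs_cleaning(f, p) for f, p in readings]
    ```

    Catching the drop early matters because, as the text notes, the appropriate response (acid versus base cleaning) still depends on diagnosing whether the fouling is inorganic or organic.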

  20. Abnormal histopathology, fat percent and hepatic apolipoprotein A I and apolipoprotein B100 mRNA expression in fatty liver hemorrhagic syndrome and their improvement by soybean lecithin.

    Science.gov (United States)

    Song, Yalu; Ruan, Jiming; Luo, Junrong; Wang, Tiancheng; Yang, Fei; Cao, Huabin; Huang, Jianzhen; Hu, Guoliang

    2017-10-01

    To investigate the etiopathogenesis of fatty liver hemorrhagic syndrome (FLHS) and the protective effects of soybean lecithin against FLHS in laying hens, 135 healthy 300-day-old Hyline laying hens were randomly divided into groups: control (group 1), diseased (group 2), and protected (group 3). Each group contained 45 layers with 3 replicates. The birds in these 3 groups were fed a control diet, a high-energy/low-protein (HELP) diet, or the HELP diet supplemented with 3% soybean lecithin instead of maize. The fat percent in the liver was calculated. Histopathological changes in the liver were determined by staining, and the mRNA expression levels of apolipoprotein A I (apoA I) and apolipoprotein B100 (apoB100) in the liver were determined by RT-PCR. The results showed that the fat percent in the liver of group 2 was much higher (P < 0.05), with steatosis in the liver cells on d 30 and 60. The mRNA expression levels of apoA I and apoB100 in the livers were variable throughout the experiment. The expression level of apoA I in group 2 significantly decreased on d 60 (P < 0.05); the expression level of apoB100 slightly increased on d 30 in group 2, while it sharply decreased on d 60. Compared to group 1, the expression level of apoB100 showed no significant difference in group 3 (P < 0.05). This study indicated that FLHS induced pathological changes and abnormal expression of apoA I and apoB100 in the livers of laying hens and that soybean lecithin alleviated these abnormal changes. © 2017 Poultry Science Association Inc.

  1. 40 CFR 63.8055 - How do I comply with a weight percent HAP limit in coating products?

    Science.gov (United States)

    2010-07-01

    ... HAP limit in coating products? 63.8055 Section 63.8055 Protection of Environment ENVIRONMENTAL...: Miscellaneous Coating Manufacturing Alternative Means of Compliance § 63.8055 How do I comply with a weight percent HAP limit in coating products? (a) As an alternative to complying with the requirements in Table 1...

  2. 26 CFR 1.46-8 - Requirements for taxpayers electing additional one-percent investment credit (TRASOP's).

    Science.gov (United States)

    2010-04-01

    ... conversion price which is no greater than the fair market value of that common stock at the time the plan... subject to a limitation, then stock representing at least 50 percent of the fair market value of the... common stock, Class A and Class B. Their fair market values per share are $1 and $.50, respectively, and...

  3. Objectively-determined intensity- and domain-specific physical activity and sedentary behavior in relation to percent body fat.

    Science.gov (United States)

    Scheers, Tineke; Philippaerts, Renaat; Lefevre, Johan

    2013-12-01

    This study examined the independent and joint associations of overall, intensity-specific and domain-specific physical activity and sedentary behavior with bioelectrical impedance-determined percent body fat. Physical activity was measured in 442 Flemish adults (41.4 ± 9.8 years) using the SenseWear Armband and an electronic diary. Two-way analyses of covariance investigated the interaction of physical activity and sedentary behavior with percent body fat. Multiple linear regression analyses, adjusted for potential confounders, examined the associations of intensity-specific and domain-specific physical activity and sedentary behavior with percent body fat. Results showed a significant main effect for physical activity in both genders and for sedentary behavior in women, but no interaction effects. Light activity was positively (β = 0.41 for men and 0.43 for women) and moderate (β = -0.64 and -0.41), vigorous (β = -0.21 and -0.24) and moderate-to-vigorous physical activity (MVPA) inversely associated with percent body fat, independent of sedentary time. Regarding domain-specific physical activity, significant associations were present for occupation, leisure time and household chores, irrespective of sedentary time. The positive associations between body fat and total and domain-specific sedentary behavior diminished after MVPA was controlled for. MVPA during leisure time, occupation and household chores may be essential to prevent fat gain. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  4. Math Academy: Dining Out! Explorations in Fractions, Decimals, & Percents. Book 4: Supplemental Math Materials for Grades 3-8

    Science.gov (United States)

    Rimbey, Kimberly

    2007-01-01

    Created by teachers for teachers, the Math Academy tools and activities included in this booklet were designed to create hands-on activities and a fun learning environment for the teaching of mathematics to the students. This booklet contains the "Math Academy--Dining Out! Explorations in Fractions, Decimals, and Percents," which teachers can use…

  5. 7 CFR 205.303 - Packaged products labeled “100 percent organic” or “organic.”

    Science.gov (United States)

    2010-01-01

    ..., verifying organic certification of the operations producing such ingredients, and: Provided further, That... (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM Labels, Labeling, and Market Information § 205.303 Packaged products labeled “100 percent organic” or “organic.” (a) Agricultural products...

  6. 17 CFR 210.3-09 - Separate financial statements of subsidiaries not consolidated and 50 percent or less owned persons.

    Science.gov (United States)

    2010-04-01

    ... financial statements of subsidiaries not consolidated and 50 percent or less owned persons. (a) If any of... consolidated financial statements required by §§ 210.3-01 and 3-02. However, these separate financial... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Separate financial statements...

  7. Impact of soil moisture on extreme maximum temperatures in Europe

    Directory of Open Access Journals (Sweden)

    Kirien Whan

    2015-09-01

    Full Text Available Land-atmosphere interactions play an important role for hot temperature extremes in Europe. Dry soils may amplify such extremes through feedbacks with evapotranspiration. While previous observational studies generally focused on the relationship between precipitation deficits and the number of hot days, we investigate here the influence of soil moisture (SM on summer monthly maximum temperatures (TXx using water balance model-based SM estimates (driven with observations and temperature observations. Generalized extreme value distributions are fitted to TXx using SM as a covariate. We identify a negative relationship between SM and TXx, whereby a 100 mm decrease in model-based SM is associated with a 1.6 °C increase in TXx in Southern-Central and Southeastern Europe. Dry SM conditions result in a 2–4 °C increase in the 20-year return value of TXx compared to wet conditions in these two regions. In contrast with SM impacts on the number of hot days (NHD, where low and high surface-moisture conditions lead to different variability, we find a mostly linear dependency of the 20-year return value on surface-moisture conditions. We attribute this difference to the non-linear relationship between TXx and NHD that stems from the threshold-based calculation of NHD. Furthermore the employed SM data and the Standardized Precipitation Index (SPI are only weakly correlated in the investigated regions, highlighting the importance of evapotranspiration and runoff for resulting SM. Finally, in a case study for the hot 2003 summer we illustrate that if 2003 spring conditions in Southern-Central Europe had been as dry as in the more recent 2011 event, temperature extremes in summer would have been higher by about 1 °C, further enhancing the already extreme conditions which prevailed in that year.
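
The covariate relationship reported above (about 1.6 °C of TXx per 100 mm of soil moisture) can be illustrated numerically. The sketch below uses ordinary least squares on synthetic data rather than the paper's full generalized extreme value fit with a covariate; the generating numbers are assumptions, not the study's data.

```python
import random

# Synthetic illustration of the SM-TXx sensitivity: a simple least-squares
# slope stands in for the paper's GEV fit with soil moisture as a covariate.
random.seed(0)
sm = [random.uniform(50.0, 250.0) for _ in range(200)]          # soil moisture, mm
txx = [36.0 - 0.016 * s + random.gauss(0.0, 0.3) for s in sm]   # monthly max temp, degC

mean_sm = sum(sm) / len(sm)
mean_txx = sum(txx) / len(txx)
cov = sum((s - mean_sm) * (t - mean_txx) for s, t in zip(sm, txx))
var = sum((s - mean_sm) ** 2 for s in sm)
slope = cov / var                     # degC per mm of soil moisture
change_per_100mm = slope * 100.0      # recovers roughly -1.6 degC per +100 mm
```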

  8. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  9. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  10. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  11. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  12. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...

  13. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
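
The soft generalization of hard C-means mentioned above can be sketched as entropy-regularized clustering: memberships are Gibbs distributions over squared distances, and the algorithm reduces to hard C-means as the inverse temperature grows. This is a minimal one-dimensional sketch in the spirit of the described algorithm, with made-up data and a fixed inverse temperature, not the paper's exact formulation.

```python
import math

# Entropy-regularized soft clustering: E-step computes maximum-entropy
# memberships given the centers, M-step recomputes centers as weighted means.
data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
centers = [0.0, 6.0]
beta = 5.0                     # inverse temperature; beta -> inf gives hard C-means

for _ in range(50):
    # E-step: Gibbs memberships over squared distances to each center
    memberships = []
    for x in data:
        w = [math.exp(-beta * (x - c) ** 2) for c in centers]
        z = sum(w)
        memberships.append([wi / z for wi in w])
    # M-step: centers as membership-weighted means of the data
    for j in range(len(centers)):
        num = sum(m[j] * x for m, x in zip(memberships, data))
        den = sum(m[j] for m in memberships)
        centers[j] = num / den
```

With well-separated data the centers converge to the two cluster means near 1.0 and 5.0.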

  14. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  15. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...

  16. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  17. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  18. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  19. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine ... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy. ... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing ...

  20. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.

  1. Improvement of maximum power point tracking perturb and observe algorithm for a standalone solar photovoltaic system

    International Nuclear Information System (INIS)

    Awan, M.M.A.; Awan, F.G.

    2017-01-01

    Extraction of maximum power from a PV (Photovoltaic) cell is necessary to make the PV system efficient. Maximum power can be achieved by operating the system at the MPP (Maximum Power Point), i.e. taking the operating point of the PV panel to the MPP, and for this purpose MPPTs (Maximum Power Point Trackers) are used. There are many tracking algorithms/methods used by these trackers, including incremental conductance, the constant voltage method, the constant current method, the short circuit current method, the PAO (Perturb and Observe) method, and the open circuit voltage method, but PAO is the most widely used algorithm because it is simple and easy to implement. The PAO algorithm has some drawbacks: one is low tracking speed under rapidly changing weather conditions, and the second is oscillation of the PV system's operating point around the MPP. Little improvement has been achieved in past papers regarding these issues. In this paper, a new method named 'Decrease and Fix' is successfully introduced as an improvement to the PAO algorithm to overcome these issues of tracking speed and oscillations. The Decrease and Fix method is the first successful attempt with the PAO algorithm to achieve stability and speed up the tracking process in a photovoltaic system. A complete standalone photovoltaic system model with the improved perturb and observe algorithm is simulated in MATLAB Simulink. (author)
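
The classic Perturb and Observe loop that this record improves upon can be sketched in a few lines. The sketch below is not the paper's 'Decrease and Fix' variant: the panel is modeled by a hypothetical single-peak power-voltage curve, where a real tracker would read voltage and current from the PV hardware.

```python
# Classic Perturb and Observe (PAO) MPPT loop on a toy P-V curve.
def pv_power(v):
    """Hypothetical single-peak P-V curve with its maximum near v = 30 V."""
    return max(0.0, 200.0 - 0.22 * (v - 30.0) ** 2)

v, step = 20.0, 0.5           # operating voltage and perturbation size
p_prev = pv_power(v)
for _ in range(200):
    v += step                 # perturb the operating point
    p = pv_power(v)
    if p < p_prev:            # power fell: reverse the perturbation direction
        step = -step
    p_prev = p
# v now oscillates around the maximum power point; this steady-state
# oscillation is exactly the drawback the paper's method addresses.
```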

  2. Theoretical and experimental investigations of the limits to the maximum output power of laser diodes

    International Nuclear Information System (INIS)

    Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G

    2010-01-01

    The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral holeburning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial holeburning, are the dominant mechanisms limiting the maximum achievable output power.

  3. Maximum on the electrical conductivity polytherm of molten TeCl{sub 4}

    Energy Technology Data Exchange (ETDEWEB)

    Salyulev, Alexander B.; Potapov, Alexei M. [Russian Academy of Sciences, Ekaterinburg (Russian Federation). Inst. of High-Temperature Electrochemistry

    2017-09-01

    The electrical conductivity of molten TeCl{sub 4} was measured up to 761 K, i.e. 106 degrees above the normal boiling point of the salt. For the first time it was found that TeCl{sub 4} electrical conductivity polytherm has a maximum. It was recorded at 705 K (Κ{sub max}=0.245 Sm/cm), whereupon the conductivity decreases as the temperature rises. The activation energy of electrical conductivity was calculated.

  4. Maximum on the electrical conductivity polytherm of molten TeCl4

    International Nuclear Information System (INIS)

    Salyulev, Alexander B.; Potapov, Alexei M.

    2017-01-01

    The electrical conductivity of molten TeCl4 was measured up to 761 K, i.e. 106 degrees above the normal boiling point of the salt. For the first time it was found that the TeCl4 electrical conductivity polytherm has a maximum. It was recorded at 705 K (Κmax = 0.245 Sm/cm), whereupon the conductivity decreases as the temperature rises. The activation energy of electrical conductivity was calculated.

  5. Critical Assessment of the Surface Tension determined by the Maximum Pressure Bubble Method

    OpenAIRE

    Benedetto, Franco Emmanuel; Zolotucho, Hector; Prado, Miguel Oscar

    2015-01-01

    The main factors that influence the value of surface tension of a liquid measured with the Maximum Pressure Bubble Method are critically evaluated. We present experimental results showing the effect of capillary diameter, capillary depth, bubble spheroidicity and liquid density at room temperature. We show that the decrease of bubble spheroidicity due to increase of capillary immersion depth is not sufficient to explain the deviations found in the measured surface tension values. Thus, we pro...

  6. Bounds and maximum principles for the solution of the linear transport equation

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1981-01-01

    Pointwise bounds are derived for the solution of time-independent linear transport problems with surface sources in convex spatial domains. Under specified conditions, upper bounds are derived which, as a function of position, decrease with distance from the boundary. Also, sufficient conditions are obtained for the existence of maximum and minimum principles, and a counterexample is given which shows that such principles do not always exist

  7. Decreased shoulder function and pain common in recreational badminton players.

    Science.gov (United States)

    Fahlström, M; Söderman, K

    2007-06-01

    The aim of this study was to describe the prevalence and consequences of painful conditions in the shoulder region in recreational badminton players. A questionnaire study was performed on 99 players, of whom 57 were also assessed with Constant score. Previous or present pain in the dominant shoulder was reported by 52% of the players. Sixteen percent of the players had on-going shoulder pain associated with badminton play. A majority of these players reported that their training habits were affected by the pain. Total Constant score was lower in the painful shoulders. Furthermore, range of active pain-free shoulder abduction was decreased. However, isometric shoulder strength test showed no differences when compared with pain-free shoulders. Even though the pain caused functional problems, the players were still playing with on-going symptoms. The diagnoses were mostly unknown, although history and clinical tests indicate problems resembling subacromial impingement.

  8. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  9. Comparative Study of Regional Estimation Methods for Daily Maximum Temperature (A Case Study of the Isfahan Province

    Directory of Open Access Journals (Sweden)

    Ghamar Fadavi

    2016-02-01

    data (24 days for each year and 2 days for each month) were used for the different interpolation methods. Using difference measures, viz. Root Mean Square Error (RMSE), Mean Bias Error (MBE), Mean Absolute Error (MAE) and Correlation Coefficient (r), the performance and accuracy of each model were tested to select the best method. Results and Discussion: The normality of the data was assessed using the Kolmogorov-Smirnov test at the ninety-five percent (95%) significance level in Minitab software. The results show that the distribution of daily maximum temperature data had no significant difference from the normal distribution for either year. The inverse distance weighting method was used to estimate daily maximum temperature; for this purpose, the root mean square error (RMSE) was calculated for different values of the power (1 to 5) and number of stations (5, 10, 15 and 20). Based on the minimum RMSE, a power of 2 with 15 stations in 2007 and a power of 1 with 5 stations in 1992 were obtained as the optimum power and number of stations. The results also show that in the regression equation the correlation coefficient was more than 0.8 for most of the days. The regression coefficients of elevation (h) and latitude (y) were negative for almost all months, and the regression coefficient of longitude (x) was positive, showing decreasing temperature with increasing elevation and increasing temperature with increasing longitude. The results revealed that for the Kriging method the Gaussian model had the best semivariogram, followed by the spherical and exponential models, respectively, for the year 2007. In the year 1992, the spherical and Gaussian models had better semivariograms than the others. Elevation was the best variable to improve the Co-kriging method as auxiliary data, such that the correlation coefficient between temperature and elevation was more than 0.5 for all days. The results also show that for the Co-kriging method the spherical model had the best semivariogram and

  10. The Influence of Red Fruit Oil on Creatin Kinase Level at Maximum Physical Activity

    Science.gov (United States)

    Apollo Sinaga, Fajar; Hotliber Purba, Pangondian

    2018-03-01

    Heavy physical activities can cause oxidative stress, which results in muscle damage with elevated levels of the Creatine Kinase (CK) enzyme as an indicator. Oxidative stress can be prevented or reduced by antioxidant supplementation. One natural resource containing antioxidants is Red Fruit (Pandanus conoideus) Oil (RFO). This study aims to examine the effect of Red Fruit Oil on the Creatine Kinase (CK) level at maximum physical activity. This is an experimental study using a randomized control group pretest-posttest design. Twenty-four male mice were divided into four groups: the control group was given aquadest, while the treatment groups P1, P2, and P3 were given RFO orally at 0.15 ml/kgBW, 0.3 ml/kgBW, and 0.6 ml/kgBW, respectively, for a month. The CK level was checked for all groups at the beginning of the study and after maximum physical activity. The obtained data were then tested statistically using the t-test and ANOVA. The results show that RFO supplementation during exercise decreased the CK level in the P1, P2, and P3 groups (p<0.05), and a higher RFO dosage resulted in a lower CK level (p<0.05). The conclusion of this study is that Red Fruit Oil can decrease the CK level at maximum physical activity.

  11. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
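
The maximum LOD score idea can be made concrete with a tiny two-point linkage example. The counts below are hypothetical, and maximizing over a grid of recombination fractions is a simplified stand-in for maximizing over the full set of genetic parameters discussed in the paper.

```python
import math

# Two-point maximum LOD score: with r recombinants among n informative
# meioses, LOD(theta) = log10(L(theta) / L(0.5)); the maximum LOD score
# maximizes this over theta in (0, 0.5].
n, r = 20, 2                  # informative meioses and observed recombinants

def lod(theta):
    loglik = r * math.log10(theta) + (n - r) * math.log10(1.0 - theta)
    return loglik - n * math.log10(0.5)   # likelihood ratio vs. no linkage

thetas = [t / 1000.0 for t in range(1, 501)]
best_theta = max(thetas, key=lod)         # maximum likelihood estimate, near r/n
max_lod = lod(best_theta)
```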

  12. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    Choudhury, A.M.

    1984-09-01

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
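
The tangential profile described above (inverse-distance decay outside the circle of maximum wind) matches the classic Rankine-vortex form, sketched below. The numbers are illustrative assumptions, and the sketch omits the radial component the paper adds.

```python
# Rankine-style tangential wind profile: solid-body rotation inside the
# radius of maximum wind, 1/r decay outside it.
V_MAX = 50.0   # maximum tangential wind speed, m/s (hypothetical)
R_MAX = 40.0   # radius of maximum wind, km (hypothetical)

def tangential_wind(r):
    if r <= R_MAX:
        return V_MAX * r / R_MAX          # linear growth inside the eyewall
    return V_MAX * R_MAX / r              # inverse-distance decay outside

v_at_rmax = tangential_wind(R_MAX)        # peaks at the radius of maximum wind
v_at_2rmax = tangential_wind(2.0 * R_MAX) # halves at twice that radius
```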

  13. Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)

    Data.gov (United States)

    NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...

  14. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, ... convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability ...

  15. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy ... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.

  16. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    Development studies of thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature-dependent properties of TE materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap

  17. Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual

    Science.gov (United States)

    This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.

  18. ORIGINAL ARTICLES Surgical practice in a maximum security prison

    African Journals Online (AJOL)

    Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur) .... HIV positivity rate and the use of the rectum to store foreign objects. ... fruit in sunlight. Other positive health-promoting factors may also play a role.

  19. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  20. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  1. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of searching for the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network to hasten Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performances of these post-optimization techniques in accelerating MAX-3SAT logic programming are discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and computation time. Dev-C++ was used as the platform for training, testing and validating the proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
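
The MAX-3SAT objective itself is easy to state in code: count the clauses satisfied by an assignment and search for an assignment that maximizes the count. The sketch below uses a toy greedy bit-flip search as a stand-in for the paper's Hopfield-network approach, on a made-up three-variable formula.

```python
import random

# MAX-3SAT: maximize the number of satisfied clauses.
# Literals are signed 1-based variable indices; negative means negated.
clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]

def satisfied(assign, clauses):
    # a clause is satisfied if any of its literals is true under assign
    return sum(
        1 for clause in clauses
        if any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
    )

random.seed(0)
assign = [random.random() < 0.5 for _ in range(3)]
best = satisfied(assign, clauses)
improved = True
while improved:               # greedy local search: keep any improving flip
    improved = False
    for i in range(len(assign)):
        assign[i] = not assign[i]
        score = satisfied(assign, clauses)
        if score > best:
            best = score
            improved = True
        else:
            assign[i] = not assign[i]   # undo a non-improving flip
```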

  2. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show that there is a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
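
Maximum likelihood estimation of a two-component normal mixture is usually carried out with the EM algorithm, sketched below on synthetic one-dimensional data (the paper applies the same idea to stock market and rubber prices; the data here are assumptions for illustration).

```python
import math
import random

# EM for a two-component normal mixture on synthetic data.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 1.0) for _ in range(300)]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

w, mu1, mu2, s1, s2 = 0.5, -1.0, 6.0, 1.0, 1.0   # initial guesses
for _ in range(100):
    # E-step: posterior responsibility of component 1 for each point
    resp = []
    for x in data:
        a = w * normal_pdf(x, mu1, s1)
        b = (1 - w) * normal_pdf(x, mu2, s2)
        resp.append(a / (a + b))
    # M-step: update mixing weight, means, and standard deviations
    n1 = sum(resp)
    n2 = len(data) - n1
    w = n1 / len(data)
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
    s1 = math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1)
    s2 = math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2)
```

With well-separated components the estimates converge close to the generating parameters (means near 0 and 5, weight near 0.5).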

  3. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...

  4. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  5. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  6. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
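
    The deterministic bound of McGarr (2014) described above reduces to a one-line calculation: the upper limit on seismic moment is the shear modulus times the net injected volume, which converts to moment magnitude via the standard Hanks-Kanamori relation. A minimal sketch with illustrative numbers (G and the injected volume below are not taken from any specific case):

```python
import math

def mcgarr_max_moment(shear_modulus_pa, injected_volume_m3):
    """Deterministic upper bound of McGarr (2014): M0 <= G * dV."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment in N*m (Hanks-Kanamori scale)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Illustrative values: G = 30 GPa, net injected volume = 10^4 m^3.
m0_max = mcgarr_max_moment(30e9, 1.0e4)  # 3.0e14 N*m
mw_max = moment_magnitude(m0_max)        # about Mw 3.6
```

    Under this model, capping the net injected volume directly caps the maximum magnitude, which is the risk-management implication drawn in the abstract.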

  7. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  8. A tropospheric ozone maximum over the equatorial Southern Indian Ocean

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and from South America both peak in May and are directly responsible for the O3 maximum over the western ESIO, while the lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.

  9. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  10. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered flow region the pressure is a function of density. For an ideal perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density only if the entropy is constant in the entire considered flow region. An example is given of a stationary subsonic irrotational flow in which the entropy takes different values on different streamlines and the pressure is not a function of density; applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed that does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  11. On semidefinite programming relaxations of maximum k-section

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.

    2012-01-01

    We derive a new semidefinite programming bound for the maximum k-section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3 the new bound dominates a bound of Karisch and Rendl

  12. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  13. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. As the sun's illumination changes with the angle of incidence of solar radiation and with the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar-panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; no mathematical model is required, and the control method is therefore easy to implement in a real control system.
    In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the microchip's microcontroller unit control card and
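
    The perturbation and observation (hill-climbing) algorithm mentioned above can be sketched in a few lines: perturb the operating voltage, observe the power, keep moving in the direction that increased power and reverse otherwise. The single-peak panel curve and step size below are hypothetical toy values, not the paper's hardware:

```python
def po_step(v, p, v_prev, p_prev, dv=0.1):
    """One perturb-and-observe update: keep perturbing in the direction that
    raised power; reverse direction when power fell."""
    moved_up = v >= v_prev
    if p >= p_prev:
        return v + (dv if moved_up else -dv)
    return v - (dv if moved_up else -dv)

def panel_power(v):
    """Toy single-peak P-V characteristic with its maximum at v = 17 V."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

v_prev, v = 10.0, 10.1
p_prev = panel_power(v_prev)
for _ in range(200):
    p = panel_power(v)
    v, v_prev, p_prev = po_step(v, p, v_prev, p_prev), v, p
# v now oscillates within one perturbation step of the maximum power point
```

    The steady-state oscillation around the peak, and the slow climb when conditions change quickly, are exactly the drawbacks that motivate the fuzzy controller described in the abstract.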

  14. Influence of aliphatic amides on the temperature of maximum density of water

    International Nuclear Information System (INIS)

    Torres, Andrés Felipe; Romero, Carmen M.

    2017-01-01

    Highlights: • The addition of amides decreases the temperature of maximum density of water, suggesting a disruptive effect on water structure. • The amides in aqueous solution do not follow the Despretz equation in the concentration range considered. • The temperature shift Δθ as a function of molality is represented by a second-order equation. • The Despretz constants were determined considering the dilute concentration region for each amide solution. • The disrupting effect of the amides becomes smaller as their hydrophobic character increases. - Abstract: The influence of dissolved substances on the temperature of maximum density of water has been studied in relation to their effect on water structure, as they can change the equilibrium between structured and unstructured species of water. However, most work has been performed using salts, and studies with small organic solutes such as amides are scarce. In this work, the effect of acetamide, propionamide and butyramide on the temperature of maximum density of water was determined from density measurements using a magnetic float densimeter. Densities of aqueous solutions were measured within the temperature range T = (275.65–278.65) K at intervals of 0.50 K and in the concentration range between (0.10000 and 0.80000) mol·kg−1. The temperature of maximum density was determined from the experimental results. The effect of the three amides is to decrease the temperature of maximum density of water, and the change does not follow the Despretz equation. The results are discussed in terms of solute-water interactions and the disrupting effect of amides on water structure.
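
    The second-order dependence of Δθ on molality reported above (in place of the linear Despretz law Δθ = -K·m) can be extracted by ordinary least squares on the 2×2 normal equations. A sketch with hypothetical coefficients, not the paper's measured constants:

```python
def fit_quadratic_no_intercept(m_vals, d_vals):
    """Least-squares fit of dtheta = a*m + b*m^2 (no intercept),
    solving the 2x2 normal equations directly."""
    s11 = sum(m * m for m in m_vals)
    s12 = sum(m ** 3 for m in m_vals)
    s22 = sum(m ** 4 for m in m_vals)
    t1 = sum(m * d for m, d in zip(m_vals, d_vals))
    t2 = sum(m * m * d for m, d in zip(m_vals, d_vals))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (t2 * s11 - t1 * s12) / det
    return a, b

# Hypothetical data following dtheta = -0.4*m - 0.1*m^2 (illustrative only).
ms = [0.1, 0.2, 0.4, 0.6, 0.8]
ds = [-0.4 * m - 0.1 * m * m for m in ms]
a, b = fit_quadratic_no_intercept(ms, ds)  # recovers a ~ -0.4, b ~ -0.1
```

    In the dilute limit the quadratic term vanishes, so the linear coefficient a plays the role of the Despretz constant determined in the paper.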

  15. CO2 maximum in the oxygen minimum zone (OMZ)

    Directory of Open Access Journals (Sweden)

    V. Garçon

    2011-02-01

    Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000–2002) and a monthly monitoring (2000–2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg−1, up to 2350 μmol kg−1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence

  16. CO2 maximum in the oxygen minimum zone (OMZ)

    Science.gov (United States)

    Paulmier, A.; Ruiz-Pino, D.; Garçon, V.

    2011-02-01

    Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. We present here the dissolved inorganic carbon (DIC) structure, associated locally with the Chilean OMZ and globally with the most intense OMZs in the open ocean. To achieve this, we examine simultaneous DIC and O2 data collected off Chile during 4 cruises (2000-2002) and a monthly monitoring (2000-2001) in one of the shallowest OMZs, along with international DIC and O2 databases and climatology for other OMZs. High DIC concentrations (>2225 μmol kg-1, up to 2350 μmol kg-1) have been reported over the whole OMZ thickness, allowing the definition, for all studied OMZs, of a Carbon Maximum Zone (CMZ). Locally off Chile, the shallow cores of the OMZ and CMZ are spatially and temporally collocated at 21° S, 30° S and 36° S despite different cross-shore, long-shore and seasonal configurations. Globally, the mean state of the main OMZs also corresponds to the largest carbon reserves of the ocean in subsurface waters. The CMZs-OMZs could then induce a positive feedback for the atmosphere during upwelling activity, as potential direct local sources of CO2. The CMZ paradoxically presents a slight "carbon deficit" in its core (~10%), meaning a DIC increase from the oxygenated ocean to the OMZ lower than the corresponding O2 decrease (assuming classical C/O molar ratios). This "carbon deficit" would be related to regional thermal mechanisms affecting O2 faster than DIC (due to the carbonate buffer effect) and occurring upstream in warm waters (e.g., in the Equatorial Divergence), where the CMZ-OMZ core originates. The "carbon deficit" in the CMZ core would be mainly compensated locally at the

  17. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.

  18. Percent of Impervious Cover

    Data.gov (United States)

    U.S. Environmental Protection Agency — High amounts of impervious cover (parking lots, rooftops, roads, etc.) can increase water runoff, which may directly enter surface water. Runoff from roads often...

  19. The 4-percent universe

    CERN Document Server

    Panek, Richard

    2012-01-01

    It is one of the most disturbing aspects of our universe: only four per cent of it consists of the matter that makes up every star, planet, and every book. The rest is completely unknown. Acclaimed science writer Richard Panek tells the story of the handful of scientists who have spent the past few decades on a quest to unlock the secrets of “dark matter" and the even stranger substance called “dark energy". These are perhaps the greatest mysteries in science,and solving them will reshape our understanding of the universe and our place in it. The stakes could not be higher. Panek's fast-paced

  20. Percent Forest Cover (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — Forests provide economic and ecological value. High percentages of forest cover (FORPCTFuture) generally indicate healthier ecosystems and cleaner surface water....

  1. Percent Forest Cover

    Data.gov (United States)

    U.S. Environmental Protection Agency — Forests provide economic and ecological value. High percentages of forest cover (FORPCT) generally indicate healthier ecosystems and cleaner surface water. More...

  2. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-Square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
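
    The return-period estimates above follow from the quantiles of the fitted distribution: the design rainfall for return period T is the (1 - 1/T) quantile, and the exceedance probability of a depth is one minus its CDF value. A sketch for the normal case (the best fit reported for annual MDR), using hypothetical mean and standard deviation rather than the paper's fitted parameters:

```python
from statistics import NormalDist

def mdr_for_return_period(mu, sigma, t_years):
    """Design maximum daily rainfall for return period T: the (1 - 1/T)
    quantile of the fitted normal distribution."""
    return NormalDist(mu, sigma).inv_cdf(1.0 - 1.0 / t_years)

def exceedance_probability(mu, sigma, mdr_mm):
    """Percent chance that the annual MDR exceeds a given depth."""
    return 100.0 * (1.0 - NormalDist(mu, sigma).cdf(mdr_mm))

# Hypothetical fitted parameters (illustrative only, not the paper's values).
mu, sigma = 50.0, 30.0
t2 = mdr_for_return_period(mu, sigma, 2)    # the median annual MDR
t25 = mdr_for_return_period(mu, sigma, 25)  # the 96th-percentile depth
```

    The same two functions apply to the other fitted families (Lognormal, Weibull, Pearson 5) once their quantile and CDF functions are substituted.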

  3. The maximum percentage of fly ash to replace part of original Portland cement (OPC) in producing high strength concrete

    Science.gov (United States)

    Mallisa, Harun; Turuallo, Gidion

    2017-11-01

    This research investigates the maximum percentage of fly ash that can replace part of Original Portland Cement (OPC) in producing high strength concrete. Many researchers have found that the incorporation of industrial by-products such as fly ash in producing concrete can improve its properties in both the fresh and hardened states. The water-binder ratio used was 0.30. The sand used was medium sand, and the maximum size of coarse aggregate was 20 mm. The cement was Type I Bosowa Cement, produced by PT Bosowa. The percentages of fly ash to the total binder used in this research were 0, 10, 15, 20, 25 and 30%, while the superplasticizer used was type Naptha 511P. The results showed that replacement of cement up to 25% of the total weight of binder resulted in a compressive strength at one day higher than the minimum strength for high-strength concrete.

  4. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    physically possible upper limits of precipitation due to climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of 42 historical extreme precipitation events demonstrate that the 72-hr basin averaged probable maximum precipitation is 21.72 inches for the exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches as published in the hydrometeorological report no. 59 and a previous PMP value was 31.48 inches as published in the hydrometeorological report no. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimations correspond to 0.036 percent and 0.011 percent, respectively.

  5.  Running speed during training and percent body fat predict race time in recreational male marathoners

    Directory of Open Access Journals (Sweden)

    Barandun U

    2012-07-01

    Background: Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same anthropometric and training characteristics as used for ultramarathoners. Methods: Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results: After multivariate regression, running speed of the training units (β=-0.52, P<0.0001) and percent body fat (β=0.27, P<0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) – 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r=0.33, P=0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion: The present results suggest that low body fat and a running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners. Keywords: body fat, skinfold thickness, anthropometry, endurance, athlete
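
    The regression equation reported above can be applied directly; a minimal sketch with the coefficients exactly as given in the abstract (the example runner's values are illustrative):

```python
def predicted_race_time_min(body_fat_pct, training_speed_kmh):
    """Race-time estimate from the abstract's regression (r^2 = 0.44):
    time [min] = 326.3 + 2.394*(body fat, %) - 12.06*(training speed, km/h)."""
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh

# A runner with 15% body fat who trains at 11 km/h:
t = predicted_race_time_min(15.0, 11.0)  # about 229.6 min (~3 h 50 min)
```

    Since the model explains only 44% of the variance, such an estimate is a rough guide rather than a precise prediction.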

  6. The Divisia real energy intensity indices: Evolution and attribution of percent changes in 20 European countries from 1995 to 2010

    International Nuclear Information System (INIS)

    Fernández González, P.; Landajo, M.; Presno, M.J.

    2013-01-01

    This paper analyzes the evolution of real energy efficiency in the European Union and the attribution across countries of its percent change. Relying on a multiplicative energy intensity approach that is implemented through the Sato-Vartia Logarithmic Mean Divisia Index method, we decompose the change in aggregate energy intensity in 20 European countries for the period from 1995 to 2010. A comparative analysis of real energy intensity indices is also carried out. In addition, a new tool to monitor changes in real energy intensity in greater detail is applied. The attribution analysis of IDA (Index Decomposition Analysis) as proposed by Choi and Ang (Choi KH, Ang BW. Attribution of changes in Divisia real energy intensity index – an extension to index decomposition analysis. Energy Economics 2012;34:171–6) is used in order to assess the contribution of each individual sector to the percent change in real energy intensity. Results indicate that the European countries, particularly the former communist ones, made a remarkable effort to improve energy efficiency. Our analysis also suggests some strategies (including promotion of and adaptation to more efficient techniques, innovation, improved use of technologies, R&D, and substitution toward higher-quality energies) which are of particular interest to the industry sector, including construction, in ex-communist EU members, and to the industry and transport plus hotels and restaurants sectors in Western countries. - Highlights: • We apply a single and multi-period attribution analysis approach [1]. • Technical change, improved use of technologies and higher-quality energies are keys to the AEI drop. • Real energy intensity shows valuable progress in former communist European members. • The biggest attribution of percent change in real energy intensity was to Industry. • Western EU: Services and Agriculture were poor contributors to the real energy intensity drop

  7. Spontaneous entropy decrease and its statistical formula

    OpenAIRE

    Xing, Xiu-San

    2007-01-01

    Why can the world resist the law of entropy increase and produce self-organizing structure? Does the entropy of an isolated system always only increase and never decrease? Can thermodynamic degradation and self-organizing evolution be united, and how? In this paper, starting from the nonequilibrium entropy evolution equation, we prove that a new entropy decrease can spontaneously emerge in a nonequilibrium system with internal attractive interaction. This new entropy decrease coexists wit...

  8. Psychophysical basis for maximum pushing and pulling forces: A review and recommendations.

    Science.gov (United States)

    Garg, Arun; Waters, Thomas; Kapellusch, Jay; Karwowski, Waldemar

    2014-03-01

    The objective of this paper was to perform a comprehensive review of psychophysically determined maximum acceptable pushing and pulling forces. Factors affecting pushing and pulling forces are identified and discussed. Recent studies show a significant decrease (compared to previous studies) in maximum acceptable forces for males but not for females when pushing and pulling on a treadmill. A comparison of pushing and pulling forces measured using a high inertia cart with those measured on a treadmill shows that the pushing and pulling forces using high inertia cart are higher for males but are about the same for females. It is concluded that the recommendations of Snook and Ciriello (1991) for pushing and pulling forces are still valid and provide reasonable recommendations for ergonomics practitioners. Regression equations as a function of handle height, frequency of exertion and pushing/pulling distance are provided to estimate maximum initial and sustained forces for pushing and pulling acceptable to 75% male and female workers. At present it is not clear whether pushing or pulling should be favored. Similarly, it is not clear what handle heights would be optimal for pushing and pulling. Epidemiological studies are needed to determine relationships between psychophysically determined maximum acceptable pushing and pulling forces and risk of musculoskeletal injuries, in particular to low back and shoulders.

  9. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.

  10. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data Analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. By using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
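
    Higuchi's method, used above to estimate the fractal dimension of the sunspot-number series, can be sketched as follows. This is a generic implementation of the published algorithm, not SIDC's or the author's code.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D time series by Higuchi's method."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lnk, lnL = [], []
    for k in range(1, kmax + 1):
        Lm = []
        for m in range(k):                  # one curve length per offset m
            idx = np.arange(m, N, k)        # subsampled series x[m], x[m+k], ...
            n = len(idx) - 1                # number of increments
            if n < 1:
                continue
            norm = (N - 1) / (n * k)        # Higuchi's length normalization
            Lm.append(np.abs(np.diff(x[idx])).sum() * norm / k)
        lnk.append(np.log(1.0 / k))
        lnL.append(np.log(np.mean(Lm)))
    slope, _ = np.polyfit(lnk, lnL, 1)      # ln L(k) ~ D * ln(1/k)
    return slope

# Sanity check: a straight line has fractal dimension 1, white noise ~2.
print(higuchi_fd(np.arange(1000.0)))        # ~ 1.0
```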

  11. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.

  12. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.

    2013-10-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
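
    For reference, in the low-dissipation Carnot model cited above [Esposito et al., Phys. Rev. Lett. 105, 150603 (2010)], the efficiency at maximum power is bracketed by known bounds, the upper of which is the "universal upper bound" referred to in the abstract:

```latex
\[
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
\qquad \eta_C = 1 - \frac{T_c}{T_h}.
\]
```

    In the symmetric-dissipation case, η* reduces to the Curzon–Ahlborn value 1 − √(T_c/T_h).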

  13. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at age 103 years for men and age 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  14. Ceruletide decreases food intake in non-obese man.

    Science.gov (United States)

    Stacher, G; Steinringer, H; Schmierer, G; Schneider, C; Winklehner, S

    1982-01-01

    Cholecystokinin decreases food intake in animals and in man. This study investigated whether the structurally related ceruletide reduces food intake in healthy non-obese man. Twelve females and 12 males participated, after an overnight fast, in each of two experiments. During the basal 40 min, saline was infused IV. Thereafter, the infusion was, in random double-blind fashion, either continued with saline or switched to 60 or 120 ng/kg b. wt/hr ceruletide. Butter was melted in a pan and scrambled eggs with ham were prepared in front of the subjects, who were instructed to eat, together with bread and mallow tea, as much as they wanted. With 120 ng/kg/hr ceruletide, the subjects ate significantly less (16.8%) than with saline (3725 kJ +/- 489 SEM and 4340 kJ +/- 536, respectively; p less than 0.025). They also reported less hunger (p less than 0.005) and activation (p less than 0.01), and had longer reaction times (p less than 0.01) and a weaker psychomotor performance (p less than 0.025). 60 ng/kg/hr ceruletide decreased food intake only slightly (6.6%; 3089 kJ +/- 253 and 3292 kJ +/- 300, respectively), and no significant changes in the above measures occurred. In conclusion, ceruletide reduces food intake in man, thus resembling the effects of cholecystokinin.

  15. Review of revised Klamath River Total Maximum Daily Load models from Link River Dam to Keno Dam, Oregon

    Science.gov (United States)

    Rounds, Stewart A.; Sullivan, Annett B.

    2013-01-01

    20°C, causing the model to predict higher dissolved oxygen (DO) concentrations in spring, autumn, and winter. Although that change to the temperature dependence function was done to make the function more similar to the model’s default, this change was not accompanied by any documentation of recalibration or sensitivity exercises. The maximum SOD rate for the 2002 current conditions scenario was decreased from 3.0 grams per square meter per day (g/m2/d) in the original model to 2.0 g/m2/d in the revised model, a considerable adjustment that appears to have been needed to offset effects of a change to another variable (O2LIM) that would have resulted in a substantial increase in the effective SOD rate for 2002. A 50-percent decrease in the SOD rate over a 2-year period, however, is not likely to be mirrored by field measurements, so this change may be compensating for some process that is not represented correctly in the DO budget for the current conditions scenarios. Several important changes were made to the natural conditions scenario. First, the elevation of the Keno reef was corrected; the elevation specified in the original model was 1 foot too high, which affected the volume of the pooled reach and the travel time through it. The most important changes to this scenario were to the upstream boundary inputs of organic matter and algae, which affect incoming fluxes of nitrogen and phosphorus. Algal biomass inputs were increased by approximately 60 percent during summer because of a change in the way those inputs were derived from results of the UKL TMDL model. Non-algal organic matter inputs were decreased, particularly in summer to correct a problem attributed to double-counting of phosphorus in the original inputs. The distribution of non-algal organic matter was changed from 20 percent dissolved in the original model to 90 percent dissolved in the revised model in response to review comments and published data. The overall sum of algal biomass and non

  16. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  17. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  18. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give...... different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...... that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy....
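
    The estimate mentioned above can be made concrete for a module modeled as a linear (constant internal resistance) source, where the open-circuit voltage and short-circuit current fix the matched-load maximum power. This is the standard linear estimate, not the paper's nonlinear model; the example numbers are ours, not from the TEG datasheets.

```python
def max_power_estimate(v_oc, i_sc):
    """Matched-load maximum power of a linear source.

    With internal resistance R = v_oc / i_sc, power into a load R_L peaks
    at R_L = R, where V = v_oc/2 and I = i_sc/2, so P_max = v_oc*i_sc/4.
    """
    return v_oc * i_sc / 4.0

# A ~10% discrepancy in short-circuit current between the two switch modes
# propagates directly into a ~10% discrepancy in the estimated P_max:
p_open_to_short = max_power_estimate(4.0, 2.0)   # 2.0 W
p_short_to_open = max_power_estimate(4.0, 2.2)   # 2.2 W
```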

  19. Mass mortality of the vermetid gastropod Ceraesignum maximum

    Science.gov (United States)

    Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.

    2016-09-01

    Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.

  20. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation
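
    In the classical (Maxwell-Boltzmann) limit mentioned above, the maximum entropy closure takes a closed form known as the Minerbo closure; maximizing the entropy at fixed density and flux yields, with Lagrange multiplier a,

```latex
\[
f(\mu) \propto e^{a\mu}, \qquad
\langle\mu\rangle = \coth a - \frac{1}{a}, \qquad
\chi \equiv \langle\mu^{2}\rangle = 1 - \frac{2}{a}\left(\coth a - \frac{1}{a}\right),
\]
```

    so the variable Eddington factor χ depends on the flux alone in this limit, whereas for Fermi-Dirac statistics it depends on both the occupation density and the flux, as the abstract states.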

  1. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
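
    The arbitrage-only linear program described above can be sketched as follows. The hourly prices and device parameters are hypothetical placeholders, not the CAISO/Tehachapi data; the round-trip loss is folded into the charging term of the state-of-charge update.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices ($/MWh) and device parameters -- illustrative only.
prices = np.array([20.0, 15.0, 30.0, 45.0, 40.0, 25.0])
T = len(prices)
P_max, E_max, eta = 1.0, 4.0, 0.9        # MW power limit, MWh capacity, round-trip eff.

# Decision vector x = [c_1..c_T, d_1..d_T]: charge and discharge power (MW).
# Maximize revenue sum(p*d) - cost sum(p*c), i.e. minimize p.c - p.d.
obj = np.concatenate([prices, -prices])

# State of charge after hour t: SOC_t = sum_{s<=t} (eta*c_s - d_s),
# constrained to 0 <= SOC_t <= E_max for every t.
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([eta * L, -L]),     #  SOC_t <= E_max
                  np.hstack([-eta * L, L])])    # -SOC_t <= 0
b_ub = np.concatenate([np.full(T, E_max), np.zeros(T)])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P_max)] * (2 * T))
print(round(-res.fun, 2))   # maximum arbitrage revenue ($): 43.33 for these inputs
```

    The optimum buys during the two cheapest hours (plus a fraction of the third) and sells during the two most expensive ones; the regulation-market case would add offered regulation capacity as further decision variables.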

  2. FINANCIAL PROBLEMS OF BOTTOM 40 PERCENT BUMIPUTERA IN MALAYSIA: A POSSIBLE SOLUTION THROUGH WAQF-BASED CROWDFUNDING

    Directory of Open Access Journals (Sweden)

    Abidullah ABID

    2017-02-01

    Full Text Available Low savings by the bottom 40 percent Bumiputera triggered low wealth accumulation and greater wealth inequality. Behind the low savings are increases in food prices, taxes, and interest rates for borrowers. In response to these problems, the Malaysian government provides the cash transfer BR1M as one of its redistributive measures. However, it is still not enough, as many of the bottom 40 percent are unable to avail themselves of the facility. In such circumstances, the role of community-based cash and credit transfer schemes such as cash waqf can be more fruitful. However, due to the inefficiency of awqaf institutions in financial management, the system cannot contribute effectively. Therefore, the main aim of this paper is to provide an efficient mechanism that can ensure effectiveness in the transparency, collection, and distribution of cash endowments and their benefits. Hence, by means of library research, we propose a conceptual framework. It is suggested that collection and distribution of cash waqf through a crowdfunding platform would address both the transparency issues and the collection and distribution issues. This study provides a ground for researchers and practitioners who are working on finding the right approach to implement waqf more efficiently.

  3. Percent voluntary inactivation and peak force predictions with the interpolated twitch technique in individuals with high ability of voluntary activation

    International Nuclear Information System (INIS)

    Herda, Trent J; Walter, Ashley A; Hoge, Katherine M; Stout, Jeffrey R; Costa, Pablo B; Ryan, Eric D; Cramer, Joel T

    2011-01-01

    The purpose of this study was to examine the sensitivity and peak force prediction capability of the interpolated twitch technique (ITT) performed during submaximal and maximal voluntary contractions (MVCs) in subjects with the ability to maximally activate their plantar flexors. Twelve subjects performed two MVCs and nine submaximal contractions with the ITT method to calculate percent voluntary inactivation (%VI). Additionally, two MVCs were performed without the ITT. Polynomial models (linear, quadratic and cubic) were applied to the 10–90% VI and 40–90% VI versus force relationships to predict force. Peak force from the ITT MVC was 6.7% less than peak force from the MVC without the ITT. Fifty-eight percent of the 10–90% VI versus force relationships were best fit with nonlinear models; however, all 40–90% VI versus force relationships were best fit with linear models. Regardless of the polynomial model or the contraction intensities used to predict force, all models underestimated the actual force by 22% to 28%. There was low sensitivity of the ITT method at high contraction intensities, and the predicted force from polynomial models significantly underestimated the actual force. Caution is warranted when interpreting %VI at high contraction intensities and predicted peak force from submaximal contractions.
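
    The polynomial-model comparison described above can be sketched with a generic least-squares fit: fit force as a function of %VI at submaximal intensities, then extrapolate to 0% inactivation to "predict" peak force. The %VI and force values below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical submaximal data: percent voluntary inactivation (%VI) from the
# interpolated twitch technique, and force produced at each contraction intensity.
vi    = np.array([55, 42, 33, 26, 20, 15, 11, 8, 6], dtype=float)   # %VI
force = np.array([40, 80, 120, 160, 200, 240, 280, 320, 360.0])     # force (N)

# Fit linear, quadratic, and cubic models of force versus %VI, then
# extrapolate each to 0% voluntary inactivation to predict peak force.
for deg, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    coef = np.polyfit(vi, force, deg)
    print(name, round(np.polyval(coef, 0.0), 1))
```

    Consistent with the study's conclusion, such extrapolations can misestimate true peak force; the authors report underestimates of 22-28%.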

  4. Effects of age, adipose percent, and reproduction on PCB concentrations and profiles in an extreme fasting North Pacific marine mammal.

    Directory of Open Access Journals (Sweden)

    Sarah H Peterson

    Full Text Available Persistent organic pollutants, including polychlorinated biphenyls (PCBs), are widely distributed and detectable far from anthropogenic sources. Northern elephant seals (Mirounga angustirostris) biannually travel thousands of kilometers to forage in coastal and open-ocean regions of the northeast Pacific Ocean and then return to land where they fast while breeding and molting. Our study examined potential effects of age, adipose percent, and the difference between the breeding and molting fasts on PCB concentrations and congener profiles in blubber and serum of northern elephant seal females. Between 2005 and 2007, we sampled blubber and blood from 58 seals before and after a foraging trip, which were then analyzed for PCBs. Age did not significantly affect total PCB concentrations; however, the proportion of PCB congeners with different numbers of chlorine atoms was significantly affected by age, especially in the outer blubber. Younger adult females had a significantly greater proportion of low-chlorinated PCBs (tri-, tetra-, and penta-CBs) than older females, with the opposite trend observed for hepta-CBs, indicating that an age-associated process such as parity (birth) may significantly affect congener profiles. The percent of adipose tissue had a significant relationship with inner blubber PCB concentrations, with the highest mean concentrations observed at the end of the molting fast. These results highlight the importance of sampling across the entire blubber layer when assessing contaminant levels in phocid seals and taking into account the adipose stores and reproductive status of an animal when conducting contaminant research.

  5. Analysis of small field percent depth dose and profiles: Comparison of measurements with various detectors and effects of detector orientation with different jaw settings

    Directory of Open Access Journals (Sweden)

    Henry Finlay Godson

    2016-01-01

    Full Text Available The advent of modern technologies in radiotherapy poses an increased challenge in the determination of dosimetric parameters of small fields that exhibit a high degree of uncertainty. Percent depth dose and beam profiles were acquired using different detectors in two different orientations. Parameters such as relative surface dose (DS), depth of dose maximum (Dmax), percentage dose at 10 cm (D10), penumbral width, flatness, and symmetry were evaluated with different detectors. The dosimetric data were acquired for fields defined by jaws alone, multileaf collimator (MLC) alone, and by MLC while the jaws were positioned at 0, 0.25, 0.5, and 1.0 cm away from the MLC leaf-end, using a Varian linear accelerator with a 6 MV photon beam. The accuracy in the measurement of dosimetric parameters with various detectors for three different field definitions was evaluated. The relative DS (38.1%) with the photon field diode in parallel orientation was higher than the electron field diode (EFD) value (27.9%) for the 1 cm × 1 cm field. Overestimations of 5.7% and 8.6% in D10 were observed for the 1 cm × 1 cm field with the RK ion chamber in parallel and perpendicular orientations, respectively, for the fields defined by MLC with the jaws positioned at the edge of the field, when compared to EFD values in parallel orientation. For this field definition, the in-plane penumbral widths obtained with the ion chamber in parallel and perpendicular orientations were 3.9 mm and 5.6 mm for the 1 cm × 1 cm field, respectively. Among all detectors used in the study, the unshielded diodes were found to be an appropriate choice of detector for the measurement of beam parameters in small fields.

  6. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    Ning, A; Dykes, K

    2014-01-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  7. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  8. Mastery Learning and the Decreasing Variability Hypothesis.

    Science.gov (United States)

    Livingston, Jennifer A.; Gentile, J. Ronald

    1996-01-01

    This report results from studies that tested two variations of Bloom's decreasing variability hypothesis using performance on successive units of achievement in four graduate classrooms that used mastery learning procedures. Data do not support the decreasing variability hypothesis; rather, they show no change over time. (SM)

  9. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  10. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  11. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
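
    The break-even logic of the economic model described above (annualized well-drilling cost plus pumping energy cost compared against crop revenue) can be sketched as follows. All parameter values are illustrative placeholders, not the study's cost or revenue data, and the static head is simplistically taken equal to the well depth.

```python
RHO_G = 9810.0   # rho * g for water: joules needed to lift 1 m^3 of water by 1 m

def annual_cost(depth_m, volume_m3, drill_cost_per_m=100.0, annualize=0.1,
                energy_price_per_kwh=0.15, pump_eff=0.6):
    """Annualized drilling cost plus yearly pumping energy cost (USD).

    The static head is assumed equal to the well depth for simplicity;
    all default parameter values are hypothetical.
    """
    capital = drill_cost_per_m * depth_m * annualize
    energy_kwh = RHO_G * depth_m * volume_m3 / pump_eff / 3.6e6   # J -> kWh
    return capital + energy_kwh * energy_price_per_kwh

def max_economic_depth(annual_revenue, volume_m3, max_depth=1000):
    """Deepest well (1 m steps) whose annual cost still stays within revenue."""
    best = 0
    for d in range(1, max_depth + 1):
        if annual_cost(d, volume_m3) <= annual_revenue:
            best = d
        else:
            break            # cost is monotone increasing in depth
    return best

print(max_economic_depth(30050.0, 1e5))   # -> 384 (m) with these placeholders
```

    A production analysis would replace the placeholder revenue with the crop-specific gross irrigation revenue and scale drilling and energy prices by local labour and electricity costs, as the abstract describes.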

  12. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  13. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate versus VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk/run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  14. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low pass filter applied to the design parametrization. The main idea...

  15. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
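    As a toy illustration of the interplay between worst-case return and entropy (not the authors' continuous method): for a long-only portfolio with interval returns, the worst case is attained at each interval's lower bound, and adding an entropy term to that objective gives closed-form softmax (Gibbs) weights. The function name and the temperature parameter are assumptions of this sketch:

```python
import numpy as np

def robust_entropy_weights(lower_bounds, temperature=0.05):
    """Maximize (worst-case return + temperature * entropy) over long-only
    weights. With interval returns the worst case sits at the lower bounds,
    so the entropy-regularized optimum is a softmax of those bounds."""
    l = np.asarray(lower_bounds, dtype=float)
    z = np.exp((l - l.max()) / temperature)  # shift for numerical stability
    return z / z.sum()
```

Weights concentrate on the assets with the highest guaranteed (lower-bound) return; as the temperature grows, the allocation spreads toward the uniform, maximum-entropy portfolio.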

  16. Tribal implementation of a patient-centred medical home model in Alaska accompanied by decreased hospital use

    Directory of Open Access Journals (Sweden)

    Janet M. Johnston

    2013-08-01

    Full Text Available Background. Between 1995 and 1998, tribally owned Southcentral Foundation (SCF) incrementally assumed responsibility from the Indian Health Service (IHS) for primary care services on the Alaska Native Medical Center (ANMC) campus in Anchorage, Alaska. In 1999, SCF began implementing components of a Patient-Centered Medical Home (PCMH) model to improve access and continuity of care. Objective. To evaluate hospitalisation trends before, during and after PCMH implementation. Design. Time series analysis of aggregated medical record data. Methods. Regression analysis with correlated errors was used to estimate trends over time for the percent of customer-owners hospitalised overall and for specific conditions during 4 time periods (March 1996–July 1999: SCF assumes responsibility for primary care; August 1999–July 2000: PCMH implementation starts; August 2000–April 2005: early post-PCMH implementation; May 2005–December 2009: later post-PCMH implementation). Analysis was restricted to individuals residing in Southcentral Alaska and receiving health care at ANMC. Results. The percent of SCF customer-owners hospitalised per month for any reason was steady before and during PCMH implementation, declined steadily immediately following implementation and subsequently stabilised. The percent hospitalised per month for unintentional injury or poisoning also declined during and after the PCMH implementation. Among adult asthma patients, the percent hospitalised annually for asthma declined prior to and during implementation and remained lower thereafter. The percent of heart failure patients hospitalised annually for heart failure remained relatively constant throughout the study period while the percent of hypertension patients hospitalised for hypertension shifted higher between 1999 and 2002 compared to earlier and later years. Conclusion. Implementation of PCMH at SCF was accompanied by decreases in the percent of customer-owners hospitalised monthly

  17. Estimate of annual daily maximum rainfall and intense rain equation for the Formiga municipality, MG, Brazil

    Directory of Open Access Journals (Sweden)

    Giovana Mara Rodrigues Borges

    2016-11-01

    Full Text Available Knowledge of the probabilistic behavior of rainfall is extremely important to the design of drainage systems, dam spillways, and other hydraulic projects. This study therefore examined statistical models to predict annual daily maximum rainfall as well as models of heavy rain for the city of Formiga - MG. To do this, annual maximum daily rainfall data were ranked in decreasing order and assigned exceedance probabilities in order to identify the statistical distribution that best describes them. Daily rainfall disaggregation methodology was used for the intense rain model studies and adjusted with Intensity-Duration-Frequency (IDF) and Exponential models. The study found that the Gumbel model better adhered to the data regarding observed frequency as indicated by the Chi-squared test, and that the exponential model best conformed to the observed data to predict intense rains.
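    A Gumbel fit of the kind described can be sketched with a method-of-moments estimate. The data below are synthetic stand-ins, not the Formiga series, and the function name is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for 60 years of annual daily-maximum rainfall (mm).
annual_maxima = rng.gumbel(loc=80.0, scale=15.0, size=60)

# Method-of-moments Gumbel fit: scale = s*sqrt(6)/pi, loc = mean - gamma*scale
EULER_GAMMA = 0.5772156649
scale = annual_maxima.std(ddof=1) * np.sqrt(6.0) / np.pi
loc = annual_maxima.mean() - EULER_GAMMA * scale

def design_rainfall(return_period_years):
    """Rainfall depth with annual exceedance probability 1/T (Gumbel quantile)."""
    p = 1.0 - 1.0 / return_period_years
    return loc - scale * np.log(-np.log(p))
```

`design_rainfall(100)` gives the 100-year daily rainfall under the fitted distribution; disaggregation factors would then convert it to the sub-daily intensities used in an IDF equation.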

  18. Evolution of the Nova Vulpeculae no.1 1968 (LV Vul) spectrum after the maximum brightness

    International Nuclear Information System (INIS)

    Andrijya, I.; Antipova, L.I.; Babaev, M.B. (AN Azerbajdzhanskoj SSR, Baku. Shemakhinskaya Astrofizicheskaya Observatoriya)

    1986-01-01

    The analysis of the spectral evolution of LV Vulpeculae 1968 after the maximum brightness was carried out. It is shown that the pre-maximum spectrum was replaced by the principal one in less than 24 h. The diffuse enhanced spectrum and the Orion one existed already when the Nova brightness had decreased by only 0.4 mag and 0.5 mag, respectively. The radial velocities of the Orion spectrum coincided with those of the diffuse enhanced one during the total observational period. The Orion spectrum consists of the lines of He I, N II, O II and possibly H I. The appearance of two additional components is probably due to splitting of the principal and diffuse enhanced spectra

  19. EXTREME MAXIMUM AND MINIMUM AIR TEMPERATURE IN MEDİTERRANEAN COASTS IN TURKEY

    Directory of Open Access Journals (Sweden)

    Barbaros Gönençgil

    2016-01-01

    Full Text Available In this study, we determined extreme maximum and minimum temperatures in both summer and winter seasons at the stations in the Mediterranean coastal areas of Turkey. In the study, the data of 24 meteorological stations for the daily maximum and minimum temperatures of the period 1970–2010 were used. From this database, a set of four extreme temperature indices was applied: warm (TX90) and cold (TN10) days, and warm spell (WSDI) and cold spell (CSDI) duration. The threshold values were calculated for each station to determine the temperatures that were above and below the seasonal norms in winter and summer. The TX90 index displays a positive, statistically significant trend, while TN10 displays a negative, nonsignificant trend. The occurrence of warm spells shows a statistically significant increasing trend, while the cold spells show a significantly decreasing trend over the Mediterranean coastline in Turkey.

  20. Effect of the Maximum Dose on White Matter Fiber Bundles Using Longitudinal Diffusion Tensor Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Tong; Chapman, Christopher H. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Tsien, Christina [Department of Radiation Oncology, Washington University at St Louis, St Louis, Missouri (United States); Kim, Michelle; Spratt, Daniel E.; Lawrence, Theodore S. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Cao, Yue, E-mail: yuecao@umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Department of Radiology, University of Michigan, Ann Arbor, Michigan (United States); Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan (United States)

    2016-11-01

    Purpose: Previous efforts to decrease neurocognitive effects of radiation focused on sparing isolated cortical structures. We hypothesize that understanding temporal, spatial, and dosimetric patterns of radiation damage to whole-brain white matter (WM) after partial-brain irradiation might also be important. Therefore, we carried out a study to develop the methodology to assess radiation therapy (RT)–induced damage to whole-brain WM bundles. Methods and Materials: An atlas-based, automated WM tractography analysis was implemented to quantify longitudinal changes in indices of diffusion tensor imaging (DTI) of 22 major WM fibers in 33 patients with predominantly low-grade or benign brain tumors treated by RT. Six DTI scans per patient were performed from before RT to 18 months after RT. The DTI indices and planned doses (maximum and mean doses) were mapped onto profiles of each of 22 WM bundles. A multivariate linear regression was performed to determine the main dose effect as well as the influence of other clinical factors on longitudinal percentage changes in axial diffusivity (AD) and radial diffusivity (RD) from before RT. Results: Among 22 fiber bundles, AD or RD changes in 12 bundles were affected significantly by doses (P<.05), as the effect was progressive over time. In 9 elongated tracts, decreased AD or RD was significantly related to maximum doses received, consistent with a serial structure. In individual bundles, AD changes were up to 11.5% at the maximum dose locations 18 months after RT. The dose effect on WM was greater in older female patients than younger male patients. Conclusions: Our study demonstrates for the first time that the maximum dose to the elongated WM bundles causes post-RT damage in WM. Validation and correlative studies are necessary to determine the ability and impact of sparing these bundles on preserving neurocognitive function after RT.

  1. TOPEX/El Nino Watch - El Nino Warm Water Pool Decreasing, Jan, 08, 1998

    Science.gov (United States)

    1998-01-01

    This image of the Pacific Ocean was produced using sea surface height measurements taken by the U.S.-French TOPEX/Poseidon satellite. The image shows sea surface height relative to normal ocean conditions on Jan. 8, 1998, and sea surface height is an indicator of the heat content of the ocean. The volume of the warm water pool related to the El Nino has decreased by about 40 percent since its maximum in early November, but the area of the warm water pool is still about one and a half times the size of the continental United States. The volume measurements are computed as the sum of all the sea surface height changes as compared to normal ocean conditions. In addition, the maximum water temperature in the eastern tropical Pacific, as measured by the National Oceanic and Atmospheric Administration (NOAA), is still higher than normal. Until these high temperatures diminish, the El Nino warm water pool still has great potential to disrupt global weather because the high water temperatures directly influence the atmosphere. Oceanographers believe the recent decrease in the size of the warm water pool is a normal part of El Nino's natural rhythm. TOPEX/Poseidon has been tracking these fluctuations of the El Nino warm pool since it began in early 1997. These sea surface height measurements have provided scientists with their first detailed view of how El Nino's warm pool behaves because the TOPEX/Poseidon satellite measures the changing sea surface height with unprecedented precision. In this image, the white and red areas indicate unusual patterns of heat storage; in the white areas, the sea surface is between 14 and 32 centimeters (6 to 13 inches) above normal; in the red areas, it's about 10 centimeters (4 inches) above normal. 
The green areas indicate normal conditions, while purple (the western Pacific) means at least 18 centimeters (7 inches) below normal sea level. The El Nino phenomenon is thought to be triggered when the steady westward blowing trade winds weaken

  2. Wetland methane emissions during the Last Glacial Maximum estimated from PMIP2 simulations: climate, vegetation and geographic controls

    NARCIS (Netherlands)

    Weber, S.L.; Drury, A.J.; Toonen, W.H.J.; Weele, M. van

    2010-01-01

    It is an open question to what extent wetlands contributed to the interglacial‐glacial decrease in atmospheric methane concentration. Here we estimate methane emissions from glacial wetlands, using newly available PMIP2 simulations of the Last Glacial Maximum (LGM) climate from coupled

  3. MAXIMUM RUNOFF OF THE FLOOD ON WADIS OF NORTHERN ...

    African Journals Online (AJOL)

    lanez

    The technique for computing the maximum flood runoff for the rivers of the northern part of Algeria is based on the theory of ... north to south: 1) coastal Tel – a fertile, highly cultivated and sown zone; 2) territory of Atlas Mountains ... In the first case the empirical dependence between the maximum intensity of precipitation for some calculation ...

  4. Scientific substantiation of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.M.

    2014-03-01

    Full Text Available In order to substantiate fluopicolide maximum allowable concentration in the water of water reservoirs the research was carried out. Methods of study: laboratory hygienic experiment using organoleptic and sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The results of fluopicolide influence on organoleptic properties of water, sanitary regimen of reservoirs for household purposes were given and its subthreshold concentration in water by sanitary and toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: fluopicolide threshold concentration in water by organoleptic hazard index (limiting criterion – the smell) – 0.15 mg/dm3, by general sanitary hazard index (limiting criteria – impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) – 0.015 mg/dm3, the maximum noneffective concentration – 0.14 mg/dm3, the maximum allowable concentration – 0.015 mg/dm3.

  5. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities when used as a mask decode the facial expression correctly, providing an effective platform for future emotion categorization ...

  6. Computing the maximum volume inscribed ellipsoid of a polytopic projection

    NARCIS (Netherlands)

    Zhen, Jianzhe; den Hertog, Dick

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  7. Computing the Maximum Volume Inscribed Ellipsoid of a Polytopic Projection

    NARCIS (Netherlands)

    Zhen, J.; den Hertog, D.

    2015-01-01

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  8. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  9. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  10. Molecular markers linked to apomixis in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    Panicum maximum Jacq. is an important forage grass of African origin largely used in the tropics. The genetic breeding of this species is based on the hybridization of sexual and apomictic genotypes and selection of apomictic F1 hybrids. The objective of this work was to identify molecular markers linked to apomixis in P.

  11. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  12. On a Weak Discrete Maximum Principle for hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Vejchodský, Tomáš

    -, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007

  13. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  14. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  15. Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks

    Science.gov (United States)

    2016-08-29

    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS. Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks. Thomas... I. INTRODUCTION. Tactical military networks both on land and at sea often have restricted transmission... a standard definition in graph theoretic and networking literature that is related to, but different from, the metric we consider. August 29, 2016

  16. Maximum of difference assessment of typical semitrailers: a global study

    CSIR Research Space (South Africa)

    Kienhofer, F

    2016-11-01

    Full Text Available the maximum allowable width and frontal overhang as stipulated by legislation from Australia, the European Union, Canada, the United States and South Africa. The majority of the Australian, EU and Canadian semitrailer combinations and all of the South African...

  17. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability

  18. 24 CFR 232.565 - Maximum loan amount.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES MORTGAGE INSURANCE FOR NURSING HOMES, INTERMEDIATE CARE FACILITIES, BOARD AND CARE HOMES, AND ASSISTED... Fire Safety Equipment Eligible Security Instruments § 232.565 Maximum loan amount. The principal amount...

  19. 5 CFR 531.221 - Maximum payable rate rule.

    Science.gov (United States)

    2010-01-01

    ... before the reassignment. (ii) If the rate resulting from the geographic conversion under paragraph (c)(2... previous rate (i.e., the former special rate after the geographic conversion) with the rates on the current... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum payable rate rule. 531.221...

  20. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.

  1. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  2. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  3. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  4. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Amidei, D.; Burkett, K.; Gerdes, D.; Miao, C.; Wolinski, D.

    1994-01-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube spikes

  5. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Gerdes, D.

    1994-08-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube discharge

  6. Maximum drawdown and the allocation to real estate

    NARCIS (Netherlands)

    Hamelink, F.; Hoesli, M.

    2004-01-01

    The role of real estate in a mixed-asset portfolio is investigated when the maximum drawdown (hereafter MaxDD), rather than the standard deviation, is used as the measure of risk. In particular, it is analysed whether the discrepancy between the optimal allocation to real estate and the actual
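    Maximum drawdown itself is straightforward to compute from a series of portfolio values. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def max_drawdown(values):
    """Largest peak-to-trough decline of a cumulative value series,
    expressed as a fraction of the running peak."""
    v = np.asarray(values, dtype=float)
    running_peak = np.maximum.accumulate(v)   # highest value seen so far
    return ((running_peak - v) / running_peak).max()
```

For the series 100, 120, 90, 110, 80 the running peak is 120 from the second observation on, so the maximum drawdown is (120 - 80)/120 = 1/3.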

  7. A Family of Maximum SNR Filters for Noise Reduction

    DEFF Research Database (Denmark)

    Huang, Gongping; Benesty, Jacob; Long, Tao

    2014-01-01

    significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR...

  8. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    ... PROCESSING GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Consumer Credit Protection Act Restrictions..., pursuant to section 1673(b)(2) (A) and (B) of title 15 of the United States Code (the Consumer Credit... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any...

  9. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel; Cabello, Ana Marí a; Moran, Xose Anxelu G.; Massana, Ramon; Scharek, Renate

    2016-01-01

    and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer

  10. 44 CFR 208.12 - Maximum Pay Rate Table.

    Science.gov (United States)

    2010-10-01

    ...) Physicians. DHS uses the latest Special Salary Rate Table Number 0290 for Medical Officers (Clinical... Personnel, in which case the Maximum Pay Rate Table would not apply. (3) Compensation for Sponsoring Agency... organizations, e.g., HMOs or medical or engineering professional associations, under the revised definition of...

  11. Anti-nutrient components of guinea grass ( Panicum maximum ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    Jan 31, 2012 ... A true measure of forage quality is animal ... The anti-nutritional contents of a pasture could be ... nutrient factors in P. maximum; (2) assess the effect of nitrogen ... 3. http://www.clemson.edu/Fairfield/local/news/quality.

  12. SIMULATION OF NEW SIMPLE FUZZY LOGIC MAXIMUM POWER ...

    African Journals Online (AJOL)

    2010-06-30

    Jun 30, 2010 ... Basic structure of the photovoltaic system and solar array mathematical model ... The equivalent circuit model of a solar cell consists of a current generator and a diode ... control of the boost converter (tracker) such that maximum power is achieved at the output of the solar panel. Fig. 11: the membership function of the input; Fig. 12: ...
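    For context, the tracking goal can be illustrated with a plain perturb-and-observe hill climber on a toy P-V curve. This is not the paper's fuzzy-logic controller; the curve shape, step size, and names are all assumptions of this sketch:

```python
def panel_power(v):
    """Toy P-V curve with a single maximum power point near v = 17 V
    (an assumed shape, not real panel data)."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v=10.0, step=0.1, iters=200):
    """Hill-climb toward the maximum power point: keep perturbing the
    operating voltage in the same direction while power rises, and
    reverse direction when it falls."""
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

Starting from 10 V the tracker settles into a small oscillation around the 17 V maximum; a fuzzy controller like the one in this record typically replaces the fixed step with an adaptive one to shrink that oscillation.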

  13. Sur les estimateurs du maximum de vraisemblance dans les modèles ...

    African Journals Online (AJOL)

    Abstract. We are interested in the existence and uniqueness of maximum likelihood estimators of parameters in the two multiplicative regression models, with Poisson or negative binomial probability distributions. Following his work on the multiplicative Poisson model with two factors without repeated measures, Haberman ...

  14. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  15. Applications of the Maximum Entropy Method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš

    2004-01-01

    Roč. 305, - (2004), s. 57-62 ISSN 0015-0193 Grant - others:DFG and FCI(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : Maximum Entropy Method * modulated structures * charge density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.517, year: 2004

  16. Phytophthora stricta isolated from Rhododendron maximum in Pennsylvania

    Science.gov (United States)

    During a survey in October 2013, in the Michaux State Forest in Pennsylvania, necrotic Rhododendron maximum leaves were noticed on mature plants alongside a stream. Symptoms were nondescript necrotic lesions at the tips of mature leaves. Colonies resembling a Phytophthora sp. were observed from c...

  17. Transversals and independence in linear hypergraphs with maximum degree two

    DEFF Research Database (Denmark)

    Henning, Michael A.; Yeo, Anders

    2017-01-01

    , k-uniform hypergraphs with maximum degree 2. It is known [European J. Combin. 36 (2014), 231–236] that if H ∈ Hk, then (k + 1)τ(H) ≤ n + m, and there are only two hypergraphs that achieve equality in the bound. In this paper, we prove a much more powerful result, and establish tight upper bounds...

  18. A comparison of optimum and maximum reproduction using the rat ...

    African Journals Online (AJOL)

    of pigs to increase reproduction rate of sows (te Brake, 1978; Walker et al., 1979; Kemm et al., 1980). However, no experimental evidence exists that this strategy would in fact improve biological efficiency. In this pilot experiment, an attempt was made to compare systems of optimum or maximum reproduction using the rat.

  19. Revision of regional maximum flood (RMF) estimation in Namibia ...

    African Journals Online (AJOL)

    Extreme flood hydrology in Namibia for the past 30 years has largely been based on the South African Department of Water Affairs Technical Report 137 (TR 137) of 1988. This report proposes an empirically established upper limit of flood peaks for regions called the regional maximum flood (RMF), which could be ...

  20. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a