WorldWideScience

Sample records for part establishes average

  1. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)

    cce

    body measurement for height and backneck to waist for ages 2,3,4 and 5 years. The ... average measurements of the different parts of the body must be established. ..... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.

  2. 22 CFR 506.3 - Establishing and converting part-time positions.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Establishing and converting part-time positions. 506.3 Section 506.3 Foreign Relations BROADCASTING BOARD OF GOVERNORS PART-TIME CAREER EMPLOYMENT PROGRAM § 506.3 Establishing and converting part-time positions. Position management and other internal...

  3. 45 CFR 1176.4 - Establishing and converting part-time positions.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Establishing and converting part-time positions... FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES PART-TIME CAREER EMPLOYMENT § 1176.4 Establishing and converting part-time positions. Position management and other internal reviews...

  4. 38 CFR 1.893 - Establishing and converting part-time positions.

    Science.gov (United States)

    2010-07-01

    ... converting part-time positions. 1.893 Section 1.893 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS GENERAL PROVISIONS Part-Time Career Employment Program § 1.893 Establishing and converting part-time positions. Position management and other internal reviews may indicate that positions may be...

  5. Averaged 30 year climate change projections mask opportunities for species establishment

    Science.gov (United States)

    Serra-Diaz, Josep M.; Franklin, Janet; Sweet, Lynn C.; McCullough, Ian M.; Syphard, Alexandra D.; Regan, Helen M.; Flint, Lorraine E.; Flint, Alan L.; Dingman, John; Moritz, Max A.; Redmond, Kelly T.; Hannah, Lee; Davis, Frank W.

    2016-01-01

    Survival of early life stages is key for population expansion into new locations and for persistence of current populations (Grubb 1977, Harper 1977). Relative to adults, these early life stages are very sensitive to climate fl uctuations (Ropert-Coudert et al. 2015), which often drive episodic or ‘event-limited’ regeneration (e.g. pulses) in long-lived plant species (Jackson et al. 2009). Th us, it is diffi cult to mechanistically associate 30-yr climate norms to dynamic processes involved in species range shifts (e.g. seedling survival). What are the consequences of temporal aggregation for estimating areas of potential establishment? We modeled seedling survival for three widespread tree species in California, USA ( Quercus douglasii, Q. kelloggii , Pinus sabiniana ) by coupling a large-scale, multi-year common garden experiment to high-resolution downscaled grids of climatic water defi cit and air temperature (Flint and Flint 2012, Supplementary material Appendix 1). We projected seedling survival for nine climate change projections in two mountain landscapes spanning wide elevation and moisture gradients. We compared areas with windows of opportunity for seedling survival – defi ned as three consecutive years of seedling survival in our species, a period selected based on studies of tree niche ontogeny (Supplementary material Appendix 1) – to areas of 30-yr averaged estimates of seedling survival. We found that temporal aggregation greatly underestimated the potential for species establishment (e.g. seedling survival) under climate change scenarios.

  6. Declining average daily census. Part 1: Implications and options.

    Science.gov (United States)

    Weil, T P

    1985-12-01

    A national trend toward declining average daily (inpatient) census (ADC) started in late 1982 even before the Medicare prospective payment system began. The decrease in total days will continue despite an increasing number of aged persons in the U.S. population. This decline could have been predicted from trends during 1978 to 1983, such as increasing available beds but decreasing occupancy, 100 percent increases in hospital expenses, and declining lengths of stay. Assuming that health care costs will remain as a relatively fixed part of the gross national product and no major medical advances will occur in the next five years, certain implications and options exist for facilities experiencing a declining ADC. This article discusses several considerations: Attempts to improve market share; Reduction of full-time equivalent employees; Impact of greater acuity of illness among remaining inpatients; Implications of increasing the number of physicians on medical staffs; Option of a closed medical staff by clinical specialty; Unbundling with not-for-profit and profit-making corporations; Review of mergers, consolidations, and multihospital systems to decide when this option is most appropriate; Sale of a not-for-profit hospital to an investor-owned chain, with implications facing Catholic hospitals choosing this option; Impact and difficulty of developing meaningful alternative health care systems with the hospital's medical staff; Special problems of teaching hospitals; The social issue of the hospital shifting from the community's health center to a cost center; Increased turnover of hospital CEOs; With these in mind, institutions can then focus on solutions that can sometimes be used in tandem to resolve this problem's impact. The second part of this article will discuss some of them.

  7. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP inter coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  8. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  9. 77 FR 43232 - National School Lunch, Special Milk, and School Breakfast Programs, National Average Payments...

    Science.gov (United States)

    2012-07-24

    ...) establishes National Average Payments for free, reduced price and paid afterschool snacks as part of the...--free lunch-- 303 cents, reduced price lunch--263 cents. Afterschool Snacks in Afterschool Care Programs--The payments are: Contiguous States--free snack--78 cents, reduced price snack--39 cents, paid snack...

  10. A Survey of Optometry Graduates to Determine Practice Patterns: Part II: Licensure and Practice Establishment Experiences.

    Science.gov (United States)

    Bleimann, Robert L.; Smith, Lee W.

    1985-01-01

    A summary of Part II of a two-volume study of optometry graduates conducted by the Association of Schools and Colleges of Optometry is presented. Part II includes the analysis of the graduates' licensure and practice establishment experiences. (MLW)

  11. The Road to a Court of Appeal—Part II: Distinguishing Features and Establishment

    DEFF Research Database (Denmark)

    Butler, Graham

    2015-01-01

    of the road taken. By mapping the sequence of events that lead to the creation of the new court, the complexity that goes into large-scale judicial restructuring can begin to be fully appreciated. This is the second and concluding part of the article, covering the distinguishing features and establishment......-lasting effects on the judicial system of the state. The creation of a new court takes a considerable effort from a number of branches of the State, in formulating the correct path for its establishment to proceed. In this article, the history of a Court of Appeal is set out, before discussing the referendum...... to amend the Constitution to allow for it. This is followed by looking at some of the provisions of the Amendment Bill that was put before both the Oireachtas and the people, before looking at three distinguishing features of the Bill, and finally discussing its establishment in 2014, along with analysis...

  12. Unlearning Established Organizational Routines--Part II

    Science.gov (United States)

    Fiol, C. Marlena; O'Connor, Edward J.

    2017-01-01

    Purpose: The purpose of Part II of this two-part paper is to uncover important differences in the nature of the three unlearning subprocesses, which call for different leadership interventions to motivate people to move through them. Design/methodology/approach: The paper draws on research in behavioral medicine and psychology to demonstrate that…

  13. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    Science.gov (United States)

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods which quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of our interest, subjects typically begin follow-up untreated; time-until-treatment, and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, the average effect of treatment among the treated, under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pre-treatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.

  14. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    The description is given of a digital radiation monitoring system making use of current digital circuit and microprocessor for rapidly processing the pulse data coming from remote radiation controllers. This system analyses the pulse rates in order to determine if a new datum is statistically the same as that previously received. Hence it determines the best possible average time for itself. So long as the true average pulse rate stays constant, the time required to establish an average can increase until the statistical error is under the desired level, i.e. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the time required to establish an average can be reduced so as to improve the response time of the system at the statistical error. This concept includes a fixed compromise between the statistical error and the response time [fr

  15. Sterilization of health care products - Radiation. Part 2: Establishing the sterilization dose

    International Nuclear Information System (INIS)

    2006-01-01

    This part of ISO 11137 describes methods that may be used to establish the sterilization dose in accordance with one of the two approaches specified in 8.2 of ISO 11137-1:2006. The methods used in these approaches are: a) dose setting to obtain a product-specific dose; b) dose substantiation to verify a preselected dose of 25 kGy or 15 kGy. The basis of the dose setting methods described in this part of ISO 11137 (Methods 1 and 2) owe much to the ideas first propounded by Tallentire (Tallentire, 1973 [17]; Tallentire, Dwyer and Ley, 1971 [18]; Tallentire and Khan, 1978 [19]). Subsequently, standardized protocols were developed (Davis et al., 1981 [8]; Davis, Strawderman and Whitby, 1984 [9]) which formed the basis of the dose setting methods detailed in the AAMI Recommended Practice for Sterilization by Gamma Radiation (AAMI 1984, 1991 [4], [6]). Methods 1 and 2 and the associated sterilization dose audit procedures use data derived from the inactivation of the microbial population in its natural state on product. The methods are based on a probability model for the inactivation of microbial populations. The probability model, as applied to bioburden made up of a mixture of various microbial species, assumes that each such species has its own unique D 10 value. In the model, the probability that an item will possess a surviving microorganism after exposure to a given dose of radiation is defined in terms of the initial number of microorganisms on the item prior to irradiation and the D 10 values of the microorganisms. The methods involve performance of tests of sterility on product items that have received doses of radiation lower than the sterilization dose. The outcome of these tests is used to predict the dose needed to achieve a predetermined sterility assurance level, SAL. Methods 1 and 2 may also be used to substantiate 25 kGy if, on performing a dose setting exercise, the derived sterilization dose for an SAL of 10 -6 is u ≤25 kGy. The basis of the method

  16. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...

  17. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  18. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance rather than retrievals of the abundance itself are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a priori retrievals, for various signal to noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal to noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of the one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because of the fact that in logarithmic retrievals the weight of the prior information depends on abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than to create awareness of the traps related with averaging of mixing ratios obtained from logarithmic retrievals.

  19. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 2)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called “Simpson's paradox” is a special case of manifestation of the properties of weighted average. In this case always comes to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type: k S = Σ x iy i i=1 who gives answer to question: what is the reason, the weighted average of few variables with higher values, to ...

  20. “Simpson’s paradox” as a manifestation of the properties of weighted average (part 1)

    OpenAIRE

    Zhekov, Encho

    2012-01-01

    The article proves that the so-called “Simpson's paradox” is a special case of manifestation of the properties of weighted average. In this case always comes to comparing two weighted averages, where the average of the larger variables is less than that of the smaller. The article demonstrates one method for analyzing the relative change of magnitudes of the type: S = Σ ki=1x iy i who gives answer to question: what is the reason, the weighted average of few variables with higher values, to be...

  1. Protocol for the estimation of average indoor radon-daughter concentrations: Second edition

    International Nuclear Information System (INIS)

    Langner, G.H. Jr.; Pacer, J.C.

    1988-05-01

    The Technical Measurements Center has developed a protocol which specifies the procedures to be used for determining indoor radon-daughter concentrations in support of Department of Energy remedial action programs. This document is the central part of the protocol and is to be used in conjunction with the individual procedure manuals. The manuals contain the information and procedures required to implement the proven methods for estimating average indoor radon-daughter concentration. Proven in this case means that these methods have been determined to provide reasonable assurance that the average radon-daughter concentration within a structure is either above, at, or below the standards established for remedial action programs. This document contains descriptions of the generic aspects of methods used for estimating radon-daughter concentration and provides guidance with respect to method selection for a given situation. It is expected that the latter section of this document will be revised whenever another estimation method is proven to be capable of satisfying the criteria of reasonable assurance and cost minimization. 22 refs., 6 figs., 3 tabs

  2. Anomalous behavior of q-averages in nonextensive statistical mechanics

    International Nuclear Information System (INIS)

    Abe, Sumiyoshi

    2009-01-01

    A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L 1 -norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases

  3. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations

  4. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.

  5. Density characteristics in the upper part of the platform of the Pripyatskiy Basin

    Energy Technology Data Exchange (ETDEWEB)

    Bulyga, V.K.; Anpilogov, A.P.; Ksenofontov, V.A.; Ur' yev, I.I.

    1981-01-01

    Density characteristics are examined for the Devonian (upper saline and suprasaline), Carboniferous, Permian, Mesozoic and Cenozoic deposits of the Pripyatskiy Basin. Maps are compiled for isodensities, variability is established in the average values of density both in a regional sense and on local elevations which are characterized for the most part by density minimums.

  6. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.

  7. Stochastic Averaging Principle for Spatial Birth-and-Death Evolutions in the Continuum

    Science.gov (United States)

    Friesen, Martin; Kondratiev, Yuri

    2018-06-01

    We study a spatial birth-and-death process on the phase space of locally finite configurations Γ^+ × Γ^- over R}^d. Dynamics is described by an non-equilibrium evolution of states obtained from the Fokker-Planck equation and associated with the Markov operator L^+(γ ^-) + 1/ɛ L^-, ɛ > 0. Here L^- describes the environment process on Γ^- and L^+(γ ^-) describes the system process on Γ^+, where γ ^- indicates that the corresponding birth-and-death rates depend on another locally finite configuration γ ^- \\in Γ^-. We prove that, for a certain class of birth-and-death rates, the corresponding Fokker-Planck equation is well-posed, i.e. there exists a unique evolution of states μ _t^{ɛ } on Γ^+ × Γ^-. Moreover, we give a sufficient condition such that the environment is ergodic with exponential rate. Let μ _{inv} be the invariant measure for the environment process on Γ^-. In the main part of this work we establish the stochastic averaging principle, i.e. we prove that the marginal of μ _t^{ɛ } onto Γ^+ converges weakly to an evolution of states on {Γ}^+ associated with the averaged Markov birth-and-death operator {\\overline{L}} = \\int _{Γ}^- L^+(γ ^-)d μ _{inv}(γ ^-).

  8. Stochastic Averaging Principle for Spatial Birth-and-Death Evolutions in the Continuum

    Science.gov (United States)

    Friesen, Martin; Kondratiev, Yuri

    2018-04-01

    We study a spatial birth-and-death process on the phase space of locally finite configurations Γ^+ × Γ^- over R^d . Dynamics is described by an non-equilibrium evolution of states obtained from the Fokker-Planck equation and associated with the Markov operator L^+(γ ^-) + 1/ɛ L^- , ɛ > 0 . Here L^- describes the environment process on Γ^- and L^+(γ ^-) describes the system process on Γ^+ , where γ ^- indicates that the corresponding birth-and-death rates depend on another locally finite configuration γ ^- \\in Γ^- . We prove that, for a certain class of birth-and-death rates, the corresponding Fokker-Planck equation is well-posed, i.e. there exists a unique evolution of states μ _t^{ɛ } on Γ^+ × Γ^- . Moreover, we give a sufficient condition such that the environment is ergodic with exponential rate. Let μ _{inv} be the invariant measure for the environment process on Γ^- . In the main part of this work we establish the stochastic averaging principle, i.e. we prove that the marginal of μ _t^{ɛ } onto Γ^+ converges weakly to an evolution of states on Γ^+ associated with the averaged Markov birth-and-death operator \\overline{L} = \\int _{Γ}^-}L^+(γ ^-)d μ _{inv}(γ ^-).

  9. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  10. Antibiotic-impregnated calcium phosphate cement as part of a comprehensive treatment for patients with established orthopaedic infection.

    Science.gov (United States)

    Niikura, Takahiro; Lee, Sang Yang; Iwakura, Takashi; Sakai, Yoshitada; Kuroda, Ryosuke; Kurosaka, Masahiro

    2016-07-01

    The treatment of established orthopaedic infection is challenging. While the main focus of treatment is wide surgical debridement, systemic and local antibiotic administration are important adjuvant therapies. Several reports have described the clinical use of antibiotic-impregnated calcium phosphate cement (CPC) to provide local antibiotic therapy for bone infections. However, these were all individual case reports, and no case series have been reported. We report a case series treated by a single surgeon using antibiotic-impregnated CPC as part of a comprehensive treatment plan in patients with established orthopaedic infection. We enrolled 13 consecutive patients with osteomyelitis (n = 6) or infected non-union (n = 7). Implantation of antibiotic-impregnated CPC was performed to provide local antibiotic therapy as part of a comprehensive treatment plan that also included wide surgical debridement, systemic antibiotic therapy, and subsequent second-stage reconstruction surgery. We investigated the rate of successful infection eradication and systemic/local complications. The concentration of antibiotics in the surgical drainage fluids, blood, and recovered CPC (via elution into a phosphate-buffered saline bath) were measured. The mean follow-up period after surgery was 50.4 (range, 27-73) months. There were no cases of infection recurrence during follow-up. No systemic toxicity or local complications from the implantation of antibiotic-impregnated CPC were observed. The vancomycin concentration in the fluid from surgical drainage (n = 6) was 527.1 ± 363.9 μg/mL on postoperative day 1 and 224.5 ± 198.4 μg/mL on postoperative day 2. In patients who did not receive systemic vancomycin therapy (n = 3), the maximum serum vancomycin level was antibiotic-impregnated CPC is an option to provide local antibiotic therapy as part of a comprehensive treatment plan. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights

  11. The Value and Feasibility of Farming Differently Than the Local Average

    OpenAIRE

    Morris, Cooper; Dhuyvetter, Kevin; Yeager, Elizabeth A; Regier, Greg

    2018-01-01

    The purpose of this research is to quantify the value of being different than the local average and feasibility of distinguishing particular parts of an operation from the local average. Kansas crop farms are broken down by their farm characteristics, production practices, and management performances. An ordinary least squares regression model is used to quantify the value of having different than average characteristics, practices, and management performances. The degree farms have distingui...

  12. Coherence effects and average multiplicity in deep inelastic scattering at small χ

    International Nuclear Information System (INIS)

    Kisselev, A.V.; Petrov, V.A.

    1988-01-01

    The average hadron multiplicity in deep inelastic scattering at small χ is calculated in this paper. Its relationship with the average multiplicity in e + e - annihilation is established. As shown the results do not depend on a choice of the gauge vector. The important role of coherence effects in both space-like and time-like jet evolution is clarified. (orig.)

  13. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by 〈δ2¯〉 ˜2 DνtβΔν -β , where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements 〈x2〉 ˜tν , while β ≥-1 marks the growth or decline of the kinetic energy 〈v2〉 ˜tβ . Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function and similarly for the transport constant Dν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β =0 , the time scaling of 〈δ2¯〉 and 〈x2〉 are identical; however, the time-averaged transport coefficient Dν is not identical to the corresponding ensemble-averaged diffusion constant.

  14. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

    Two distinct problems in the development and application of averaging techniques to photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of the inclusion of the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis techniques are developed for calculating averaged cross sections directly without first calculating a detailed cross section. Techniques are developed whereby the detailed cross section never has to be calculated as an intermediate step, but rather, the averaged cross section is calculated directly. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium

  15. A spare-parts inventory program for TRIGA reactors

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, T V; Ringle, J C; Johnson, A G [Oregon State University (United States)

    1974-07-01

    As is fairly common with new reactor facilities, we had a few spare parts on hand as part of our original purchase when the OSU TRIGA first went critical in March of 1967. Within a year or so, however, it became apparent that we should critically examine our spare parts inventory in order to avoid unnecessary or prolonged outages due to lack of a crucial piece of equipment. Many critical components (those which must be present and operable according to our license or technical specifications) were considered, and a priority list of acquiring these was established. This first list was drawn up in March, 1969, two years after initial criticality, and some key components were ordered. The availability of funds was the overriding restriction then and now. This spare-parts list is reviewed and new components purchased annually; the average amount spent has been about $2,000 per year. This inventory has proved invaluable more than once; without it, we would have had lengthy shutdowns awaiting the arrival of the needed component. The sobering thought, however, is that our spare-parts inventory is still not complete-far from it, in fact, because this would be prohibitively expensive. It is very difficult to guess with 100% accuracy just which component might need replacing, and your $10,000 inventory of spare parts is useless in that instance if it doesn't include the needed part. An idea worth considering is to either (a) encourage General Atomic, through the collective voice of all TRIGA owners, to maintain a rather complete inventory of replacement parts, or (b) maintain an owner's spare-parts pool, financed by contributions from all the facilities. If either of these pools was established, the needed part could reach any facility within the U.S. within a few days, minimizing reactor outage time. (author)

  16. A spare-parts inventory program for TRIGA reactors

    International Nuclear Information System (INIS)

    Anderson, T.V.; Ringle, J.C.; Johnson, A.G.

    1974-01-01

    As is fairly common with new reactor facilities, we had a few spare parts on hand as part of our original purchase when the OSU TRIGA first went critical in March of 1967. Within a year or so, however, it became apparent that we should critically examine our spare parts inventory in order to avoid unnecessary or prolonged outages due to lack of a crucial piece of equipment. Many critical components (those which must be present and operable according to our license or technical specifications) were considered, and a priority list of acquiring these was established. This first list was drawn up in March, 1969, two years after initial criticality, and some key components were ordered. The availability of funds was the overriding restriction then and now. This spare-parts list is reviewed and new components purchased annually; the average amount spent has been about $2,000 per year. This inventory has proved invaluable more than once; without it, we would have had lengthy shutdowns awaiting the arrival of the needed component. The sobering thought, however, is that our spare-parts inventory is still not complete-far from it, in fact, because this would be prohibitively expensive. It is very difficult to guess with 100% accuracy just which component might need replacing, and your $10,000 inventory of spare parts is useless in that instance if it doesn't include the needed part. An idea worth considering is to either (a) encourage General Atomic, through the collective voice of all TRIGA owners, to maintain a rather complete inventory of replacement parts, or (b) maintain an owner's spare-parts pool, financed by contributions from all the facilities. If either of these pools was established, the needed part could reach any facility within the U.S. within a few days, minimizing reactor outage time. (author)

  17. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant re- presentations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  18. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Science.gov (United States)

    2010-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  19. Dental establishment business activity in California counties at the start of the millennium.

    Science.gov (United States)

    Waldman, H Barry

    2006-05-01

    The Bureau of the Census reports for 2002 were used to develop business data for "average" dental establishments in each of the counties in California. On average, between 1997 and 2002, when compared to national information, the number of California statewide dental establishments increased at a greater rate, had a smaller resident population per establishment, reported lower gross receipts, had fewer employees, and paid lower salaries to employees.

  20. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [l]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FEL's with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This properly gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department

  1. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L.; Mancinelli, B. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Kelly, H. [Grupo de Descargas Eléctricas, Departamento Ing. Electromecánica, Facultad Regional Venado Tuerto (UTN), Laprida 651, Venado Tuerto (2600) Santa Fe (Argentina); Instituto de Física del Plasma (CONICET), Departamento de Física, Facultad de Ciencias Exactas y Naturales (UBA) Ciudad Universitaria Pab. I, 1428 Buenos Aires (Argentina)

    2013-12-15

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core.

  2. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: Analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Kelly, H.

    2013-01-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core

  3. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    Science.gov (United States)

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

    This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating-highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data due to small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well defined groups allowing defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the time fluctuations but also a spatial-average along the probe collecting length. The fitting of the high-current region of the characteristic using such electron temperature value together with the corrections given by the fluctuation analysis showed a relevant departure of local thermal equilibrium in the arc core.

  4. Attractiveness of the female body: Preference for the average or the supernormal?

    Directory of Open Access Journals (Sweden)

    Marković Slobodan

    2017-01-01

    Full Text Available The main purpose of the present study was to contrast the two hypotheses of female body attractiveness. The first is the “preference-for-the average” hypothesis: the most attractive female body is the one that represents the average body proportions for a given population. The second is the “preference-for-the supernormal” hypothesis: according to the so-called “peak shift effect”, the most attractive female body is more feminine than the average. We investigated the preference for three female body characteristics: waist to hip ratio (WHR, buttocks and breasts. There were 456 participants of both genders. Using a program for computer animation (DAZ 3D three sets of stimuli were generated (WHR, buttocks and breasts. Each set included six stimuli ranked from the lowest to the highest femininity level. Participants were asked to choose the stimulus within each set which they found most attractive (task 1 and average (task 2. One group of participants judged the body parts that were presented in the global context (whole body, while the other group judged the stimuli in the local context (isolated body parts only. Analyses have shown that the most attractive WHR, buttocks and breasts are more feminine (meaning smaller for WHR and larger for breasts and buttocks than average ones, for both genders and in both presentation contexts. The effect of gender was obtained only for the most attractive breasts: males prefer larger breasts than females. Finally, most attractive and average WHR and breasts were less feminine in the local than in the global context. These results support the preference-for the supernormal hypothesis: all analyses have shown that both male and female participants preferred female body parts which are more feminine than those judged average. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. 179033

  5. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  6. Dental establishment business activity in New York State counties at start of the millennium.

    Science.gov (United States)

    Waldman, H Barry

    2006-01-01

    Bureau of the Census reports for 2002 were used to develop business data for "average" dental establishments in each of the counties in New York State. On average, between 1997 and 2002, when compared to national information, the number of New York State dental establishments increased at a slower rate, had a smaller resident population per establishment, reported lower gross receipts, had fewer employees and paid lower salaries to employees.

  7. METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE

    Directory of Open Access Journals (Sweden)

    L. M. Aliomarov

    2015-01-01

    Full Text Available To handle the threaded holes in hard materials made of marine machinery, operating at high temperatures, heavy loads and in aggressive environments, the authors have developed the combined tool core drill -tap with a special cutting scheme, which has an asymmetric thread profile on the tap part. In order to control the average diameter of the thread of tap part of the combined tool was used the method three wires, which allows to make continuous measurement of the average diameter of the thread along the entire profile. Deviation from the average diameter from the sample is registered by inductive sensor and is recorded by the recorder. In the work are developed and presented control schemes of the average diameter of the threads with a symmetrical and asymmetrical profile. On the basis of these schemes are derived formulas for calculating the theoretical option to set the wires in the thread profile in the process of measuring the average diameter. Conducted complex research and the introduction of the combined instrument core drill-tap in the production of products of marine engineering, shipbuilding, ship repair power plants made of hard materials showed a high efficiency of the proposed technology for the processing of high-quality small-diameter threaded holes that meet modern requirements.

  8. Extracurricular Activities and Their Effect on the Student's Grade Point Average: Statistical Study

    Science.gov (United States)

    Bakoban, R. A.; Aljarallah, S. A.

    2015-01-01

    Extracurricular activities (ECA) are part of students' everyday life; they play important roles in students' lives. Few studies have addressed the question of how student engagements to ECA affect student's grade point average (GPA). This research was conducted to know whether the students' grade point average in King Abdulaziz University,…

  9. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by statistical model, in systems with the same and different number of protons and neutrons, separately, considering the Coulomb energy in the last system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from Weizsscker-Beth mass semiempiric formulae, generalized for compressible nuclei. In the study of a s surface energy coefficient, the great influence exercised by Coulomb energy and nuclear compressibility was verified. For a good adjust of beta stability lines and mass excess, the surface symmetry energy were established. (M.C.K.) [pt

  10. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Full Text Available Cycle-to-Cycle Variations (CCV in IC engines is a well-known phenomenon and the definition and quantification is well-established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines emphasizing the different contributions from large-scale energetic (coherent structures, identified by a combination of Proper Orthogonal Decomposition (POD and conditional averaging, and small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the the POD coefficients of the second and third mode. Within each subset, the velocity is averaged and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference of the subset-average and the global-average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested for two data sets obtained from scale resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles for the well-established Transparent Combustion Chamber (TCC benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.

  11. Justification of the averaging method for parabolic equations containing rapidly oscillating terms with large amplitudes

    International Nuclear Information System (INIS)

    Levenshtam, V B

    2006-01-01

    We justify the averaging method for abstract parabolic equations with stationary principal part that contain non-linearities (subordinate to the principal part) some of whose terms are rapidly oscillating in time with zero mean and are proportional to the square root of the frequency of oscillation. Our interest in the exponent 1/2 is motivated by the fact that terms proportional to lower powers of the frequency have no influence on the average. For linear equations of the same type, we justify an algorithm for the study of the stability of solutions in the case when the stationary averaged problem has eigenvalues on the imaginary axis (the critical case)

  12. MCBS Highlights: Ownership and Average Premiums for Medicare Supplementary Insurance Policies

    Science.gov (United States)

    Chulis, George S.; Eppig, Franklin J.; Poisal, John A.

    1995-01-01

    This article describes private supplementary health insurance holdings and average premiums paid by Medicare enrollees. Data were collected as part of the 1992 Medicare Current Beneficiary Survey (MCBS). Data show the number of persons with insurance and average premiums paid by type of insurance held—individually purchased policies, employer-sponsored policies, or both. Distributions are shown for a variety of demographic, socioeconomic, and health status variables. Primary findings include: Seventy-eight percent of Medicare beneficiaries have private supplementary insurance; 25 percent of those with private insurance hold more than one policy. The average premium paid for private insurance in 1992 was $914. PMID:10153473

  13. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  14. Establishment of dose reference levels for mammography in Greece

    International Nuclear Information System (INIS)

    Kalathaki, M.; Hourdakis, C.J.; Economides, S.; Tritakis, P.; Manousaridis, G.; Kalyvas, N.; Simantirakis, G.; Kipouros, P.; Kamenopoulou, V.

    2006-01-01

    Full text of publication follows: Diagnostic Reference Levels (D.R.L.) are dose levels established in medical practices for typical x-ray examinations concerning groups of standard size patients or standard phantoms and broadly defined types of equipment. When good and normal practice is performed, these levels are not expected to be exceeded. This work is an attempt to establish for the first time the D.R.L. for mammography in Greece. At present, there are 402 mammographic systems in clinical use all over the country. This study that lasted 3 years (2000-2003), includes 117 of these systems, 85% of which are installed in private and 15% in public sector countrywide. Measurements of entrance surface dose (E.S.D.) were performed as a part of the regular inspections performed by the Licensing and Inspections Department of Greek Atomic Energy Commission on the basis of the laboratories licensing procedure. Moreover, the entire performance of the mammographic units was assessed by quantitative and qualitative measurements of specific parameters. In order to establish the national D.R.L., a standard phantom was used during the quality control of the mammographic units and E.S.D. measurements were performed based on the clinical practice of each laboratory. The D.R.L. for this type of examination was established according to the 75. percentile of the E.S.D. curve and found equal to 7 mGy per single view. The comparison of this value with the one reported by the European Commission (10 mGy per view), indicates that the D.R.L. for mammography is lower in Greece. However, the primary concern of a mammographic examination is to keep breast dose as low as reasonably achievable while providing images with the maximum amount of diagnostic information. The quality of the produced images was therefore assessed for all systems examined, regardless of meeting or exceeding the quality criteria reference surface entrance dose. The results showed that the average total score of the

  15. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    Science.gov (United States)

    Lin, Lin; Ma, Shiwei; Ma, Maode

    2014-01-01

    Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood average. Each sensor node collects the offset and skew rate of the neighbors. Group averaging of offset and skew rate value are calculated instead of conventional point-to-point averaging method. The sensor node then returns compensated value back to the neighbors. The propagation delay is considered and compensated. The analytical analysis of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that the protocol allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock. PMID:25120163

  16. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    Energy Technology Data Exchange (ETDEWEB)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.

  17. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982, can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007 can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  18. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates. Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado, a humid subtropical area (the temperate southern part of Brazil, and a very wet tropical area (Amazonia. To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the time of averaging of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.

  19. The Effects of Average Revenue Regulation on Electricity Transmission Investment and Pricing

    OpenAIRE

    Isamu Matsukawa

    2005-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two- part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist fs expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occur...

  20. Establishment of gold-quartz standard GQS-1

    Science.gov (United States)

    Millard, Hugh T.; Marinenko, John; McLane, John E.

    1969-01-01

    A homogeneous gold-quartz standard, GQS-1, was prepared from a heterogeneous gold-bearing quartz by chemical treatment. The concentration of gold in GQS-1 was determined by both instrumental neutron activation analysis and radioisotope dilution analysis to be 2.61?0.10 parts per million. Analysis of 10 samples of the standard by both instrumental neutron activation analysis and radioisotope dilution analysis failed to reveal heterogeneity within the standard. The precision of the analytical methods, expressed as standard error, was approximately 0.1 part per million. The analytical data were also used to estimate the average size of gold particles. The chemical treatment apparently reduced the average diameter of the gold particles by at least an order of magnitude and increased the concentration of gold grains by a factor of at least 4,000.

  1. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lin Lin

    2014-08-01

    Full Text Available Clock synchronization is a very important issue for the applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood average. Each sensor node collects the offset and skew rate of the neighbors. Group averaging of offset and skew rate value are calculated instead of conventional point-to-point averaging method. The sensor node then returns compensated value back to the neighbors. The propagation delay is considered and compensated. The analytical analysis of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that the protocol allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock.

  2. Establishment and Development of Business Incubators

    OpenAIRE

    Teliceanu Claudiu Daniel

    2010-01-01

    Business incubators are established in order to help new businesses to consolidate and, consequently, to lead to creation of new jobs as part of a strategic framework - territorial oriented, or on a particular policy priority or a combination of these factors. The business incubator supports its customers to overcome the legislative, administrative barriers and thus to start much easier a business, by facilitating the business establishment process and their access to community support network.

  3. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  4. Resident characterization of better-than- and worse-than-average clinical teaching.

    Science.gov (United States)

    Haydar, Bishr; Charnin, Jonathan; Voepel-Lewis, Terri; Baker, Keith

    2014-01-01

    Clinical teachers and trainees share a common view of what constitutes excellent clinical teaching, but associations between these behaviors and high teaching scores have not been established. This study used residents' written feedback to their clinical teachers, to identify themes associated with above- or below-average teaching scores. All resident evaluations of their clinical supervisors in a single department were collected from January 1, 2007 until December 31, 2008. A mean teaching score assigned by each resident was calculated. Evaluations that were 20% higher or 15% lower than the resident's mean score were used. A subset of these evaluations was reviewed, generating a list of 28 themes for further study. Two researchers then, independently coded the presence or absence of these themes in each evaluation. Interrater reliability of the themes and logistic regression were used to evaluate the predictive associations of the themes with above- or below-average evaluations. Five hundred twenty-seven above-average and 285 below-average evaluations were evaluated for the presence or absence of 15 positive themes and 13 negative themes, which were divided into four categories: teaching, supervision, interpersonal, and feedback. Thirteen of 15 positive themes correlated with above-average evaluations and nine had high interrater reliability (Intraclass Correlation Coefficient >0.6). Twelve of 13 negative themes correlated with below-average evaluations, and all had high interrater reliability. On the basis of these findings, the authors developed 13 recommendations for clinical educators. The authors developed 13 recommendations for clinical teachers using the themes identified from the above- and below-average clinical teaching evaluations submitted by anesthesia residents.

  5. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging(TDA) is essentially a comb filter, it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentric. Meanwhile, TDA always suffers from period cutting error(PCE) to different extent. Several improved TDA methods have been proposed, however they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging(FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, chirp Z-transform(CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in the signal de-noising, interpolation and harmonic reconstruction, a simulated multi-components periodic signal that corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional ones. Experiments are also carried out on gearbox test rigs with chipped tooth and eccentricity gear, respectively. It is shown that the FTDA can identify the direction and severity of the eccentricity gear, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.

  6. Scale dependence of the average potential around the maximum in Φ4 theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k - 1 by averaging out the degrees of freedom with characteristic moments larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ 4 theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  7. Moisture availability limits subalpine tree establishment.

    Science.gov (United States)

    Andrus, Robert A; Harvey, Brian J; Rodman, Kyle C; Hart, Sarah J; Veblen, Thomas T

    2018-03-01

    In the absence of broad-scale disturbance, many temperate coniferous forests experience successful seedling establishment only when abundant seed production coincides with favorable climate. Identifying the frequency of past establishment events and the climate conditions favorable for seedling establishment is essential to understanding how climate warming could affect the frequency of future tree establishment events and therefore future forest composition or even persistence of a forest cover. In the southern Rocky Mountains, USA, research on the sensitivity of establishment of Engelmann spruce (Picea engelmannii) and subalpine fir (Abies lasiocarpa)-two widely distributed, co-occurring conifers in North America-to climate variability has focused on the alpine treeline ecotone, leaving uncertainty about the sensitivity of these species across much of their elevation distribution. We compared annual germination dates for >450 Engelmann spruce and >500 subalpine fir seedlings collected across a complex topographic-moisture gradient to climate variability in the Colorado Front Range. We found that Engelmann spruce and subalpine fir established episodically with strong synchrony in establishment events across the study area. Broad-scale establishment events occurred in years of high soil moisture availability, which were characterized by above-average snowpack and/or cool and wet summer climatic conditions. In the recent half of the study period (1975-2010), a decrease in the number of fir and spruce establishment events across their distribution coincided with declining snowpack and a multi-decadal trend of rising summer temperature and increasing moisture deficits. Counter to expected and observed increases in tree establishment with climate warming in maritime subalpine forests, our results show that recruitment declines will likely occur across the core of moisture-limited subalpine tree ranges as warming drives increased moisture deficits. © 2018 by the

  8. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

    Full Text Available Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face; viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  9. How well can online GPS PPP post-processing services be used to establish geodetic survey control networks?

    Science.gov (United States)

    Ebner, R.; Featherstone, W. E.

    2008-09-01

    Establishing geodetic control networks for subsequent surveys can be a costly business, even when using GPS. Multiple stations should be occupied simultaneously and post-processed with scientific software. However, the free availability of online GPS precise point positioning (PPP) post-processing services offer the opportunity to establish a whole geodetic control network with just one dual-frequency receiver and one field crew. To test this idea, we compared coordinates from a moderate-sized (~550 km by ~440 km) geodetic network of 46 points over part of south-western Western Australia, which were processed both with the Bernese v5 scientific software and with the CSRS (Canadian Spatial Reference System) PPP free online service. After rejection of five stations where the antenna type was not recognised by CSRS, the PPP solutions agreed on average with the Bernese solutions to 3.3 mm in east, 4.8 mm in north and 11.8 mm in height. The average standard deviations of the Bernese solutions were 1.0 mm in east, 1.2 mm in north and 6.2 mm in height, whereas for CSRS they were 3.9 mm in east, 1.9 mm in north and 7.8 mm in height, reflecting the inherently lower precision of PPP. However, at the 99% confidence level, only one CSRS solution was statistically different to the Bernese solution in the north component, due to a data interruption at that site. Nevertheless, PPP can still be used to establish geodetic survey control, albeit with a slightly lower quality because of the larger standard deviations. This approach may be of particular benefit in developing countries or remote regions, where geodetic infrastructure is sparse and would not normally be established without this approach.

  10. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very offten the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belo...... approximations to the Riemannian metric, and that the subsequent corrections are inherient in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion......In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very offten the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...

  11. Establishment of Filipino standard man

    International Nuclear Information System (INIS)

    Natera, E.; San Jose, V.; Napenas, D.

    1984-01-01

    The initial data gathered on measurements of total body weight and weights of specific organs from autopsy cases of normal Filipinos are reported. Comparison of the above data with the Reference Man data of ICRP which was based primarily on Caucasians suggests some differences in the average weight and height of whole body and in the weights of some organs. Hence there appears to be a need for the establishment of Filipino standard man which can be used in the estimation of internal dose commitment of the Filipinos. (author)

  12. Establishment of Filipino standard man

    Energy Technology Data Exchange (ETDEWEB)

    Natera, E.; San Jose, V.; Napenas, D.

    The initial data gathered on measurements of total body weight and weights of specific organs from autopsy cases of normal Filipinos are reported. Comparison of the above data with the Reference Man data of ICRP which was based primarily on Caucasians suggests some differences in the average weight and height of whole body and in the weights of some organs. Hence there appears to be a need for the establishment of Filipino standard man which can be used in the estimation of internal dose commitment of the Filipinos.

  13. Fourier analysis of spherically averaged momentum densities for some gaseous molecules

    International Nuclear Information System (INIS)

    Tossel, J.A.; Moore, J.H.

    1981-01-01

    The spherically averaged autocorrelation function, B(r), of the position-space wavefunction, psi(anti r), is calculated by numerical Fourier transformation from spherically averaged momentum densities, rho(p), obtained from either theoretical wavefunctions or (e,2e) electron-impact ionization experiments. Inspection of B(r) for the π molecular orbitals of C 4 H 6 established that autocorrelation function differences, ΔB(r), can be qualitatively related to bond lengths and numbers of bonding interactions. Differences between B(r) functions obtained from different approximate wavefunctions for a given orbital can be qualitatively understood in terms of wavefunction difference, Δpsi(1anti r), maps for these orbitals. Comparison of the B(r) function for the 1αsub(u) orbital of C 4 H 6 obtained from (e,2e) momentum densities with that obtained from an ab initio SCF MO wavefunction shows differences consistent with expected correlation effects. Thus, B(r) appears to be a useful quantity for relating spherically averaged momentum distributions to position-space wavefunction differences. (orig.)

  14. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    Science.gov (United States)

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample ( N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  15. 49 CFR Appendix C to Part 222 - Guide to Establishing Quiet Zones

    Science.gov (United States)

    2010-10-01

    ... authority. FRA believes that it will be very useful to include these organizations in the planning process... implementation process. This section also discusses Partial (e.g. night time only quiet zones) and Intermediate... provides four basic ways in which a quiet zone may be established. Creation of both New Quiet Zones and Pre...

  16. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  17. Applications of ordered weighted averaging (OWA operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    Full Text Available This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA to deal with stakeholders' constructive participation in water resources projects. They have different degree of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence, to different levels of stakeholders’ satisfaction. The methodology establishes a prioritization relationship upon the stakeholders, which preferences are aggregated by means of weights depending on the satisfaction of the higher priority policy maker. The methodology establishes a prioritization relationship upon the stakeholders, which preferences are aggregated by means of weights depending on the satisfaction of the higher priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors’ preference uncertainties.

  18. 76 FR 39038 - Proposed Establishment of Class E Airspace; Lebanon, PA

    Science.gov (United States)

    2011-07-05

    ...-0558; Airspace Docket No. 11-AEA-13] Proposed Establishment of Class E Airspace; Lebanon, PA AGENCY... action proposes to establish Class E Airspace at Lebanon, PA, to accommodate new Standard Instrument... amendment to Title 14, Code of Federal Regulations (14 CFR) part 71 to establish Class E airspace at Lebanon...

  19. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.

  20. [Study on standardization of cupping technique: elucidation on the establishment of the National Standard Standardized Manipulation of Acupuncture and Moxibustion, Part V, Cupping].

    Science.gov (United States)

    Gao, Shu-zhong; Liu, Bing

    2010-02-01

    From the aspects of basis, technique descriptions, core contents, problems and solutions, and standard thinking in standard setting process, this paper states experiences in the establishment of the national standard Standardized Manipulation of Acupuncture and Moxibustion, Part V, Cupping, focusing on methodologies used in cupping standard setting process, the method selection and operating instructions of cupping standardization, and the characteristics of standard TCM. In addition, this paper states the scope of application, and precautions for this cupping standardization. This paper also explaines tentative ideas on the research of standardized manipulation of acupuncture and moxibustion.

  1. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = n epsilon, the kind of solutions depends on the position of maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t ..-->.. 0) had been excited, tends asymptotically to the Friedmannian one at t ..-->.. identity and depends critically on n: at n < 0.26, the solution for the scalefactor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of finite at t ..-->.. 0 long-wave fluctuation modes leads to an averaged quasiisotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. The restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  2. Path-average rainfall estimation from optical extinction measurements using a large-aperture scintillometer

    NARCIS (Netherlands)

    Uijlenhoet, R.; Cohard, J.M.; Gosset, M.

    2011-01-01

    The potential of a near-infrared large-aperture boundary layer scintillometer as path-average rain gauge is investigated. The instrument was installed over a 2.4-km path in Benin as part of the African Monsoon Multidisciplinary Analysis (AMMA) Enhanced Observation Period during 2006 and 2007.

  3. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one......-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size......, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance...

  4. Measured emotional intelligence ability and grade point average in nursing students.

    Science.gov (United States)

    Codier, Estelle; Odell, Ellen

    2014-04-01

    For most schools of nursing, grade point average is the most important criteria for admission to nursing school and constitutes the main indicator of success throughout the nursing program. In the general research literature, the relationship between traditional measures of academic success, such as grade point average and postgraduation job performance is not well established. In both the general population and among practicing nurses, measured emotional intelligence ability correlates with both performance and other important professional indicators postgraduation. Little research exists comparing traditional measures of intelligence with measured emotional intelligence prior to graduation, and none in the student nurse population. This exploratory, descriptive, quantitative study was undertaken to explore the relationship between measured emotional intelligence ability and grade point average of first year nursing students. The study took place at a school of nursing at a university in the south central region of the United States. Participants included 72 undergraduate student nurse volunteers. Emotional intelligence was measured using the Mayer-Salovey-Caruso Emotional Intelligence Test, version 2, an instrument for quantifying emotional intelligence ability. Pre-admission grade point average was reported by the school records department. Total emotional intelligence (r=.24) scores and one subscore, experiential emotional intelligence(r=.25) correlated significantly (>.05) with grade point average. This exploratory, descriptive study provided evidence for some relationship between GPA and measured emotional intelligence ability, but also demonstrated lower than average range scores in several emotional intelligence scores. The relationship between pre-graduation measures of success and level of performance postgraduation deserves further exploration. The findings of this study suggest that research on the relationship between traditional and nontraditional

  5. Establish radiation protection programme for diagnostic radiology

    International Nuclear Information System (INIS)

    Mboya, G.

    2014-01-01

    Mammography is an effective method used for breast diagnostics and screening. The aim of this project is to review the literature on how to establish radiation protection programme for mammography in order to protect the patients, the occupationally exposed workers and the members of the public from harmful effects of ionizing radiation. It reviews some of the trends in mammography doses and dosimetric principles such as average glandular dose in the glandular tissue which is used for description of radiation risk, also the factors affecting patient doses are discussed. However, the average glandular dose should not be used directly to estimate the radiation risk from mammography. Risk is calculated under certain assumptions from determined entrance surface air kerma. Given the increase in population dose, emphasis is placed on the justification and optimization of the mammographic procedures. Protection is optimized by the radiation dose being appropriate with the purpose of the mammographic examination. The need to establish diagnostic reference levels as an optimization is also discussed. In order to obtain high quality mammograms at low dose to the breast, it is necessary to use the correct equipment and perform periodic quality control tests on mammography equipment. It is noted that in order to achieve the goal of this project, the application of radiation protection should begin at the time of requesting for mammography examination, positioning of the patient, irradiation, image processing and interpretation of mammogram. It is recommended that close cooperation between radiology technologists, radiologist, medical physicists, regulatory authority and other support workers be required and established to obtain a consistent and effective level of radiation protection in a mammography facility. (author)

  6. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  7. Position-Dependent Dynamics Explain Pore-Averaged Diffusion in Strongly Attractive Adsorptive Systems.

    Science.gov (United States)

    Krekelberg, William P; Siderius, Daniel W; Shen, Vincent K; Truskett, Thomas M; Errington, Jeffrey R

    2017-12-12

    Using molecular simulations, we investigate the relationship between the pore-averaged and position-dependent self-diffusivity of a fluid adsorbed in a strongly attractive pore as a function of loading. Previous work (Krekelberg, W. P.; Siderius, D. W.; Shen, V. K.; Truskett, T. M.; Errington, J. R. Connection between thermodynamics and dynamics of simple fluids in highly attractive pores. Langmuir 2013, 29, 14527-14535, doi: 10.1021/la4037327) established that pore-averaged self-diffusivity in the multilayer adsorption regime, where the fluid exhibits a dense film at the pore surface and a lower density interior pore region, is nearly constant as a function of loading. Here we show that this puzzling behavior can be understood in terms of how loading affects the fraction of particles that reside in the film and interior pore regions as well as their distinct dynamics. Specifically, the insensitivity of pore-averaged diffusivity to loading arises from the approximate cancellation of two factors: an increase in the fraction of particles in the higher diffusivity interior pore region with loading and a corresponding decrease in the particle diffusivity in that region. We also find that the position-dependent self-diffusivities scale with the position-dependent density. We present a model for predicting the pore-average self-diffusivity based on the position-dependent self-diffusivity, which captures the unusual characteristics of pore-averaged self-diffusivity in strongly attractive pores over several orders of magnitude.

  8. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  9. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Directory of Open Access Journals (Sweden)

    Jacinta Chan Phooi M'ng

    Full Text Available The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR, Malaysia (MYR, the Philippines (PHP, Singapore (SGD, and Thailand (THB through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA' in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  10. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    Science.gov (United States)

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  11. RSS SSM/I OCEAN PRODUCT GRIDS MONTHLY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Monthly Average from DMSP F15 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special...

  12. Impact of connected vehicle guidance information on network-wide average travel time

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2016-12-01

    Full Text Available With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility has become a research hotspot by data exchange among vehicles, infrastructure, and mobile devices. This study is focused on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed representing the connected vehicle–enabled virtual world, and connected vehicle route guidance scenario is established through the development of communication agent and intelligent transportation systems agents using connected vehicle application programming interface considering the communication properties, such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparing with non-connected vehicle guidance in response to different market penetration rate, following rate, and congestion level. The simulation results explore that average network-wide travel time in connected vehicle guidance shows a significant reduction versus that in non–connected vehicle guidance. Average network-wide travel time in connected vehicle guidance have an increase of 42.23% comparing to that in non-connected vehicle guidance, and average travel time variability (represented by the coefficient of variance increases as the travel time increases. Other vital findings include that higher penetration rate and following rate generate bigger savings of average network-wide travel time. The savings of average network-wide travel time increase from 17% to 38% according to different congestion levels, and savings of average travel time in more serious congestion have a more obvious improvement for the same penetration rate or following rate.

  13. Determination of hydrologic properties needed to calculate average linear velocity and travel time of ground water in the principal aquifer underlying the southeastern part of Salt Lake Valley, Utah

    Science.gov (United States)

    Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.

    1994-01-01

    A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conduc- tivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water- management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public- supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary con- ditions of the hydrologic system. These assump- tions were based on geologic and hydrologic infor- mation determined from previous investigations. The reliability of the estimated values for hydro- logic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness- weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic- conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to

  14. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  15. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F15 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F15 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  16. RSS SSMIS OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F16 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSMIS Ocean Product Grids 3-Day Average from DMSP F16 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  17. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids Weekly Average from DMSP F10 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  18. RSS SSM/I OCEAN PRODUCT GRIDS WEEKLY AVERAGE FROM DMSP F8 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Products Grid Weekly Average from DMSP F8 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  19. Preoptometry and optometry school grade point average and optometry admissions test scores as predictors of performance on the national board of examiners in optometry part I (basic science) examination.

    Science.gov (United States)

    Bailey, J E; Yackle, K A; Yuen, M T; Voorhees, L I

    2000-04-01

    To evaluate preoptometry and optometry school grade point averages and Optometry Admission Test (OAT) scores as predictors of performance on the National Board of Examiners in Optometry NBEO Part I (Basic Science) (NBEOPI) examination. Simple and multiple correlation coefficients were computed from data obtained from a sample of three consecutive classes of optometry students (1995-1997; n = 278) at Southern California College of Optometry. The GPA after year two of optometry school was the highest correlation (r = 0.75) among all predictor variables; the average of all scores on the OAT was the highest correlation among preoptometry predictor variables (r = 0.46). Stepwise regression analysis indicated a combination of the optometry GPA, the OAT Academic Average, and the GPA in certain optometry curricular tracks resulted in an improved correlation (multiple r = 0.81). Predicted NBEOPI scores were computed from the regression equation and then analyzed by receiver operating characteristic (roc) and statistic of agreement (kappa) methods. From this analysis, we identified the predicted score that maximized identification of true and false NBEOPI failures (71% and 10%, respectively). Cross validation of this result on a separate class of optometry students resulted in a slightly lower correlation between actual and predicted NBEOPI scores (r = 0.77) but showed the criterion-predicted score to be somewhat lax. The optometry school GPA after 2 years is a reasonably good predictor of performance on the full NBEOPI examination, but the prediction is enhanced by adding the Academic Average OAT score. However, predicting performance in certain subject areas of the NBEOPI examination, for example Psychology and Ocular/Visual Biology, was rather insubstantial. Nevertheless, predicting NBEOPI performance from the best combination of year two optometry GPAs and preoptometry variables is better than has been shown in previous studies predicting optometry GPA from the best

  20. Establishment probability in newly founded populations

    Directory of Open Access Journals (Sweden)

    Gusset Markus

    2012-06-01

    Full Text Available Abstract Background Establishment success in newly founded populations relies on reaching the established phase, which is defined by characteristic fluctuations of the population’s state variables. Stochastic population models can be used to quantify the establishment probability of newly founded populations; however, so far no simple but robust method for doing so existed. To determine a critical initial number of individuals that need to be released to reach the established phase, we used a novel application of the “Wissel plot”, where –ln(1 – P0(t is plotted against time t. This plot is based on the equation P0t=1–c1e–ω1t, which relates the probability of extinction by time t, P0(t, to two constants: c1 describes the probability of a newly founded population to reach the established phase, whereas ω1 describes the population’s probability of extinction per short time interval once established. Results For illustration, we applied the method to a previously developed stochastic population model of the endangered African wild dog (Lycaon pictus. A newly founded population reaches the established phase if the intercept of the (extrapolated linear parts of the “Wissel plot” with the y-axis, which is –ln(c1, is negative. For wild dogs in our model, this is the case if a critical initial number of four packs, consisting of eight individuals each, are released. Conclusions The method we present to quantify the establishment probability of newly founded populations is generic and inferences thus are transferable to other systems across the field of conservation biology. In contrast to other methods, our approach disaggregates the components of a population’s viability by distinguishing establishment from persistence.

  1. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong ...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.......In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...

  2. Risk ratios for use in establishing dose limits for occupational exposure to radiation

    International Nuclear Information System (INIS)

    Metcalf, P.E.; Winkler, B.C.

    1980-01-01

    Dose limits for occupational exposure to radiation may be established by comparing the associated mortality risk with apparently accepted levels of industrial mortality risk due to conventional hazards. Average levels of industrial mortality risk rates are frequently quoted and used in such comparisons. However, within particular occupations or industries certain groups of workers will be exposed to higher levels of risk than the average, again an apparently accepted situation. A study has been made of the ratios of maximum to average industrial mortality risk currently experienced in some South African industries. Such a ratio may be used to assess the acceptability of maximum individual-to-average exposures in particular groups of exposed individuals. (author)

  3. Comments on the Joint Proposed Rulemaking to Establish Light-Duty Vehicle Greenhouse Gas Emission Standards and Corporate Average Fuel Economy Standards

    Energy Technology Data Exchange (ETDEWEB)

    Wenzel, Tom [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2009-10-27

    Tom Wenzel of Lawrence Berkeley National Laboratory comments on the joint rulemaking to establish greenhouse gas emission and fuel economy standards for light-duty vehicle, specifically on the relationship between vehicle weight and vehicle safety.

  4. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

    One widely used definition of validation is that the quantitative test of the performance of a model through the comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the test of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  5. 78 FR 37104 - Establishment of Area Navigation (RNAV) Routes; Washington, DC

    Science.gov (United States)

    2013-06-20

    ...-0081; Airspace Docket No. 12-AEA-5] RIN 2120-AA66 Establishment of Area Navigation (RNAV) Routes... Register approves this incorporation by reference action under 1 CFR part 51, subject to the annual... establishing five RNAV routes in the Washington, DC area (78 FR 29615). Subsequent to publication, it was...

  6. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler- α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  7. Laser properties of an improved average-power Nd-doped phosphate glass

    International Nuclear Information System (INIS)

    Payne, S.A.; Marshall, C.D.; Bayramian, A.J.

    1995-01-01

    The Nd-doped phosphate laser glass described herein can withstand 2.3 times greater thermal loading without fracture, compared to APG-1 (commercially-available average-power glass from Schott Glass Technologies). The enhanced thermal loading capability is established on the basis of the intrinsic thermomechanical properties (expansion, conduction, fracture toughness, and Young's modulus), and by direct thermally-induced fracture experiments using Ar-ion laser heating of the samples. This Nd-doped phosphate glass (referred to as APG-t) is found to be characterized by a 29% lower gain cross section and a 25% longer low-concentration emission lifetime

  8. Working part-time: (not) a problem?

    NARCIS (Netherlands)

    Saskia Keuzenkamp; Carlien Hillebrink; Wil Portegijs; Babette Pouwels

    2009-01-01

    Original title: Deeltijd (g)een probleem. Three-quarters of working women in the Netherlands work part-time. More than half these women are in small part-time jobs (less than 25 hours per week). The government wants to raise the average working hours of women. A key question is then how much

  9. Establishment of the Integrated Plant Data Warehouse

    International Nuclear Information System (INIS)

    Oota, Yoshimi; Yoshinaga, Toshiaki

    1999-01-01

    This paper presents 'The Establishment of the Integrated Plant Data Warehouse and Verification Tests on Inter-corporate Electronic Commerce based on the Data Warehouse (PDWH)', one of the 'Shared Infrastructure for the Electronic Commerce Consolidation Project', promoted by the Ministry of International Trade and Industry (MITI) through the Information-Technology Promotion Agency (IPA), Japan. A study group called Japan Plant EC (PlantEC) was organized to perform relevant activities. One of the main activities of plantEC involves the construction of the Integrated (including manufacturers, engineering companies, plant construction companies, and machinery and parts manufacturers, etc.) Data Warehouse which is an essential part of the infrastructure necessary for a system to share information on industrial life cycle ranging from planning/designing to operation/maintenance. Another activity is the utilization of this warehouse for the purpose of conducting verification tests to prove its usefulness. Through these verification tests, PlantEC will endeavor to establish a warehouse with standardized data which can be used for the infrastructure of EC in the process plant industry. (author)

  10. Establishment of the Integrated Plant Data Warehouse

    Energy Technology Data Exchange (ETDEWEB)

    Oota, Yoshimi; Yoshinaga, Toshiaki [Hitachi Works, Hitachi Ltd., hitachi, Ibaraki (Japan)

    1999-07-01

    This paper presents 'The Establishment of the Integrated Plant Data Warehouse and Verification Tests on Inter-corporate Electronic Commerce based on the Data Warehouse (PDWH)', one of the 'Shared Infrastructure for the Electronic Commerce Consolidation Project', promoted by the Ministry of International Trade and Industry (MITI) through the Information-Technology Promotion Agency (IPA), Japan. A study group called Japan Plant EC (PlantEC) was organized to perform relevant activities. One of the main activities of plantEC involves the construction of the Integrated (including manufacturers, engineering companies, plant construction companies, and machinery and parts manufacturers, etc.) Data Warehouse which is an essential part of the infrastructure necessary for a system to share information on industrial life cycle ranging from planning/designing to operation/maintenance. Another activity is the utilization of this warehouse for the purpose of conducting verification tests to prove its usefulness. Through these verification tests, PlantEC will endeavor to establish a warehouse with standardized data which can be used for the infrastructure of EC in the process plant industry. (author)

  11. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  12. The consequences of time averaging for measuring temporal species turnover in the fossil record

    Science.gov (United States)

    Tomašových, Adam; Kidwell, Susan

    2010-05-01

    Modeling time averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling-up time-averaging effects because the decrease in temporal resolution and the decrease in temporal inter-sample separation (i.e., the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can lead to unprecedented effects on temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time-averaging in simulated fossil assemblages. Time averaging is also associated with the reduction of species dominance owing to the temporal switching in the identity of dominant species. High degrees of time averaging can cause that community parameters of local fossil assemblages converge to parameters of metacommunity rather that to parameters of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone, and low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of neutral model (i.e., species do not differ in birth, death, and immigration rates on per capita basis). The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as evolution of niches and dispersal, range-limit dynamics, taxon cycles, and

  13. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F14 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F14 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  14. RSS SSM/I OCEAN PRODUCT GRIDS 3-DAY AVERAGE FROM DMSP F10 NETCDF V7

    Data.gov (United States)

    National Aeronautics and Space Administration — The RSS SSM/I Ocean Product Grids 3-Day Average from DMSP F10 netCDF dataset is part of the collection of Special Sensor Microwave/Imager (SSM/I) and Special Sensor...

  15. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  16. 75 FR 41796 - National School Lunch, Special Milk, and School Breakfast Programs, National Average Payments...

    Science.gov (United States)

    2010-07-19

    ... National Average Payments for free, reduced price and paid afterschool snacks as part of the National...; Hawaii--free lunch-- 288 cents, reduced price lunch--248 cents. Afterschool Snacks in Afterschool Care Programs--The payments are: Contiguous States--free snack--74 cents, reduced price snack--37 cents, paid...

  17. 76 FR 43256 - National School Lunch, Special Milk, and School Breakfast Programs, National Average Payments...

    Science.gov (United States)

    2011-07-20

    ... National Average Payments for free, reduced price and paid afterschool snacks as part of the National...; Hawaii--free lunch-- 294 cents, reduced price lunch--254 cents. Afterschool Snacks in Afterschool Care Programs--The payments are: Contiguous States--free snack--76 cents, reduced price snack--38 cents, paid...

  18. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.

  19. Relating structural growth environment to white spruce sapling establishment at the Forest-Tundra Ecotone

    Science.gov (United States)

    Maguire, A.; Boelman, N.; Griffin, K. L.; Jensen, J.; Hiers, E.; Johnson, D. M.; Vierling, L. A.; Eitel, J.

    2017-12-01

    The effect of climate change on treeline position at the latitudinal Forest-Tundra ecotone (FTE) is poorly understood. While the FTE is expansive (stretching 13,000 km acros the panarctic), understanding relationships between climate and tree function may depend on very fine scale processes. High resolution tools are therefore needed to appropriately characterize the leading (northernmost) edge of the FTE. We hypothesized that microstructural metrics obtainable from lidar remote sensing may explain variation in the physical growth environment that governs sapling establishment. To test our hypothesis, we used terrestrial laser scanning (TLS) to collect highly spatially resolved 3-D structural information of white spruce (Picea glauca) saplings and their aboveground growth environment at the leading edge of a FTE in northern Alaska and Northwest Territories, Canada. Coordinates of sapling locations were extracted from the 3-D TLS data. Within each sampling plot, 20 sets of coordinates were randomly selected from regions where no saplings were present. Ground roughness, canopy roughness, average aspect, average slope, average curvature, wind shelter index, and wetness indexwere extracted from point clouds within a variable radius from all coordinates. Generalized linear models (GLM) were fit to determine which microstructural metrics were most strongly associated with sapling establishment. Preliminary analyses of three plots suggest that vegetation roughness, wetness index, ground roughness, and slope were the most important terrain metrics governing sapling presence (Figure 1). Comprehensive analyses will include eight plots and GLMs optimized for scale at which structural parameters affect sapling establishment. Spatial autocorrelation of sample locations will be accounted for in models. Because these analyses address how the physical growth environment affects sapling establishment, model outputs will provide information for improving understanding of the

  20. Analysis of photosynthate translocation velocity and measurement of weighted average velocity in transporting pathway of crops

    International Nuclear Information System (INIS)

    Ge Cailin; Luo Shishi; Gong Jian; Zhang Hao; Ma Fei

    1996-08-01

    The translocation profile pattern of 14 C-photosynthate along the transporting pathway in crops were monitored by pulse-labelling a mature leaf with 14 CO 2 . The progressive spreading of translocation profile pattern along the sheath or stem indicates that the translocation of photosynthate along the sheath or stem proceed with a range of velocities rather than with just a single velocity. The method for measuring the weighted average velocity of photosynthate translocation along the sheath or stem was established in living crops. The weighted average velocity and the maximum velocity of photosynthate translocation along the sheath in rice and maize were measured actually. (4 figs., 3 tabs.)

  1. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  2. 78 FR 66276 - Determination of Rates and Terms for Business Establishment Services

    Science.gov (United States)

    2013-11-05

    ...). The revisions read as follows: Sec. 384.4 Terms for making payment of royalty fees and statements of... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 384 [Docket No. 2012-1 CRB Business Establishments II] Determination of Rates and Terms for Business Establishment Services AGENCY: Copyright Royalty...

  3. Determination of averaged axisymmetric flow surfaces according to results obtained by numerical simulation of flow in turbomachinery

    Directory of Open Access Journals (Sweden)

    Bogdanović-Jovanović Jasmina B.

    2012-01-01

    Full Text Available In the increasing need for energy saving worldwide, the designing process of turbomachinery, as an essential part of thermal and hydroenergy systems, goes in the direction of enlarging efficiency. Therefore, the optimization of turbomachinery designing strongly affects the energy efficiency of the entire system. In the designing process of turbomachinery blade profiling, the model of axisymmetric fluid flows is commonly used in technical practice, even though this model suits only the profile cascades with infinite number of infinitely thin blades. The actual flow in turbomachinery profile cascades is not axisymmetric, and it can be fictively derived into the axisymmetric flow by averaging flow parameters in the blade passages according to the circular coordinate. Using numerical simulations of flow in turbomachinery runners, its operating parameters can be preliminarily determined. Furthermore, using the numerically obtained flow parameters in the blade passages, averaged axisymmetric flow surfaces in blade profile cascades can also be determined. The method of determination of averaged flow parameters and averaged meridian streamlines is presented in this paper, using the integral continuity equation for averaged flow parameters. With thus obtained results, every designer can be able to compare the obtained averaged flow surfaces with axisymmetric flow surfaces, as well as the specific work of elementary stages, which are used in the procedure of blade designing. Numerical simulations of flow in an exemplary axial flow pump, used as a part of the thermal power plant cooling system, were performed using Ansys CFX. [Projekat Ministarstva nauke Republike Srbije, br. TR33040: Revitalization of existing and designing new micro and mini hydropower plants (from 100 kW to 1000 kW in the territory of South and Southeast Serbia

  4. Graduates of different UK medical schools show substantial differences in performance on MRCP(UK Part 1, Part 2 and PACES examinations

    Directory of Open Access Journals (Sweden)

    Mollon Jennifer

    2008-02-01

    Full Text Available Abstract Background The UK General Medical Council has emphasized the lack of evidence on whether graduates from different UK medical schools perform differently in their clinical careers. Here we assess the performance of UK graduates who have taken MRCP(UK Part 1 and Part 2, which are multiple-choice assessments, and PACES, an assessment using real and simulated patients of clinical examination skills and communication skills, and we explore the reasons for the differences between medical schools. Method We perform a retrospective analysis of the performance of 5827 doctors graduating in UK medical schools taking the Part 1, Part 2 or PACES for the first time between 2003/2 and 2005/3, and 22453 candidates taking Part 1 from 1989/1 to 2005/3. Results Graduates of UK medical schools performed differently in the MRCP(UK examination between 2003/2 and 2005/3. Part 1 and 2 performance of Oxford, Cambridge and Newcastle-upon-Tyne graduates was significantly better than average, and the performance of Liverpool, Dundee, Belfast and Aberdeen graduates was significantly worse than average. In the PACES (clinical examination, Oxford graduates performed significantly above average, and Dundee, Liverpool and London graduates significantly below average. About 60% of medical school variance was explained by differences in pre-admission qualifications, although the remaining variance was still significant, with graduates from Leicester, Oxford, Birmingham, Newcastle-upon-Tyne and London overperforming at Part 1, and graduates from Southampton, Dundee, Aberdeen, Liverpool and Belfast underperforming relative to pre-admission qualifications. The ranking of schools at Part 1 in 2003/2 to 2005/3 correlated 0.723, 0.654, 0.618 and 0.493 with performance in 1999–2001, 1996–1998, 1993–1995 and 1989–1992, respectively. Conclusion Candidates from different UK medical schools perform differently in all three parts of the MRCP(UK examination, with the

  5. Beyond Neighborhood Food Environments: Distance Traveled to Food Establishments in 5 US Cities, 2009-2011.

    Science.gov (United States)

    Liu, Jodi L; Han, Bing; Cohen, Deborah A

    2015-08-06

    Accurate conceptualizations of neighborhood environments are important in the design of policies and programs aiming to improve access to healthy food. Neighborhood environments are often defined by administrative units or buffers around points of interest. An individual may eat and shop for food within or outside these areas, which may not reflect accessibility of food establishments. This article examines the relevance of different definitions of food environments. We collected data on trips to food establishments using a 1-week food and travel diary and global positioning system devices. Spatial-temporal clustering methods were applied to identify homes and food establishments visited by study participants. We identified 513 visits to food establishments (sit-down restaurants, fast-food/convenience stores, malls or stores, groceries/supermarkets) by 135 participants in 5 US cities. The average distance between the food establishments and homes was 2.6 miles (standard deviation, 3.7 miles). Only 34% of the visited food establishments were within participants' neighborhood census tract. Buffers of 1 or 2 miles around the home covered 55% to 65% of visited food establishments. There was a significant difference in the mean distances to food establishments types (P = .008). On average, participants traveled the longest distances to restaurants and the shortest distances to groceries/supermarkets. Many definitions of the neighborhood food environment are misaligned with individual travel patterns, which may help explain the mixed findings in studies of neighborhood food environments. Neighborhood environments defined by actual travel activity may provide more insight on how the food environment influences dietary and food shopping choices.

  6. Longitudinal Patterns of Employment and Postsecondary Education for Adults with Autism and Average-Range IQ

    Science.gov (United States)

    Taylor, Julie Lounds; Henninger, Natalie A.; Mailick, Marsha R.

    2015-01-01

    This study examined correlates of participation in postsecondary education and employment over 12?years for 73 adults with autism spectrum disorders and average-range IQ whose families were part of a larger, longitudinal study. Correlates included demographic (sex, maternal education, paternal education), behavioral (activities of daily living,…

  7. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  8. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensivey with an example of 168 Er data. 19 figures, 2 tables

  9. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure, a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe, but their ability to discriminate individual item severity has not been evaluated.Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624. Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level.Results: A probability >50% for a better outcome (less severe pain was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain and highest probability was 76.4% (on cold/heat for mild vs severe pain. The pain radiation item was significant (P<0.05 and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ.Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain

  10. 75 FR 14361 - Notification, Documentation, and Recordkeeping Requirements for Inspected Establishments

    Science.gov (United States)

    2010-03-25

    ... error. Table 3--Number of Establishments, and Total and Average Cost in Size (x $1,000) Recall Number of... Activities (2010) (2011) (2012) (2013) (2014) Total Very Small 2,856 Recall-Procedures 2,030 278 286 295 304... 461 Total All 6,300 Recall-Procedures 4,454 610 628 647 666 7,005 development & updating. Documenting...

  11. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  12. FPGA based computation of average neutron flux and e-folding period for start-up range of reactors

    International Nuclear Information System (INIS)

    Ram, Rajit; Borkar, S.P.; Dixit, M.Y.; Das, Debashis

    2013-01-01

    Pulse processing instrumentation channels used for reactor applications, play a vital role to ensure nuclear safety in startup range of reactor operation and also during fuel loading and first approach to criticality. These channels are intended for continuous run time computation of equivalent reactor core neutron flux and e-folding period. This paper focuses only the computational part of these instrumentation channels which is implemented in single FPGA using 32-bit floating point arithmetic engine. The computations of average count rate, log of average count rate, log rate and reactor period are done in VHDL using digital circuit realization approach. The computation of average count rate is done using fully adaptive window size moving average method, while Taylor series expansion for logarithms is implemented in FPGA to compute log of count rate, log rate and reactor e-folding period. This paper describes the block diagrams of digital logic realization in FPGA and advantage of fully adaptive window size moving average technique over conventional fixed size moving average technique for pulse processing of reactor instrumentations. (author)

  13. Examining Daily Electronic Cigarette Puff Topography Among Established and Non-established Cigarette Smokers in their Natural Environment.

    Science.gov (United States)

    Lee, Youn Ok; Nonnemaker, James M; Bradfield, Brian; Hensel, Edward C; Robinson, Risa J

    2017-10-04

    Understanding exposures and potential health effects of ecigarettes is complex. Users' puffing behavior, or topography, affects function of ecigarette devices (e.g., coil temperature) and composition of their emissions. Users with different topographies are likely exposed to different amounts of any harmful or potentially harmful constituents (HPHCs). In this study, we compare ecigarette topographies of established cigarette smokers and non-established cigarette smokers. Data measuring e-cigarette topography were collected using a wireless hand-held monitoring device in users' everyday lives over 1 week. Young adult (aged 18-25) participants (N=20) used disposable e-cigarettes with the monitor as they normally would and responded to online surveys. Topography characteristics of established versus non-established cigarette smokers were compared. On average, established cigarette smokers in the sample had larger first puff volume (130.9ml vs. 56.0ml, pvs. 651.7ml, pnon-established smokers. At marginal significance, they had longer sessions (566.3s vs. 279.7s, p=.06) and used e-cigarettes more sessions per day (5.3s vs. 3.5s, p=.14). Established cigarette smokers also used ecigarettes for longer puff durations (3.3s vs. 1.8s, pvs. 54.7ml, pnon-established smokers. At marginal significance, they had longer puff interval (38.1s vs. 21.7s, p=.05). Our results demonstrate that topography characteristics differ by level of current cigarette smoking. This suggests that exposures to constituents of e-cigarettes depends on user characteristics and that specific topography parameters may be needed for different user populations when assessing ecigarette health effects. A user's topography affects his or her exposure to HPHCs. As this study demonstrates, user characteristics, such as level of smoking, can influence topography. Thus, it is crucial to understand the topography profiles of different user types to assess the potential for population harm and to identify potentially

  14. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) average is considered the most reliable for simulating both present-day and future climates. It has been a primary reference for making conclusions in major coordinated studies i.e. IPCC Assessment Reports and CORDEX. The biases of individual models cancel out each other in MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement however comes with tremendous computational cost, which is especially inhibiting for regional climate modeling as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of cost compared with the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combination of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  15. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  16. 5 CFR 630.303 - Part-time employees; earnings.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Part-time employees; earnings. 630.303... AND LEAVE Annual Leave § 630.303 Part-time employees; earnings. A part-time employee for whom there... workweek, and a part-time employee on a flexible work schedule for whom there has been established only a...

  17. Campaign 1999-2001 of radon measurement in the establishments receiving public

    International Nuclear Information System (INIS)

    2001-11-01

    After some elements of context on the radon measurement in France, and a presentation of realised actions in 2001 by the Ministry in charge of health to manage the radon risk, this document exposes a synthesis in three parts on the situation of radon measurement campaigns in the establishments receiving public. The first part gives the methodology followed to make this state, the second part presents the synthetic results by department, and the last one the results at the regional level. (N.C.)

  18. A primal sub-gradient method for structured classification with the averaged sum loss

    Directory of Open Access Journals (Sweden)

    Mančev Dejan

    2014-12-01

    Full Text Available We present a primal sub-gradient method for structured SVM optimization defined with the averaged sum of hinge losses inside each example. Compared with the mini-batch version of the Pegasos algorithm for the structured case, which deals with a single structure from each of multiple examples, our algorithm considers multiple structures from a single example in one update. This approach should increase the amount of information learned from the example. We show that the proposed version with the averaged sum loss has at least the same guarantees in terms of the prediction loss as the stochastic version. Experiments are conducted on two sequence labeling problems, shallow parsing and part-of-speech tagging, and also include a comparison with other popular sequential structured learning algorithms.

  19. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    Science.gov (United States)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of parameters in moving average method to enhance the event detectability of phase sensitive optical time domain reflectometer (OTDR). If the external events have unique frequency of vibration, then the control parameters of moving average method should be optimized in order to detect these events efficiently. A phase sensitive OTDR was implemented by a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and high speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M, number of averaged traces, N, and step size of moving, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. In the result, if the event signal has one frequency, then the optimal values of N, n are existed to detect the event efficiently.

  20. Establishment of an animal model of mice with radiation- injured soft tissue blood vessels

    International Nuclear Information System (INIS)

    Wang Daiyou; Yu Dahai; Wu Jiaxiao; Wei Shanliang; Wen Yuming

    2004-01-01

    Objective: The aim of this study was to establish an animal model of mice with radiation-injured soft tissue blood vessels. Methods: Forty male mice were irradiated with 30 Gy on the right leg. After the irradiation was finished each of the 40 male mice was tested with angiography, and its muscle tissues on the bilateral legs were examined with vessel staining assay and electron microscopy. Results: The results showed that the number of vessels on the right leg was less than that on the left leg, the microvessel density, average diameter and average sectional area of the right leg were all lower than those of the left, and the configuration and ultra-structure of vessels were also different between both sides of legs. Conclusion: In the study authors successfully established an animal model of mice with radiation-injured soft tissue blood vessels

  1. The value to blood establishments of supplier quality audit and of adopting a European Blood Alliance collaborative approach

    Science.gov (United States)

    Nightingale, Mark J.; Ceulemans, Jan; Ágoston, Stephanie; van Mourik, Peter; Marcou-Cherdel, Céline; Wickens, Betty; Johnstone, Pauline

    2014-01-01

    Background The assessment of suppliers of critical goods and services to European blood establishments is a regulatory requirement proving difficult to resource. This study was to establish whether European Blood Alliance member blood services could collaborate to reduce the cost of auditing suppliers without diminishing standards. Materials and method Five blood services took part, each contributing a maximum of one qualified auditor per audit (rather than the usual two). Four audits were completed involving eight auditors in total to a European Blood Alliance agreed policy and process using an audit scope agreed with suppliers. Results Audits produced a total of 22 observations, the majority relating to good manufacturing practice and highlighted deficiencies in processes, procedures and quality records including complaints’ handling, product recall, equipment calibration, management of change, facilities’ maintenance and monitoring and business continuity. Auditors reported that audits had been useful to their service and all audits prompted a positive response from suppliers with satisfactory corrective action plans where applicable. Audit costs totalled € 3,438 (average € 860 per audit) which is no more than equivalent traditional audits. The four audit reports have been shared amongst the five participating blood establishments and benefitted 13 recipient departments in total. Previously, 13 separate audits would have been required by the five blood services. Discussion Collaborative supplier audit has proven an effective and efficient initiative that can reduce the resource requirements of both suppliers and individual blood service’s auditing costs. Collaborative supplier audit has since been established within routine European Blood Alliance management practice. PMID:24553596

  2. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  3. Preliminary design of the BPM electronics memory scanner/dual boxcar averager for the Advanced Photon Source

    International Nuclear Information System (INIS)

    Votaw, A.J.

    1992-01-01

    The memory scanner/dual boxcar averager are VXI modules that are part of the Advanced Photon Source (APS) beam position monitor (BPM) data acquisition system. Each pair of modules is designed to gather and process digital data from up to nine digital channels transmitting the BPM data from the storage ring (360 locations) and the synchrotron (80 locations). They store beam history in a buffer, store the latest scan of all channels, and provide boxcar averaged X and Y position data for the global orbit feedback system, provide boxcar average X and Y position data for beam diagnostics, and a buffered output of SCDU data as it is scanned for the beam abort interlock system. The system's capability to support single pass, closed orbit and tune measurement functions will also be briefly described

  4. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de

  5. Time-averaged second-order pressure and velocity measurements in a pressurized oscillating flow prime mover

    Energy Technology Data Exchange (ETDEWEB)

    Paridaens, Richard [DynFluid, Arts et Metiers, 151 boulevard de l' Hopital, Paris (France); Kouidri, Smaine [LIMSI-CNRS, Orsay Cedex (France)

    2016-11-15

    Nonlinear phenomena in oscillating flow devices cause the appearance of a relatively minor secondary flow known as acoustic streaming, which is superimposed on the primary oscillating flow. Knowledge of control parameters, such as the time-averaged second-order velocity and pressure, would elucidate the non-linear phenomena responsible for this part of the decrease in the system's energetic efficiency. This paper focuses on the characterization of a travelling wave oscillating flow engine by measuring the time-averaged second order pressure and velocity. Laser Doppler velocimetry technique was used to measure the time-averaged second-order velocity. As streaming is a second-order phenomenon, its measurement requires specific settings especially in a pressurized device. Difficulties in obtaining the proper settings are highlighted in this study. The experiments were performed for mean pressures varying from 10 bars to 22 bars. Non-linear effect does not constantly increase with pressure.

  6. Evaluating Competitiveness of Faculties of Higher Educational Establishments in Slovakia

    Directory of Open Access Journals (Sweden)

    Rayevnyeva Olena V.

    2016-02-01

    Full Text Available The problem of competitiveness of higher education, efficiency of its functioning and training graduates of higher educational establishments according to the current and future needs of the market are among the key issues of socio-economic development strategy in EU countries. The aim of the study is to determine the competitiveness of faculties of major higher educational establishments based on the use of the cluster analysis and rating evaluations provided by national experts. The paper describes the methodology of rating evaluation of faculties of higher educational establishments in Slovakia on the basis of such components as: educational process; attractiveness of the program; science and research activities; doctoral studies; attracted grants. Shortcomings of the approach to faculty rating evaluations based on the averaged value have been determined. In order to improve analysis of the competitive positions of individual faculties of higher educational establishments in Slovakia, the cluster analysis was used and the results of breaking the faculties into five groups were presented. To forecast changes in the competitive positions of faculties of higher educational establishments in Slovakia, discriminant functions enabling to determine possible qualitative changes in the state of the faculties’ competitiveness due to external or internal factors have been built.

  7. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 A for the averaged structures. However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%; in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38 of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA 1, our approach produces representative structure of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction 2, which

  8. Caregiver's Feeding Styles Questionnaire. Establishing cutoff points.

    Science.gov (United States)

    Hughes, Sheryl O; Cross, Matthew B; Hennessy, Erin; Tovar, Alison; Economos, Christina D; Power, Thomas G

    2012-02-01

    Researchers use the Caregiver's Feeding Styles Questionnaire (CFSQ) to categorize parent feeding into authoritative, authoritarian, indulgent, and uninvolved styles. The CFSQ assesses self-reported feeding and classifies parents using median splits which are used in a substantial body of parenting literature and allow for direct comparison across studies on dimensions of demandingness and responsiveness. No national norms currently exist for the CFSQ. This paper establishes and recommends cutoff points most relevant for low-income, minority US samples that researchers and clinicians can use to assign parents to feeding styles. Median scores for five studies are examined and the average across these studies reported. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. 48 CFR 12.000 - Scope of part.

    Science.gov (United States)

    2010-10-01

    ... (Public Law 103-355) by establishing acquisition policies more closely resembling those of the commercial... ACQUISITION OF COMMERCIAL ITEMS 12.000 Scope of part. This part prescribes policies and procedures unique to the acquisition of commercial items. It implements the Federal Government's preference for the...

  10. Principles to establish a culture of the radiological protection

    International Nuclear Information System (INIS)

    Tovar M, V. M.

    2013-10-01

    The term of Culture of the Radiological Protection means the way in which the radiological protection is founded, regulated, managed, preserved and perceived in the job places, with the use of the ionizing radiations, in the industry, in medicine and in any daily activity that reflects the activities, beliefs, perceptions, goals and values that all the involved parts concern in relation to the radiological protection. The principles to establish a culture of the radiological protection that should be established by the professionals of the radiological protection, following the recommendations of the International Radiological Protection Association (IRPA) are presented. (author)

  11. Museums and Philosophy--Of Art, and Many Other Things. Part 2.

    OpenAIRE

    Gaskell, Ivan

    2012-01-01

    This two-part article examines the very limited engagement by philosophers with museums, and proposes analysis under six headings: cultural variety, taxonomy, and epistemology in Part I, and teleology, ethics, and therapeutics and aesthetics in Part II. The article establishes that fundamental categories of museums established in the 19th century – of art, of anthropology, of history, of natural history, of science and technology – still persist. Among them, it distinguishes between hegemonic...

  12. The Dynamics of Gender Earnings Differentials: Evidence from Establishment Data

    OpenAIRE

    Barth, Erling; Pekkala Kerr, Sari; Olivetti, Claudia

    2017-01-01

    We use a unique match between the 2000 Decennial Census of the United States and the Longitudinal Employer Household Dynamics (LEHD) data to analyze how much of the increase in the gender earnings gap over the lifecycle comes from shifts in the sorting of men and women across high- and low-pay establishments and how much is due to differential earnings growth within establishments. We find that for the college educated the increase is substantial and, for the most part, due to differential ea...

  13. Five Strategies of Successful Part-Time Work.

    Science.gov (United States)

    Corwin, Vivien; Lawrence, Thomas B.; Frost, Peter J.

    2001-01-01

    Identifies commonalities in the approaches of successful part-time professionals. Discusses five strategies for success: (1) communicating work-life priorities and schedules to the organization; (2) making the business case for part-time arrangements; (3) establishing time management routines; (4) cultivating advocates in senior management; and…

  14. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  15. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes: (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR) on the average spectral efficiency (ASE) are explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regime. The coherent OWC system with ORA excels the other modulation schemes and could achieve ASE performance of 49.8 bits/s/Hz at the average transmitted optical power of 6 dBm under strong turbulence. By adding aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation as a favorable candidate for improving the ASE of the FSO communication system.

  16. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  17. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  18. A Novel Busbar Protection Based on the Average Product of Fault Components

    Directory of Open Access Journals (Sweden)

    Guibin Zou

    2018-05-01

    Full Text Available This paper proposes an original busbar protection method, based on the characteristics of the fault components. The method firstly extracts the fault components of the current and voltage after the occurrence of a fault, secondly it uses a novel phase-mode transformation array to obtain the aerial mode components, and lastly, it obtains the sign of the average product of the aerial mode voltage and current. For a fault on the busbar, the average products that are detected on all of the lines that are linked to the faulted busbar are all positive within a specific duration of the post-fault. However, for a fault at any one of these lines, the average product that has been detected on the faulted line is negative, while those on the non-faulted lines are positive. On the basis of the characteristic difference that is mentioned above, the identification criterion of the fault direction is established. Through comparing the fault directions on all of the lines, the busbar protection can quickly discriminate between an internal fault and an external fault. By utilizing the PSCAD/EMTDC software (4.6.0.0, Manitoba HVDC Research Centre, Winnipeg, MB, Canada, a typical 500 kV busbar model, with one and a half circuit breakers configuration, was constructed. The simulation results show that the proposed busbar protection has a good adjustability, high reliability, and rapid operation speed.

  19. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimer has wide number of important applications in various fields. In some cases during transport or diffusion process, it transforms into its dual structure named Husimi cactus. In this paper, we study the structure properties and trapping problem on a family of generalized dual dendrimer with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap is uniformly distributed in all the nodes of the network. In both case, we obtain explicit solutions of ATT and show how they vary with the networks size. Besides, we also discuss the influence of the coordination number on trapping efficiency.

  20. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  1. Fourwing saltbrush establishment in the Keating Uniform Shrub Garden—first year results.

    Science.gov (United States)

    J. Michael Geist; Paul J. Edgerton

    1984-01-01

    Site preparation techniques to aid establishment of fourwing saltbush (Atriplex canescens) were compared at a test location in eastern Oregon. Survival and growth of transplanted seedlings were improved after one season of growth by either spot spraying with herbicides or scalping to reduce competing vegetation. Average growth of seedlings was...

  2. Averaged multivalued solutions and time discretization for conservation laws

    International Nuclear Information System (INIS)

    Brenier, Y.

    1985-01-01

    It is noted that the correct shock solutions can be approximated by averaging in some sense the multivalued solution given by the method of characteristics for the nonlinear scalar conservation law (NSCL). A time discretization for the NSCL equation based on this principle is considered. An equivalent analytical formulation is shown to lead quite easily to a convergence result, and a third formulation is introduced which can be generalized for the systems of conservation laws. Various numerical schemes are constructed from the proposed time discretization. The first family of schemes is obtained by using a spatial grid and projecting the results of the time discretization. Many known schemes are then recognized (mainly schemes by Osher, Roe, and LeVeque). A second way to discretize leads to a particle scheme without space grid, which is very efficient (at least in the scalar case). Finally, a close relationship between the proposed method and the Boltzmann type schemes is established. 14 references

  3. 2 CFR 175.5 - Purpose of this part.

    Science.gov (United States)

    2010-01-01

    ... Reserved AWARD TERM FOR TRAFFICKING IN PERSONS § 175.5 Purpose of this part. This part establishes a Governmentwide award term for grants and cooperative agreements to implement the requirement in paragraph (g) of... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Purpose of this part. 175.5 Section 175.5...

  4. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises of a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over" . In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  5. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  6. Establishing QC/QA system in the fabrication of nuclear fuel assemblies

    International Nuclear Information System (INIS)

    Suh, K.S.; Choi, S.K.; Park, H.G.; Park, T.G.; Chung, J.S.

    1980-01-01

    Quality control instruction manuals and inspection methods for UO 2 powder and zircaloy materials as the material control, and for UO 2 pellets and nuclear fuel rods as the process control were established. And for the establishment of Q.A programme, the technical specifications of the purchased materials, the control regulation of the measuring and testing equipments, and traceability chart as a part of document control have also been provided and practically applied to the fuel fabrication process

  7. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A 3D surface scanner Fiore and its software were used to acquire the 3D scans of the faces while 3D Rugle3 and locally-developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups; European and Japanese and from children with three previous genetic disorders; Williams syndrome, achondroplasia and Sotos syndrome as well as the normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in the spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have a great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.

  8. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Frechet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of scaling of the covariation operator of a Gaussian measure and the second (Frechet) derivative of a functional. In this way we couple classical average (given by an infinite-dimensional Gaussian integral) and quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'

  9. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω eff 0 ≈ 4 × 10 −6 , with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10 −8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w eff < −1/3 can be found for strongly phantom models

  10. 33 CFR Appendix E to Part 157 - Specifications for the Design, Installation and Operation of a Part Flow System for Control of...

    Science.gov (United States)

    2010-07-01

    ... established by the Administration and which shall at least contain all the provisions of these Specifications. 2.2The part flow concept is based on the principle that the observation of a representative part... arrangements acceptable to the Administration. 3.3The display of the part flow shall be arranged in a sheltered...

  11. The egg consumption of the average household in Italy.

    Science.gov (United States)

    Prencipe, Vincenza; Rizzi, Valentina; Giovannini, Armando; Migliorati, Giacomo

    2010-01-01

    A survey was conducted over a one-year period by means of telephone interviews with 7 991 Italian households to establish the domestic consumption of eggs, the distribution by source of supply, seasonal variations and storage and preparation methods used. Eggs are mainly purchased from large retailers (53%), followed by small retailers (25.2%), direct purchase from producers (16%), and local or itinerant markets (5.8%). It was found that 69.9% of households buy packaged eggs; 92% of households store them in the refrigerator, although this percentage varies considerably, according to the type of presentation (packaged or loose) and the number of eggs bought. Italian households mainly eat eggs cooked (48.9%), followed by partly cooked (35.0%) and raw (16.1%).

  12. The egg consumption of the average household in Italy

    Directory of Open Access Journals (Sweden)

    Giacomo Migliorati

    2010-09-01

    Full Text Available A survey was conducted over a one-year period by means of telephone interviews with 7 991 Italian households to establish the domestic consumption of eggs, the distribution by source of supply, seasonal variations and storage and preparation methods used. Eggs are mainly purchased from large retailers (53%, followed by small retailers (25.2%, direct purchase from producers (16%, and local or itinerant markets (5.8%. It was found that 69.9% of households buy packaged eggs; 92% of households store them in the refrigerator, although this percentage varies considerably, according to the type of presentation (packaged or loose and the number of eggs bought. Italian households mainly eat eggs cooked (48.9%, followed by partly cooked (35.0% and raw (16.1%.

  13. Maintaining Healthy Skin -- Part 2

    Science.gov (United States)

    ... and SCI • Depression and SCI • Taking Care of Pressure Sores • Maintaining Healthy Skin (Part I) • Maintaining Healthy Skin ( ... For information on establishing skin tolerance, see our “Pressure Sores” pamphlet.) Pressure releases in a wheelchair can be ...

  14. Average and dispersion of the luminosity-redshift relation in the concordance model

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Dayan, I. [DESY Hamburg (Germany). Theory Group; Gasperini, M. [Bari Univ. (Italy). Dipt. di Fisica; Istituto Nazionale di Fisica Nucleare, Bari (Italy); Marozzi, G. [College de France, 75 - Paris (France); Geneve Univ. (Switzerland). Dept. de Physique Theorique and CAP; Nugier, F. [Ecole Normale Superieure CNRS, Paris (France). Laboratoire de Physique Theorique; Veneziano, G. [College de France, 75 - Paris (France); CERN, Geneva (Switzerland). Physics Dept.; New York Univ., NY (United States). Dept. of Physics

    2013-03-15

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order 10{sup -3} - 10{sup -5}, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account the appropriate corrections arising in the non-linear regime, we predict an irreducible scatter of the data approaching the 10% level which, for limited statistics, will necessarily limit the attainable precision. The predicted dispersion appears to be in good agreement with current observational estimates of the distance-modulus variance due to Doppler and lensing effects (at low and high redshifts, respectively), and represents a challenge for future precision measurements.

  15. The Impact of the Location on the Price Offered by Accommodation Establishments in Urban Areas

    Directory of Open Access Journals (Sweden)

    Roman Švec

    2014-01-01

    Full Text Available The aim of this paper is to assess the relationship between the price of accommodation and the urban space and character of the place. The price and spatial connections (together with the quality of the provided services become an important motive for clients when choosing a concrete accommodation establishment. As the competition is very difficult in the field of accommodation establishments and the supply multiply surpasses the demand, there is an intensive search for miscellaneous strategies of engaging in the competition. The built-up territory of the town České Budějovice was chosen as a model territory. The prices of the summer season 2012 have been entered into the analysis. The impact of the location was assessed on the level of the type of land-use. The distribution of the accommodation establishments in the studied is highly uneven, without any more significant tendency to the creation of the spatial clusters. It is fundamentally influenced especially by the distance of the historical center. A price map was formed identifying the zones with the above-average prices, as well as the zones with the highly below-average prices.

  16. Self-similarity of higher-order moving averages

    Science.gov (United States)

    Arianos, Sergio; Carbone, Anna; Türk, Christian

    2011-10-01

    In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by the simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, the increase of polynomial degree does not require to change the moving average window. Thus trends at different time scales can be obtained on data sets with the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).

  17. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a students from eligibility for scholarships, and even continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  18. Part 1: Leadership's Influence on Corporate Entrepreneurship in Indian Textile Firms Part 2: Business Plan: Karma Ltd.

    OpenAIRE

    Muthe, Anshul

    2011-01-01

    As part of my dissertation, i created a business plan for an Indian Textile company to establish its own brand in the Indian market. The current booming economy and increase in purchasing power provides an ideal opportunity for Karma ltd. to establish itself. Second i focused on leadership styles and its influence on corporporate entrepreneuership. The research was based in India and provided great insights on leadership and corporate entreprenuership constructs.

  19. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  20. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain....

  1. Establishing an eastern broccoli industry: where are we and where do we need to go?

    Science.gov (United States)

    A Coordinated Agricultural Project (CAP) entitled “Establishing an Eastern Broccoli Industry” has been underway since the fall of 2010 and funded under the USDA Specialty Crop Research Initiative (SCRI), which was established as part of the 2008 Farm Bill. This project has brought together research...

  2. 75 FR 65585 - Proposed Establishment of Class E Airspace; Wolfeboro, NH

    Science.gov (United States)

    2010-10-26

    ... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 71 [Docket No. FAA-2010...: Federal Aviation Administration (FAA), DOT. ACTION: Notice of proposed rulemaking (NPRM), withdrawal... to establish Class E airspace at Huggins Hospital, Wolfeboro, NH. The NPRM is being withdrawn as a...

  3. Establishment of Trees for Sand Settlement in a Completely Desertified Environment

    NARCIS (Netherlands)

    Nasr Al-amin, N.K.; Stigter, C.J.; El-Tayeb Mohammed, A.

    2006-01-01

    In a desertified source area of drifting sand encroaching on parts of the Gezira Irrigation Scheme in Central Sudan, sand settlement by trees was envisaged. Establishment, survival and growth of Acacia tortilis, Leptadenia pyrotechnica, Prosopis juliflora, Salvadora persica and Panicum turgidum were

  4. [Establishment of mouse endometrial injury model by electrocoagulation].

    Science.gov (United States)

    Hu, Xiaoxiao; Lin, Xiaona; Jiang, Yinshen; Shi, Libing; Wang, Jieyu; Zhao, Lijuan; Zhang, Songying

    2014-12-23

    To establish the murine model of moderate endometrial injury. Electrocoagulation was applied to induce endometrial injury of ICR mice with 0.5 watts power while contralateral uterine cavity acted as control without electrocoagulation. The endometrial histomorphology was observed in 7 days later by microscopy and fetal number of each lateral uterus assessed at 17.5 days after pregnancy. At 7 days post-electrocoagulation, the average endometrial thickness of operating side was significantly thinner than that of control side (1.14 ± 0.08 vs 1.88 ± 0.15 mm, P electrocoagulation injury shows morphologic changes and decreased fertile ability. It has potential uses for animal studies of endometrial injury treatment.

  5. Test of Axel-Brink predictions by a discrete approach to resonance-averaged (n,γ) spectroscopy

    International Nuclear Information System (INIS)

    Raman, S.; Shahal, O.; Slaughter, G.G.

    1981-01-01

    The limitations imposed by Porter-Thomas fluctuations in the study of primary γ rays following neutron capture have been partly overcome by obtaining individual γ-ray spectra from 48 resonances in the 173 Yb(n,γ) reaction and summing them after appropriate normalizations. The resulting average radiation widths (and hence the γ-ray strength function) are in good agreement with the Axel-Brink predictions based on a giant dipole resonance model

  6. Population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space.

    Science.gov (United States)

    Feng, Lei; Jeon, Tina; Yu, Qiaowen; Ouyang, Minhui; Peng, Qinmu; Mishra, Virendra; Pletikos, Mihovil; Sestan, Nenad; Miller, Michael I; Mori, Susumu; Hsiao, Steven; Liu, Shuwei; Huang, Hao

    2017-12-01

    Animal models of the rhesus macaque (Macaca mulatta), the most widely used nonhuman primate, have been irreplaceable in neurobiological studies. However, a population-averaged macaque brain diffusion tensor imaging (DTI) atlas, including comprehensive gray and white matter labeling as well as bony and facial landmarks guiding invasive experimental procedures, is not available. The macaque white matter tract pathways and microstructures have been rarely recorded. Here, we established a population-averaged macaque brain atlas with high-resolution ex vivo DTI integrated into in vivo space incorporating bony and facial landmarks, and delineated microstructures and three-dimensional pathways of major white matter tracts in vivo MRI/DTI and ex vivo (postmortem) DTI of ten rhesus macaque brains were acquired. Single-subject macaque brain DTI template was obtained by transforming the postmortem high-resolution DTI data into in vivo space. Ex vivo DTI of ten macaque brains was then averaged in the in vivo single-subject template space to generate population-averaged macaque brain DTI atlas. The white matter tracts were traced with DTI-based tractography. One hundred and eighteen neural structures including all cortical gyri, white matter tracts and subcortical nuclei, were labeled manually on population-averaged DTI-derived maps. The in vivo microstructural metrics of fractional anisotropy, axial, radial and mean diffusivity of the traced white matter tracts were measured. Population-averaged digital atlas integrated into in vivo space can be used to label the experimental macaque brain automatically. Bony and facial landmarks will be available for guiding invasive procedures. The DTI metric measurements offer unique insights into heterogeneous microstructural profiles of different white matter tracts.

  7. Part two

    DEFF Research Database (Denmark)

    Nielsen, Mads Pagh; Kær, Søren Knudsen; Korsgaard, Anders

    2008-01-01

    A novel micro combined heat and power system and a dynamic model thereof were presented in part one of the publication. In the following, the control system and dynamic performance of the system are presented. The model is subjected to a measured consumption pattern of 25 Danish single family...... houses with measurements of heat, power and hot water consumption every 15th minute during one year. Three scenarios are analyzed ranging from heat following only (grid compensation for electricity) to heat and power following with net export of electricity during high and peak load hours. Average...

  8. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    Science.gov (United States)

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]), error at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.

  9. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    Energy Technology Data Exchange (ETDEWEB)

    Hellaby, Charles, E-mail: Charles.Hellaby@uct.ac.za [Dept. of Maths. and Applied Maths, University of Cape Town, Rondebosch, 7701 (South Africa)

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.

  10. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  11. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficeint to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  12. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We prove the model outcome with examples and simulation results using NS2 simulator.

  13. Establishing a national research center on day care

    DEFF Research Database (Denmark)

    Ellegaard, Tomas

    The paper presents and discusses the current formation of a national research center on ECEC. The center is currently being established. It is partly funded by the Danish union of early childhood and youth educators. It is based on cooperation between a number of Danish universities and this nati...... current new public management policies. However there is also more conflicting issues that emerge in this enterprise – especially on interests, practice relevance and knowledge paradigms....

  14. Establishment for Nuclear Equipment -Overview

    International Nuclear Information System (INIS)

    Pracz, J.

    2006-01-01

    Research and development works conducted in the Establishment for Nuclear Equipment (ZdAJ) were focused around 3 subject areas: an accelerator for cancer treatment, therapeutical tables, systems and methods for controlling objects that cross international borders. The new, medium energy accelerator for cancer therapy cases is being designed in the Establishment for several years. In 2005 progress was achieved. A physical part, containing an electron beam has been completed and the parameters of that beam make it useful for therapeutical purposes. Consequently, the work on designing and testing of beam control systems, ensuring its high stability, repetition of irradiation parameters and accuracy of dosage have been started. Results of these tests make it very probable that 2006 will be the final year of scientific works and in 2007 the new apparatus will be ready for sales. Therapeutical tables have become a leading product of ZdAJ IPJ. Their technical parameters, reliability and universality in uses are appreciated by many customers of ZdAJ. In 2005, the table Polkam 16 was registered by the national Office for Registration of Medical Equipment as the first product of ZdAJ that meets all technical and formal requirements of the safety mark CE. This allows sales of the product on the market of the European Union. The research and development part of designing a therapeutical table for uses in the total body irradiation technique was also concluded in 2005. After the September 11 terrorist attacks on WTC a matter of controlling international borders have become a priority for many countries. In 2005 in ZdAJ IPJ, we conducted many preliminary calculations and experiments analyzing systems of irradiation sources, both photon and neutron as well as systems of detection and designing of signals triggered by controlling objects crossing the border. The results so far have enabled us to formulate a research project which has been positively evaluated by experts and found

  15. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attemps to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  16. Small river plumes off the northeastern coast of the Black Sea under average climatic and flooding discharge conditions

    Science.gov (United States)

    Osadchiev, Alexander; Korshenko, Evgeniya

    2017-06-01

    This study focuses on the impact of discharges of small rivers on the delivery and fate of fluvial water and suspended matter at the northeastern part of the Black Sea under different local precipitation conditions. Several dozens of mountainous rivers flow into the sea at the study region, and most of them, except for several of the largest, have little annual runoff and affect adjacent coastal waters to a limited extent under average climatic conditions. However, the discharges of these small rivers are characterized by a quick response to precipitation events and can significantly increase during and shortly after heavy rains, which are frequent in the considered area. The delivery and fate of fluvial water and terrigenous sediments at the study region, under average climatic and rain-induced flooding conditions, were explored and compared using in situ data, satellite imagery, and numerical modeling. It was shown that the point-source spread of continental discharge dominated by several large rivers under average climatic conditions can change to the line-source discharge from numerous small rivers situated along the coast in response to heavy rains. The intense line-source runoff of water and suspended sediments forms a geostrophic alongshore current of turbid and freshened water, which induces the intense transport of suspended and dissolved constituents discharged with river waters in a northwestern direction. This process significantly influences water quality and causes active sediment load at large segments of the narrow shelf at the northeastern part of the Black Sea compared to average climatic discharge conditions.

  17. Small river plumes off the northeastern coast of the Black Sea under average climatic and flooding discharge conditions

    Directory of Open Access Journals (Sweden)

    A. Osadchiev

    2017-06-01

    Full Text Available This study focuses on the impact of discharges of small rivers on the delivery and fate of fluvial water and suspended matter at the northeastern part of the Black Sea under different local precipitation conditions. Several dozens of mountainous rivers flow into the sea at the study region, and most of them, except for several of the largest, have little annual runoff and affect adjacent coastal waters to a limited extent under average climatic conditions. However, the discharges of these small rivers are characterized by a quick response to precipitation events and can significantly increase during and shortly after heavy rains, which are frequent in the considered area. The delivery and fate of fluvial water and terrigenous sediments at the study region, under average climatic and rain-induced flooding conditions, were explored and compared using in situ data, satellite imagery, and numerical modeling. It was shown that the point-source spread of continental discharge dominated by several large rivers under average climatic conditions can change to the line-source discharge from numerous small rivers situated along the coast in response to heavy rains. The intense line-source runoff of water and suspended sediments forms a geostrophic alongshore current of turbid and freshened water, which induces the intense transport of suspended and dissolved constituents discharged with river waters in a northwestern direction. This process significantly influences water quality and causes active sediment load at large segments of the narrow shelf at the northeastern part of the Black Sea compared to average climatic discharge conditions.

  18. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that on the smoothed spatial domain B-bar evaluated cosmological parameters obey Ω-bar B-bar m + Ω-bar B-bar R + Ω-bar B-bar A + Ω-bar B-bar Q 1, where Ω-bar B-bar m , Ω-bar B-bar R and Ω-bar B-bar A correspond to the standard Friedmannian parameters, while Ω-bar B-bar Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  19. Socio-economic impact in a region in the southern part of Jutland by the establishment of a plant for processing of bio ethanol

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Henning; Hjort-Gregersen, K.

    2005-09-15

    The Farmers Association of Southern Jutland took an interest in the establishment of a plant for processing of Ethanol primarily due to the wish to contribute to the business development in the western part of Southern Jutland. A large plant for production of bio ethanol will bring along a significant number of local jobs with positive derived economic effects in the local community. Further the plant will also form the basis for a new possibility of marketing of cereal crops. From asocial point of view the request to produce ethanol and other biomass based propellants are motivated by the international obligation to reduce emission of greenhouse gasses, which primarily originate from energy production from conventional fossil fuels. A certain amount of fossil fuels is required in the production of crops, but it has been estimated that the net emission of CO{sub 2} by production of ethanol only constitutes 10% of the emission by fossil energy. (au)

  20. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column

    International Nuclear Information System (INIS)

    Magdeleine, S.

    2009-11-01

    This work is a part of a long term project that aims at using two-phase Direct Numerical Simulation (DNS) in order to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It could be subdivided in two parts: Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two phase flows called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code called Trio U , a set of various cases is used to validate this model. Then, special test are made in order to optimize the model for our particular bubbly flows. Thus we showed the capacity of the ISS model to produce a cheap pertinent solution. Secondly, we use the ISS model to perform simulations of bubbly flows in column. Results of these simulations are averaged to obtain quantities that appear in mass, momentum and interfacial area density balances. Thus, we processed to an a priori test of a complete one dimensional averaged model.We showed that this model predicts well the simplest flows (laminar and monodisperse). Moreover, the hypothesis of one pressure, which is often made in averaged model like CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. At the opposite, without a polydisperse model, the drag is over-predicted and the uncorrelated A i flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  1. Average and local structure of α-CuI by configurational averaging

    International Nuclear Information System (INIS)

    Mohn, Chris E; Stoelen, Svein

    2007-01-01

    Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 A in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 A. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs

  2. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  3. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence, thus to improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory which uses the small-scale and large-scale spatial filters, and our previously presented expression that shows the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Irradiance flux variance variations are examined versus the oceanic turbulence parameters and the receiver aperture diameter are examined in strong oceanic turbulence. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.

  4. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  5. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...

  6. Working part-time: (not) a problem?

    OpenAIRE

    Saskia Keuzenkamp; Carlien Hillebrink; Wil Portegijs; Babette Pouwels

    2009-01-01

    Original title: Deeltijd (g)een probleem. Three-quarters of working women in the Netherlands work part-time. More than half these women are in small part-time jobs (less than 25 hours per week). The government wants to raise the average working hours of women. A key question is then how much scope there is for women to increase their working hours. This report explores this issue from three angles. First it looks at the role played by employers in increasing the working hours of women and at ...

  7. FEATURES CONCERNING THE ESTABLISHMENT OF AUTHORIZED INDIVIDUAL AND FAMILY FIRMS

    Directory of Open Access Journals (Sweden)

    CLAUDIA ISAC

    2014-12-01

    Full Text Available This paper presents recent legislative changes relating to the establishment and organization of small firms as: The individual firm, the family firm, Authorized individuals (PFA. Thus, in the first part of the paper I present the main features and advantages of the three types of firms, and a comparison between them. The paper continues with the necessary documents for setting up the companies and highlights their role in economic advances. In the second part of the paper, I did a statistical analysis of the evolution of the number of firms of this type and the sectors in which they operate.

  8. Continent-wide risk assessment for the establishment of nonindigenous species in Antarctica

    Science.gov (United States)

    Chown, Steven L.; Huiskes, Ad H. L.; Gremmen, Niek J. M.; Lee, Jennifer E.; Terauds, Aleks; Crosbie, Kim; Frenot, Yves; Hughes, Kevin A.; Imura, Satoshi; Kiefer, Kate; Lebouvier, Marc; Raymond, Ben; Tsujimoto, Megumu; Ware, Chris; Van de Vijver, Bart; Bergstrom, Dana Michelle

    2012-01-01

    Invasive alien species are among the primary causes of biodiversity change globally, with the risks thereof broadly understood for most regions of the world. They are similarly thought to be among the most significant conservation threats to Antarctica, especially as climate change proceeds in the region. However, no comprehensive, continent-wide evaluation of the risks to Antarctica posed by such species has been undertaken. Here we do so by sampling, identifying, and mapping the vascular plant propagules carried by all categories of visitors to Antarctica during the International Polar Year's first season (2007–2008) and assessing propagule establishment likelihood based on their identity and origins and on spatial variation in Antarctica's climate. For an evaluation of the situation in 2100, we use modeled climates based on the Intergovernmental Panel on Climate Change's Special Report on Emissions Scenarios Scenario A1B [Nakićenović N, Swart R, eds (2000) Special Report on Emissions Scenarios: A Special Report of Working Group III of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, UK)]. Visitors carrying seeds average 9.5 seeds per person, although as vectors, scientists carry greater propagule loads than tourists. Annual tourist numbers (∼33,054) are higher than those of scientists (∼7,085), thus tempering these differences in propagule load. Alien species establishment is currently most likely for the Western Antarctic Peninsula. Recent founder populations of several alien species in this area corroborate these findings. With climate change, risks will grow in the Antarctic Peninsula, Ross Sea, and East Antarctic coastal regions. Our evidence-based assessment demonstrates which parts of Antarctica are at growing risk from alien species that may become invasive and provides the means to mitigate this threat now and into the future as the continent's climate changes. PMID:22393003

  9. Light-cone averaging in cosmology: formalism and applications

    International Nuclear Information System (INIS)

    Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.

    2011-01-01

    We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted ''geodesic light-cone'' coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called ''redshift drift'' in a generic inhomogeneous Universe

  10. VULNERABILITY OF PART TIME EMPLOYEES

    Directory of Open Access Journals (Sweden)

    Raluca Dimitriu

    2015-11-01

    Full Text Available The employee who concluded a part-time contract is the employee whose normal working hours, calculated weekly or as monthly average, is lower than the number of normal working hours of a comparable full-time employee. Part-time workers generally have the same legal status as full time workers. In fact, the vulnerability of this category of workers is not necessarily legal but rather economic: income - in proportion to the work performed, may be insufficient to cover the needs of living. However, such vulnerability may also have a certain cultural component: in some societies, professional identity is determined by the length of working hours. Also, part time work may hide many types of indirect discrimination.As a result, the part-time contract requires more than a protective legislation: it requires a strategy. This paper proposes a number of milestones of such a strategy, as well as some concrete de lege ferenda proposals.

  11. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g, the problem of building optimal prefix code [1] and a blood test study in assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kn pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.

  12. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, and observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the tral time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need in discriminating various processes in studies of plume dispersion

  13. Basic Energy Conservation and Management--Part 2: HVAC

    Science.gov (United States)

    Krueger, Glenn

    2012-01-01

    Reducing school district energy expenditures has become a universal goal, and new technologies have brought greater energy efficiencies to the school environment. In Part 1 of this two-part series, the author discussed the steps required to establish an energy conservation and management program with an emphasis on lighting. In this article, he…

  14. Variation in Part-Time Work among Pediatric Subspecialties.

    Science.gov (United States)

    Freed, Gary L; Boyer, Debra M; Van, Kenton D; Macy, Michelle L; McCormick, Julie; Leslie, Laurel K

    2018-04-01

    To assess the part-time workforce and average hours worked per week among pediatric subspecialists in the 15 medical subspecialties certified by the American Board of Pediatrics. We examined data from pediatric subspecialists who enrolled in Maintenance of Certification with the American Board of Pediatrics from 2009 to 2015. Data were collected via an online survey. Providers indicated whether they worked full time or part time and estimated the average number of hours worked per week in clinical, research, education, and administrative tasks, excluding time on call. We calculated and compared the range of hours worked by those in full- and part-time positions overall, by demographic characteristics, and by subspecialty. Overall, 9.6% of subspecialists worked part time. There was significant variation in part-time employment rates between subspecialties, ranging from 3.8% among critical care pediatricians to 22.9% among developmental-behavioral pediatricians. Women, American medical graduates, and physicians older than 70 years of age reported higher rates of part-time employment than men, international medical graduates, and younger physicians. There was marked variation in the number of hours worked across subspecialties. Most, but not all, full-time subspecialists reported working at least 40 hours per week. More than one-half of physicians working part time in hematology and oncology, pulmonology, and transplant hepatology reported working at least 40 hours per week. There are unique patterns of part-time employment and hours worked per week among pediatric medical subspecialists that make simple head counts inadequate to determine the effective workforce. Our findings are limited to the 15 American Board of Pediatrics-certified medical subspecialties. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Mass determination of moment magnitudes M w and establishing the relationship between M w and M L for moderate and small Kamchatka earthquakes

    Science.gov (United States)

    Abubakirov, I. R.; Gusev, A. A.; Guseva, E. M.; Pavlov, V. M.; Skorkina, A. A.

    2018-01-01

    The average relationship is established between the basic magnitude for the Kamchatka regional catalog, M L , and modern moment magnitude M w. The latter is firmly tied to the value of the source seismic moment M 0 which has a direct physical meaning. M L magnitude is not self-reliant but is obtained through the conversion of the traditional Fedotov's S-wave energy class, K S1,2 F68 . Installation of the digital seismographic network in Kamchatka in 2006-2010 permitted mass estimates of M 0 and M w to be obtained from the regional data. In this paper we outline a number of techniques to estimate M 0 for the Kamchatka earthquakes using the waveforms of regional stations, and then compare the obtained M w estimates with each other and with M L , based on several hundred earthquakes that took place in 2010-2014. On the average, for M w = 3.0-6.0, M w = M L -0.40; this relationship allows obtaining M w estimates (proxy- M w) for a large part of the regional earthquake catalog with M L = 3.4-6.4 ( M w = 3.0-6.0).

  16. Three-dimensional topography of the gingival line of young adult maxillary teeth: curve averaging using reverse-engineering methods.

    Science.gov (United States)

    Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo

    2011-01-01

    This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.

  17. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, U-bar{sub P}, the average, U-bar, the effective, U{sub eff} or the maximum peak, U{sub P} tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average U-bar or the average peak, U-bar{sub p} voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k{sub PPV,kVp} and the average k{sub PPV,Uav} conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equation and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated - according to the proposed method - PPV values were less than 2%. Practical aspects on the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from U-bar{sub p} and U-bar measurement. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  18. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  19. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...

  20. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    Full Text Available The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s and fast (6°/s speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5.

  1. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  2. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  3. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  4. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  5. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  6. Establishing Local Reference Dose Values and Optimisation Strategies

    International Nuclear Information System (INIS)

    Connolly, P.; Moores, B.M.

    2000-01-01

    The revised EC Patient Directive 97/43 EURATOM introduces the concepts of clinical audit, diagnostic reference levels and optimisation of radiation protection in diagnostic radiology. The application of reference dose levels in practice involves the establishment of reference dose values as actual measurable operational quantities. These values should then form part of an ongoing optimisation and audit programme against which routine performance can be compared. The CEC Quality Criteria for Radiographic Images provides guidance reference dose values against which local performance can be compared. In many cases these values can be improved upon quite considerably. This paper presents the results of a local initiative in the North West of the UK aimed at establishing local reference dose values for a number of major hospital sites. The purpose of this initiative is to establish a foundation for both optimisation strategies and clinical audit as an ongoing and routine practice. The paper presents results from an ongoing trial involving patient dose measurements for several radiological examinations upon the sites. The results of an attempt to establish local reference dose values from measured dose values and to employ them in optimisation strategies are presented. In particular emphasis is placed on the routine quality control programmes necessary to underpin this strategy including the effective data management of results from such programmes and how they can be employed to optimisation practices. (author)

  7. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  8. 7 CFR 3016.1 - Purpose and scope of this part.

    Science.gov (United States)

    2010-01-01

    ... AND LOCAL GOVERNMENTS General § 3016.1 Purpose and scope of this part. This part establishes uniform... 7 Agriculture 15 2010-01-01 2010-01-01 false Purpose and scope of this part. 3016.1 Section 3016.1 Agriculture Regulations of the Department of Agriculture (Continued) OFFICE OF THE CHIEF FINANCIAL OFFICER...

  9. Causality re-established.

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2018-07-13

    Causality has never gained the status of a 'law' or 'principle' in physics. Some recent literature has even popularized the false idea that causality is a notion that should be banned from theory. Such misconception relies on an alleged universality of the reversibility of the laws of physics, based either on the determinism of classical theory, or on the multiverse interpretation of quantum theory, in both cases motivated by mere interpretational requirements for realism of the theory. Here, I will show that a properly defined unambiguous notion of causality is a theorem of quantum theory, which is also a falsifiable proposition of the theory. Such a notion of causality appeared in the literature within the framework of operational probabilistic theories. It is a genuinely theoretical notion, corresponding to establishing a definite partial order among events, in the same way as we do by using the future causal cone on Minkowski space. The notion of causality is logically completely independent of the misidentified concept of 'determinism', and, being a consequence of quantum theory, is ubiquitous in physics. In addition, as classical theory can be regarded as a restriction of quantum theory, causality holds also in the classical case, although the determinism of the theory trivializes it. I then conclude by arguing that causality naturally establishes an arrow of time. This implies that the scenario of the 'block Universe' and the connected 'past hypothesis' are incompatible with causality, and thus with quantum theory: they are both doomed to remain mere interpretations and, as such, are not falsifiable, similar to the hypothesis of 'super-determinism'.This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  10. Provenance variations of scots pine (Pinus sylvestris L.) in the Southern PART of Turkey

    International Nuclear Information System (INIS)

    Gulcu, S.; Bilir, N.

    2015-01-01

    Tree height, basal diameter, stem form, number, angle and diameter of branches were assessed in eight-year-old provenance test established by 30 seed sources of Scots pine (Pinus sylvestris L.) at Aydogmus and Kemer experimental sites of Southern part of Turkey. Growth of the provenances was also compared to two native species (Taurus cedar- Cedrus libani A. Rich and Black pine-Pinus nigra Arnold.) of the region. Variations within provenance and among provenances, and relations among the traits were estimated. There were large differences (p <= 0.05) within provenance and among provenances for the traits, while sites showed similar (0.05 <= p) performance for tree height and stem form. For instance, average of tree height was 181 cm and varied between 138.3 cm and 229.8 cm in provenances of Aydogmus site, it was 184 cm and ranged from 130 cm to 246.1 cm in that of Kemer site. Averages of tree height of a provenance were 144.4 cm in Aydogmus and 194.5 cm in Kemer. Individual tree height of the provenance varied between 69 cm and 267 cm, and ranged from 51 cm to 280 cm in sites. Averages of tree height were 143.2 cm in Black pine 145.6 cm in Taurus cedar which were natural species of the region. There were mostly positive and significant (p <= 0.05) correlations among the traits. Results of the study were discussed for new plantations and breeding of the species. (author)

  11. Flow evolution of a turbulent submerged two-dimensional rectangular free jet of air. Average Particle Image Velocimetry (PIV) visualizations and measurements

    International Nuclear Information System (INIS)

    Gori, Fabio; Petracci, Ivano; Angelino, Matteo

    2013-01-01

    Highlights: • Zone of flow establishment contains a newly identified undisturbed region of flow. • In the undisturbed region of flow the velocity profile is similar to the exit one. • In undisturbed region of flow the height of average PIV visualizations is constant. • In the undisturbed region of flow the turbulence on the centerline is equal to exit. • Length of undisturbed region of flow decreases with Reynolds number increase. -- Abstract: The paper presents average flow visualizations and measurements, obtained with the Particle Image Velocimetry (PIV) technique, of a submerged rectangular free jet of air in the range of Reynolds numbers from Re = 35,300 to Re = 2200, where the Reynolds number is defined according to the hydraulic diameter of a rectangular slot of height H. According to the literature, just after the exit of the jet there is a zone of flow, called zone of flow establishment, containing the region of mixing fluid, at the border with the stagnant fluid, and the potential core, where velocity on the centerline maintains a value almost equal to the exit one. After this zone is present the zone of established flow or fully developed region. The goal of the paper is to show, with average PIV visualizations and measurements, that, before the zone of flow establishment is present a region of flow, never mentioned by the literature and called undisturbed region of flow, with a length, L U , which decreases with the increase of the Reynolds number. The main characteristics of the undisturbed region is the fact that the velocity profile maintains almost equal to the exit one, and can also be identified by a constant height of the average PIV visualizations, with length, L CH , or by a constant turbulence on the centerline, with length L CT . The average PIV velocity and turbulence measurements are compared to those performed with the Hot Film Anemometry (HFA) technique. The average PIV visualizations show that the region of constant height has

  12. Establishment of data base files of thermodynamic data developed by OECD/NEA. Part 4. Addition of thermodynamic data for iron, tin and thorium

    International Nuclear Information System (INIS)

    Yoshida, Yasushi; Kitamura, Akira

    2014-12-01

    Thermodynamic data for compounds and complexes of elements with auxiliary species specialized in modeling requirements for safety assessments of radioactive waste disposal systems have been developed by the Thermochemical Data Base (TDB) project of the Nuclear Energy Agency in the Organization for Economic Co-operation and Development (OECD/NEA). Recently, thermodynamic data for aqueous complexes, solids and gases of thorium, tin and iron (Part 1) have been published in 2008, 2012 and 2013, respectively. These thermodynamic data have been selected on the basis of NEA’s guidelines which describe peer review and data selection, extrapolation to zero ionic strength, assignment of uncertainty, and temperature correction; therefore the selected data are considered to be reliable. The reliability of selected thermodynamic data of TDB developed by Japan Atomic Energy Agency (JAEA-TDB) has been confirmed by comparing with selected data by the NEA. For this comparison, text files of the selected data on some geochemical calculation programs are required. In the present report, the database files for the NEA’s TDB with addition of selected data for iron, tin and thorium to the previous files have been established for use of PHREEQC, Geochemist’s Workbench and EQ3/6. In addition, as an example of confirmation of quality, dominant species in iron TDB were compared in Eh-pH diagram and differences between JAEA-TDB and NEA-TDB were shown. Data base files established in the present study will be at the Website of thermodynamic, sorption and diffusion database in JAEA (http://migrationdb.jaea.go.jp/). A CD-ROM is attached as an appendix. (J.P.N.)

  13. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80.759 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.759 Average terrain elevation. (a)(1) Draw radials...

  14. College grade point average as a personnel selection device: ethnic group differences and potential adverse impact.

    Science.gov (United States)

    Roth, P L; Bobko, P

    2000-06-01

    College grade point average (GPA) is often used in a variety of ways in personnel selection. Unfortunately, there is little empirical research literature in human resource management that informs researchers or practitioners about the magnitude of ethnic group differences and any potential adverse impact implications when using cumulative GPA for selection. Data from a medium-sized university in the Southeast (N = 7,498) indicate that the standardized average Black-White difference for cumulative GPA in the senior year is d = 0.78. The authors also conducted analyses at 3 GPA screens (3.00, 3.25, and 3.50) to demonstrate that employers (or educators) might face adverse impact at all 3 levels if GPA continues to be implemented as part of a selection system. Implications and future research are discussed.

  15. Monitoring process hygiene in Serbian retail establishments

    Science.gov (United States)

    Vesković Moračanin, S.; Baltić, T.; Milojević, L.

    2017-09-01

    The present study was conducted to estimate the effectiveness of sanitary procedures on food contact surfaces and food handlers’ hands in Serbian retail establishments. For that purpose, a total of 970 samples from food contact surfaces and 525 samples from workers’ hands were microbiologically analyzed. Results of total aerobic plate count and total Enterobacteriaceae count showed that the implemented washing and disinfection procedures, as a part of HACCP plans, were not effective enough in most retail facilities. Constant and intensive education of employees on proper implementation of sanitation procedures are needed in order to ensure food safety in the retail market.

  16. 77 FR 7109 - Establishment of User Fees for Filovirus Testing of Nonhuman Primate Liver Samples

    Science.gov (United States)

    2012-02-10

    ... quarantine standards under 42 CFR part 71 for continued registration as an importer of NHPs (2). On March 23... reagents requires a biosafety level 4 laboratory (BSL-4). A BSL- 4 laboratory is also required during part..., establishment of standards, and regulation. Full costs are determined based on the best available records of the...

  17. Tree Seedlings Establishment Across a Hydrologic Gradient in a Bottomland Restoration

    Science.gov (United States)

    Randall K. Kolka; Carl C. Trettin; E.A. Nelson; W.H. Conner

    1998-01-01

    Seedling establishment and survival on the Savannah River Site in South Carolina is being monitored as part of the Pen Branch Bottomland Restoration Project. Bottomland tree species were planted from 1993-1995 across a hydrologic gradient which encompasses the drier upper floodplain corridor, the lower floodplain corridor and the continuously inundated delta. Twelve...

  18. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  19. The Part-Time Pay Penalty for Women in Britain

    OpenAIRE

    Manning, Alan; Petrongolo, Barbara

    2007-01-01

    Women in Britain who work part-time have, on average, hourly earnings about 25% less than that of women working full-time. This gap has widened greatly over the past 30 years. This paper tries to explain this part-time pay penalty. It shows that a sizeable part of the penalty can be explained by the differing characteristics pf FT and PT women. Inclusion of standard demographics halves the estimate of the pay penalty. But inclusion of occupation makes the pay penalty very small, suggesting th...

  20. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...

  1. Social constructivism and positive epistemology: On establishing of scientific facts

    Directory of Open Access Journals (Sweden)

    Brdar Milan

    2009-01-01

    Full Text Available In this article author is dealing with the problem of genesis and establishing of scientific fact. In the first part is given philosophical treatment of the problem and its futility, for the reason given in Minhausen trilema as a conclusion of this kind of treatment. Second part of the article contains the review of Ludvik Fleck's positive epistemology and of social constructivism. After the short historical review of the roots of social constructivism in the Mannheim's sociology of knowledge, author describes how it goes with erstablishing of scientific fact. Solution is set forth by elaboration of triad physis-nomos-logos within the constitutive elements of society known as authority-institution-reason-discourse. Author's concluding thesis claims that there is no fact without convention, i.e. that nature and scientific fact are connected by institution (scientific or social. It does not mean that there is no sound established facts but that obstacle toward it is provided just with institutional mediation that make condition of possibility of very truth. Therefore, the question: what among known facts are really truthful amounts to eternal problem of science and common sense in every epoch of history. .

  2. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problem of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions to the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  3. DOE underground storage tank waste remediation chemical processing hazards. Part I: Technology dictionary

    International Nuclear Information System (INIS)

    DeMuth, S.F.

    1996-10-01

    This document has been prepared to aid in the development of Regulating guidelines for the Privatization of Hanford underground storage tank waste remediation. The document has been prepared it two parts to facilitate their preparation. Part II is the primary focus of this effort in that it describes the technical basis for established and potential chemical processing hazards associated with Underground Storage Tank (UST) nuclear waste remediation across the DOE complex. The established hazards involve those at Sites for which Safety Analysis Reviews (SARs) have already been prepared. Potential hazards are those involving technologies currently being developed for future applications. Part I of this document outlines the scope of Part II by briefly describing the established and potential technologies. In addition to providing the scope, Part I can be used as a technical introduction and bibliography for Regulatory personnel new to the UST waste remediation, and in particular Privatization effort. Part II of this document is not intended to provide examples of a SAR Hazards Analysis, but rather provide an intelligence gathering source for Regulatory personnel who must eventually evaluate the Privatization SAR Hazards Analysis

  4. The impact of intermediate structure on the average fission cross sections

    International Nuclear Information System (INIS)

    Bouland, O.; Lynn, J.E.; Talou, P.

    2014-01-01

    This paper discusses two common approximations used to calculate average fission cross sections over the compound energy range: the disregard of the W II factor and the Porter-Thomas hypothesis made on the double barrier fission width distribution. By reference to a Monte Carlo-type calculation of formal R-matrix fission widths, this work estimates an overall error ranging from 12% to 20% on the fission cross section in the case of the 239 Pu fissile isotope in the energy domain from 1 to 100 keV with very significant impact on the competing capture cross section. This work is part of a recent and very comprehensive formal R-matrix study over the Pu isotope series and is able to give some hints for significant accuracy improvements in the treatment of the fission channel. (authors)

  5. The average carbon-stock approach for small-scale CDM AR projects

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Quijano, J.F.; Muys, B. [Katholieke Universiteit Leuven, Laboratory for Forest, Nature and Landscape Research, Leuven (Belgium); Schlamadinger, B. [Joanneum Research Forschungsgesellschaft mbH, Institute for Energy Research, Graz (Austria); Emmer, I. [Face Foundation, Arnhem (Netherlands); Somogyi, Z. [Forest Research Institute, Budapest (Hungary); Bird, D.N. [Woodrising Consulting Inc., Belfountain, Ontario (Canada)

    2004-06-15

    In many afforestation and reforestation (AR) projects harvesting with stand regeneration forms an integral part of the silvicultural system and satisfies local timber and/or fuelwood demand. Especially clear-cut harvesting will lead to an abrupt and significant reduction of carbon stocks. The smaller the project, the more significant the fluctuations of the carbon stocks may be. In the extreme case a small-scale project could consist of a single forest stand. In such case, all accounted carbon may be removed during a harvesting operation and the time-path of carbon stocks will typically look as in the hypothetical example presented in the report. For the aggregate of many such small-scale projects there will be a constant benefit to the atmosphere during the projects, due to averaging effects.

  6. Dispersion microclimatology of the Whiteshell Nuclear Research Establishment: 1964-1976

    International Nuclear Information System (INIS)

    Davis, P.A.; Reimer, A.

    1980-10-01

    This report discusses the analysis of data collected on the meteorological tower at the Whiteshell Nuclear Research Establishment (WNRE) during the period 1964-1976. The time-averaged characteristics of wind speed, wind direction, temperature and atmospheric stability are described, and the implications which these chacteristics have for the dispersion of a contaminant released to the atmosphere from the WNRE site are discussed. A comparison of the present results with those of a previous two-year analysis of WNRE measurements suggests that a short-term climatology is sufficiently representative of long-term conditions to provide a reliable base for dispersion predictions. (auth)

  7. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse the two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However there are important statistical differences in dealing with the two applications. In the two-phase fluid mechanics, there are two main types of average: the phasic average and the mass weighed average [4]. The kinetic theories assume that the size of atoms is so small, that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no more true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grain is constant and the two averages coincide; on the contrary, for more than one realization, n is no more constant and the two types of average lead to different results. Therefore, the ensamble average used in the standard kinetic theory (which usually is the phasic average) is suitable for the single realization, but not for several realization, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particles size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to solve the intermediate scale, by applying the mass weighted average, when dealing with more than one realizations. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  8. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  9. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE, are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random-weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  10. Average L-shell fluorescence, Auger, and electron yields

    International Nuclear Information System (INIS)

    Krause, M.O.

    1980-01-01

    The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for 40 3 subshell yields in most cases of inner-shell ionization

  11. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...

  12. Genetic variation facilitates seedling establishment but not population growth rate of a perennial invader.

    Science.gov (United States)

    Li, Shou-Li; Vasemägi, Anti; Ramula, Satu

    2016-01-01

    Assessing the demographic consequences of genetic variation is fundamental to invasion biology. However, genetic and demographic approaches are rarely combined to explore the effects of genetic variation on invasive populations in natural environments. This study combined population genetics, demographic data and a greenhouse experiment to investigate the consequences of genetic variation for the population fitness of the perennial, invasive herb Lupinus polyphyllus. Genetic and demographic data were collected from 37 L. polyphyllus populations representing different latitudes in Finland, and genetic variation was characterized based on 13 microsatellite loci. Associations between genetic variation and population size, population density, latitude and habitat were investigated. Genetic variation was then explored in relation to four fitness components (establishment, survival, growth, fecundity) measured at the population level, and the long-term population growth rate (λ). For a subset of populations genetic variation was also examined in relation to the temporal variability of λ. A further assessment was made of the role of natural selection in the observed variation of certain fitness components among populations under greenhouse conditions. It was found that genetic variation correlated positively with population size, particularly at higher latitudes, and differed among habitat types. Average seedling establishment per population increased with genetic variation in the field, but not under greenhouse conditions. Quantitative genetic divergence (Q(ST)) based on seedling establishment in the greenhouse was smaller than allelic genetic divergence (F'(ST)), indicating that unifying selection has a prominent role in this fitness component. Genetic variation was not associated with average survival, growth or fecundity measured at the population level, λ or its variability. The study suggests that although genetic variation may facilitate plant invasions by

  13. Moving East: how the transnational tobacco industry gained entry to the emerging markets of the former Soviet Union-part I: establishing cigarette imports.

    Science.gov (United States)

    Gilmore, A B; McKee, M

    2004-06-01

    To identify British American Tobacco's (BAT) reasons for targeting the former Soviet Union following its collapse in 1991 and the initial strategies BAT used to enter the region. Analysis of tobacco industry documents held at the Guildford BAT archive. Desire to expand to new markets was based in part on the decline in old markets. The large population, proximity to China, scope to expand sales to women and, in Central Asia, a young population with high growth rates made the former Soviet Union particularly attractive. High consumption rates and unfilled demand caused by previous shortages offered potential for rapid returns on investment. A series of steps were taken to penetrate the markets with the initial focus on establishing imports. The documents suggest that BAT encouraged the use of aid money and barter trade to fund imports and directed the smuggling of cigarettes which graduated from an opportunistic strategy to a highly organised operation. In establishing a market presence, promotion of BAT's brands and corporate image were paramount, and used synonymously to promote both the cigarettes and the company. The tobacco industry targeted young people and women. It used the allure of western products to promote its brands and brand stretching and corporate imagery to pre-empt future marketing restrictions. BAT used the chaotic conditions in the immediate post-transition period in the former Soviet Union to exploit legislative loopholes and ensure illegal cigarette imports. Governments of countries targeted by the tobacco industry need to be aware of industry tactics and develop adequate tobacco control policies in order to prevent the exploitation of vulnerable populations. Marketing restrictions that focus on advertising without restricting the use of brand or company promotions will have a limited impact.

  14. Vibrations in force-and-mass disordered alloys in the average local-information transfer approximation. Application to Al-Ag

    International Nuclear Information System (INIS)

    Czachor, A.

    1979-01-01

    The configuration-averaged displacement-displacement Green's function, derived in the locator-based approximation accounting for average transfer of information on local coupling and mass, has been applied to study the force-and-mass-disorder induced modifications of phonon dispersion relations in substitutional alloys of cubic structures. In this approach the translational invariance condition is obeyed whereas damping is neglected. The force-disorder was found to lead to additional splitting of phonon curves besides that due to mass-disorder, even in the small impurity-concentration case; at larger concentrations the number of splits (frequency gaps) should be still greater. The use of a quasi-locator in the Green's function derivation allows one to partly reconcile the present results with those of the average t-matrix approximation. The experimentally observed splitting in the [100]T phonon dispersion curve for Al-Ag alloys has been interpreted in terms of the above theory and of a quasi-mass of heavy impurity atoms. (Author)

  15. Shrinkage calibration method for μPIM manufactured parts

    DEFF Research Database (Denmark)

    Quagliotti, Danilo; Tosello, Guido; Salaga, J.

    2016-01-01

    Five green and five sintered parts of a micro mechanical component, produced by micro powder injection moulding, were measured using an optical coordinate measuring machine. The aim was to establish a method for quality assurance of the final produced parts. Initially, the so called “green” parts...... were compared with the sintered parts (final products) calculating the percentage of shrinkage after sintering. Successively, the expanded uncertainty of the measured dimensions were evaluated for each single part as well as for the overall parts. Finally, the estimated uncertainty for the shrinkage...... was evaluated propagating the expanded uncertainty previously stated and considering green and sintered parts correlated. Results showed that the proposed method can be effective instating tolerances if it is assumed that the variability on the dimensions induced by the shrinkage equals the propagated expanded...

  16. 78 FR 6762 - Food and Drug Administration Food Safety Modernization Act: Proposed Rules To Establish Standards...

    Science.gov (United States)

    2013-01-31

    .... For example, applying the concept of Hazard Analysis and Critical Control Point (HACCP) that was pioneered by industry in the late 1960s, FDA established HACCP-based regulations for seafood (21 CFR part... Service instituted HACCP-based rules for meat and poultry (9 CFR part 417) (61 FR 38806, July 25, 1996...

  17. The Average Network Flow Problem: Shortest Path and Minimum Cost Flow Formulations, Algorithms, Heuristics, and Complexity

    Science.gov (United States)

    2012-09-13

    46, 1989. [75] S. Melkote and M.S. Daskin . An integrated model of facility location and transportation network design. Transportation Research Part A ... a work of the U.S. Government and is not subject to copyright protection in the United States. AFIT/DS/ENS/12-09 THE AVERAGE NETWORK FLOW PROBLEM...focused thinking (VFT) are used sparingly, as is the case across the entirety of the supply chain literature. We provide a VFT tutorial for supply chain

  18. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  19. Alternative strategies for Medicare payment of outpatient prescription drugs--Part B and beyond.

    Science.gov (United States)

    Danzon, Patricia M; Wilensky, Gail R; Means, Kathleen E

    2005-03-01

    Reimbursement options for pharmaceuticals reimbursed under Medicare Part B (physician-dispensed drugs) are changing and the new comprehensive Part D Medicare outpatient drug benefit brings further changes. The Medicare Prescription Drug, Improvement and Modernization Act of 2003 (MMA) replaces traditional policy, of reimbursing Part B drugs at 95% of average wholesale price (AWP, a list price), with a percentage markup over the manufacturer's average selling price; in 2005 an indirect competitive procurement option will be introduced. In our view, although AWP-based reimbursement has been fraught with problems in the past, these could be fixed by constraining growth in AWP and periodically adjusting the discount off AWP. With these revisions, an AWP-based rule would preserve incentives for competitive discounting and deliver savings to Medicare. By contrast, basing Medicare reimbursement on a manufacturer's average selling price undermines incentives for discounting and, like any cost-based reimbursement rule, may result in higher prices to both public and private purchasers. Indirect competitive procurement for drugs alone, using specialty pharmacies, pharmacy benefit managers, or prescription drug plans, is unlikely to constrain costs to acceptable levels unless contractors retain flexibility to use standard benefit management tools. Folding Part B and Part D into comprehensive contracting with health plans for full health services is likely to offer the most efficient approach to managing the drug benefit.

  20. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  1. 36 CFR 1281.3 - What definitions apply to this part?

    Science.gov (United States)

    2010-07-01

    ... part, the term means operating equipment that must be furnished with the new library and included in the calculation of the required endowment. Operating equipment is fundamental to the operation of the...) and established as part of the system of Presidential libraries managed by NARA. Facility operations...

  2. Establishment of limits of detection and decision

    International Nuclear Information System (INIS)

    Mende, O.; Michel, R.

    1995-01-01

    The purpose of this project was to develop and test procedures to establish limits of decision and detection for spectrometric nuclear radiation measurements. Beside the determination of the limits of application of DIN 25482 part 2 and 5 - both primarily suitable for high resoluted spectra areas -the statistical model was expanded in such a way that henceforth blanks and influences of sample treatment can also be taken into account; the corresponding procedures to calculate the limits of decision and detection have a high precision. Additional procedures of calculation were developed to take the special characteristics of the analysis of complex spectra areas into account. (orig.) [de

  3. Establishment of quality assessment standard for mammographic equipment: evaluation of phantom and clinical images

    International Nuclear Information System (INIS)

    Lee, Sung Hoon; Choe, Yeon Hyeon; Chung, Soo Young

    2005-01-01

    The purpose of this study was to establish a quality standard for mammographic equipment Korea and to eventually improve mammographic quality in clinics and hospitals throughout Korea by educating technicians and clinic personnel. For the phantom test and on site assessment, we visited 37 sites and examined 43 sets of mammographic equipment. Items that were examined include phantom test, radiation dose measurement, developer assessment, etc. The phantom images were assessed visually and by optical density measurements. For the clinical image assessment, clinical images from 371 sites were examined following the new Korean standard for clinical image evaluation. The items examined include labeling, positioning, contrast, exposure, artifacts, collimation among others. Quality standard of mammographic equipment was satisfied in all equipment on site visits. Average mean glandular dose was 114.9 mRad. All phantom image test scores were over 10 points (average, 10.8 points). However, optical density measurements were below 1.2 in 9 sets of equipment (20.9%). Clinical image evaluation revealed appropriate image quality in 83.5%, while images from non-radiologist clinics were adequate in 74.6% (91/122), which was the lowest score of any group. Images were satisfactory in 59.0% (219/371) based on evaluation by specialists following the new Korean standard for clinical image evaluation. Satisfactory images had a mean score of 81.7 (1 S.D. =8.9) and unsatisfactory images had a mean score of 61.9 (1 S.D = 11). The correlation coefficient between the two observers was 0.93 (ρ < 0.01) in 49 consecutive cases. The results of the phantom tests suggest that optical density measurements should be performed as part of a new quality standard for mammographic equipment. The new clinical evaluation criteria that was used in this study can be implemented with some modifications for future mammography quality control by the Korean government

  4. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed to computation of time-dependent statistical average gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  5. Electrical method for the measurements of volume averaged electron density and effective coupled power to the plasma bulk

    Science.gov (United States)

    Henault, M.; Wattieaux, G.; Lecas, T.; Renouard, J. P.; Boufendi, L.

    2016-02-01

    Nanoparticles growing or injected in a low pressure cold plasma generated by a radiofrequency capacitively coupled capacitive discharge induce strong modifications in the electrical parameters of both plasma and discharge. In this paper, a non-intrusive method, based on the measurement of the plasma impedance, is used to determine the volume averaged electron density and effective coupled power to the plasma bulk. Good agreements are found when the results are compared to those given by other well-known and established methods.

  6. Establishment of prairies

    International Nuclear Information System (INIS)

    Lotero Cadavid, J.

    2001-01-01

    Are analyzed the establishment of prairies, such as the selection of the species, the factors of the environment, the impact in the establishment and forage production and its relation to the soil, the precipitation, the temperature, the light and the biotic factors. It is indicated that the selection of the species to settle down, is directly related with the climate and the soil and they group to be tolerant to drought, tolerant to flood soils, tolerant to humid soils, tolerant to soils very acids, moderately acids and saline. It is noticed that a bad establishment of the grasses can be due to the bad quality of the seed, a temperature and unfavorable humidity can cause low germination; equally seeds planted very deeply in heavy soils with excess of humidity. Considerations are made about the establishment and growth of the prairies in connection with the germination, cultures, sowing density and sowing on time, as well as for the soil preparation, the sowing in terrestrial mechanic and non mechanic and the use of cultivations forms of low cost and fertilization systems; equally the establishment of leguminous in mixture with gramineous, the renovation of prairies and the establishment of pastures

  7. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.

  8. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    of automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers’ average travel speed over selected sections of the road and is normally called average speed control...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  9. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any

  10. Principles of resonance-averaged gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1981-01-01

    The unambiguous determination of excitation energies, spins, parities, and other properties of nuclear levels is the paramount goal of the nuclear spectroscopist. All developments of nuclear models depend upon the availability of a reliable data base on which to build. In this regard, slow neutron capture gamma-ray spectroscopy has proved to be a valuable tool. The observation of primary radiative transitions connecting initial and final states can provide definite level positions. In particular the use of the resonance-averaged capture technique has received much recent attention because of the claims advanced for this technique (Chrien 1980a, Casten 1980); that it is able to identify all states in a given spin-parity range and to provide definite spin parity information for these states. In view of the importance of this method, it is perhaps surprising that until now no firm analytical basis has been provided which delineates its capabilities and limitations. Such an analysis is necessary to establish the spin-parity assignments derived from this method on a quantitative basis; in other words a quantitative statement of the limits of error must be provided. It is the principal aim of the present paper to present such an analysis. To do this, a historical description of the technique and its applications is presented and the principles of the method are stated. Finally a method of statistical analysis is described, and the results are applied to recent measurements carried out at the filtered beam facilities at the Brookhaven National Laboratory

  11. Structure-activity relationships of pyrethroid insecticides. Part 2. The use of molecular dynamics for conformation searching and average parameter calculation

    Science.gov (United States)

    Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.

    1992-04-01

    Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.

  12. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both used by official state statistics of the Russian Federation and offered by modern researchers.The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index.The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers, describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical.The main result of the research is an option of supplementing the method of calculating average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions of a wide range of organizations, when an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, the situation is frequent when the average salary at the enterprise is difficult to assess objectively because it consists of calculating multiple rates per staff member. In other words, the average salary of

  13. 7 CFR 1437.11 - Average market price and payment factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... average market price by the applicable payment factor (i.e., harvested, unharvested, or prevented planting...

  14. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  15. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.

  16. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged description of atomic systems [1]. In a recent experiment on Fe[2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging with comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3]. M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  17. A Beginner's Guide to BASIC Programming, Part 2.

    Science.gov (United States)

    Hughes, Elizabeth

    1982-01-01

    Discusses a number of important structures which can be used in programming with BASIC, including loops, subroutines, and arrays. To illustrate these structures, a simple grade-averaging program is presented and explained. Commands introduced in Part 1 of the guide are listed in a table. (JL)

  18. Study of the Continuous Improvement Trend for Health, Safety and Environmental Indicators, after Establishment of Integrated Management System (IMS) in a Pharmaceutical Industry in Iran.

    Science.gov (United States)

    Mariouryad, Pegah; Golbabaei, Farideh; Nasiri, Parvin; Mohammadfam, Iraj; Marioryad, Hossein

    2015-10-01

    Nowadays, organizations try to improve their services and consequently adopt management systems and standards which have become key parts in various industries. One of these management systems which have been noticed in the recent years is Integrated Management System that is the combination of quality, health, safety and environment management systems. This study was conducted with the aim of evaluating the improvement trend after establishment of integrated management system for health, safety and environment indicators, in a pharmaceutical industry in Iran. First, during several inspections in different parts of the industry, indicators that should have been noted were listed and then these indicators were organized in 3 domains of health, safety and environment in the form of a questionnaire that followed Likert method of scaling. Also, the weight of each index was resulted from averaging out of 30 managers and the viewpoints of the related experts in the field. Moreover, by checking the documents and evidence of different years (5 contemplation years of this study), the score of each indicator was determined by multiplying the weight and score of the indices and were finally analysed. Over 5 years, scores of health scope indicators, increased from 161.99 to 202.23. Score in the first year after applying the integrated management system establishment was 172.37 in safety part and in the final year increased to 197.57. The changes of environmental scope rates, from the beginning of the program up to the last year increased from 49.24 to 64.27. Integrated management systems help organizations to improve programs to achieve their objectives. Although in this study all trends of health, safety and environmental indicator changes were positive, but at the same time showed to be slow. So, one can suggest that the result of an annual evaluation should be applied in planning future activities for the years ahead.

  19. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...... for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy....

  20. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  1. A fiber orientation-adapted integration scheme for computing the hyperelastic Tucker average for short fiber reinforced composites

    Science.gov (United States)

    Goldberg, Niels; Ospald, Felix; Schneider, Matti

    2017-10-01

    In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.

  2. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.

  3. Career anchors and learning plan (part one

    Directory of Open Access Journals (Sweden)

    Daniela Brečko

    2006-12-01

    Full Text Available The article is divided into three parts. The first part concentrates on how important career is for an individual, organization and society. The author establishes that understanding of career has changed dramatically and does not only refer to climbing up the career ladder, but also moving off or even down the career ladder. The notion of career, as a lifelong and professional path, encompasses all aspects of human personality and their roles acquired through one's life. On basis of vast and longitudinal research, where the author has studied career anchors of individuals, it is the objective of the author to find out on basis of what grounds do the individuals decide to take certain directions in their careers and how learning contributes to such decisions. As a source the author has used Shein's theory of career anchors. Part one describes in greater detail 8 different career anchors and introduces their main features with the findings of the research, which refer to the analysis of professions (work positions and established career anchors. The author thus verifies the hypothesis that career anchors do exist in our area.

  4. Identity Establishment and Capability Based Access Control (IECAC) Scheme for Internet of Things

    DEFF Research Database (Denmark)

    Mahalle, Parikshit N.; Anggorojati, Bayu; Prasad, Neeli R.

    2012-01-01

    Internet of Things (IoT) become discretionary part of everyday life and could befall a threat if security is not considered before deployment. Authentication and access control in IoT is equally important to establish secure communication between devices. To protect IoT from man in middle, replay...... and denial of service attacks, the concept of capability for access control is introduced. This paper presents Identity establishment and capability based access control (IECAC) protocol using ECC (Elliptical Curve Cryptography) for IoT along with protocol evaluation, which protect against the aforementioned...

  5. Wavelength dependence of the effects of turbulence on average refraction angles in occultations by planetary atmospheres

    International Nuclear Information System (INIS)

    Haugstad, B.S.; Eshleman, V.R.

    1979-01-01

    Two recent adjacently published papers on the average effects of turbulence in radio and optical occultation studies of planetary atmospheres appear to disagree on the question of wavelength dependence. It is demonstrated here that in deriving a necessary condition for the applicability of their method. Hubbard and Jokipii neglect a factor which is proportional to the square of the ratio of the atmospheric or local Fresnel zone radius and the inner scale of turbulence. They also fail to establish sufficient conditions, thereby omitting as a further factor the square of the ratio of atmospheric scale height and the local Fresnel zone radius. The total descrepancy, which numerically is typically within several orders of magnitude of 10 11 for radio and 10 7 for optical occultations, means that their results correspond to geometrical optics and not to wave optics as claimed. Thus their results are inherently inapplicable in a discussion of the wavelength dependence of any parameter, such as the bias in the average refraction angle treated by Eshleman and Haugstad. We note that for power spectra characterized by the (--p) exponent of the turbulence wavenumber, the average turbulence-induced bias in refraction angles depends on the radiation wavelength as lambda/sup( p/--4)/2, or as lambda/sup en-dash1/6/ for Kolmogorov turbulence. Other features of the Hubbard-Jokipii analysis are also discussed

  6. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  7. Salecker-Wigner-Peres clock and average tunneling times

    International Nuclear Information System (INIS)

    Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.

    2011-01-01

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  8. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality assured set of average wind Statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A Similar set of statistics was previously compiled for the years 1975-- 1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements and presents wind roses generated by the averaged statistics

  9. Control of underactuated driftless systems using higher-order averaging theory

    OpenAIRE

    Vela, Patricio A.; Burdick, Joel W.

    2003-01-01

    This paper applies a recently developed "generalized averaging theory" to construct stabilizing feedback control laws for underactuated driftless systems. These controls exponentialy stabilize in the average; the actual system may orbit around the average. Conditions for which the orbit collapses to the averaged trajectory are given. An example validates the theory, demonstrating its utility.

  10. Relationship between the Retinal Nerve Fibre Layer (RNFL parameters and Visual field loss in established glaucoma patients in South Indian population

    Directory of Open Access Journals (Sweden)

    Elangovan Suma, Puri K Sanjeev

    2013-10-01

    Full Text Available Purpose: Optical coherence tomography (OCT and Scanning LASER polarimetry (GDX-VCC are newer techniques to analyse retinal nerve fibre loss in glaucoma. This study aims to evaluate the relationship between the Retinal Nerve Fibre Layer(RNFL parameters measured using Stratus-OCT and GDx-VCC and visual field loss by Octopus interzeag perimetry in established glaucoma patients in South Indian Population. Materials and methods: Prospectively planned cross sectional study of 67 eyes of 34 established glaucoma patients on medical management. The mean age of patients was 46.911 years (SD+13.531. A complete ophthalmic examination, automated perimetry with octopus interzeag 1-2-3 perimeter, retinal nerve fibre analysis with GDx VCC and Stratus OCT was done. The differences between the mean RNFL parameters in the presence or absence of field defects were evaluated. Results: The data analysed were mean deviation, loss variance, OCT total average nerve fibre thickness, GDX VCC- TSNIT average and Nerve fibre indicator (NFI.The data were split into two subgroups on the basis of presence or absence of visual field defect and analysed. The difference between the mean value of NFI between the subgroups was highly significant with a p value < 0.01.The OCT parameter Total average nerve fiber layer thickness differed significantly between the two subgroups (p value <0.05. The mean GDx TSNIT average did not differ significantly between the two subgroups. Conclusion: The total average nerve fibre thickness by OCT correlated better with visual field loss than the GDX TSNIT average .Among the GDx parameters, the NFI was found to be a better indicator of visual field damage than the average thickness.

  11. Cubby : Multiscreen Desktop VR Part II

    NARCIS (Netherlands)

    Gribnau, M.W.; Djajadiningrat, J.P.

    2000-01-01

    In this second part of our 'Cubby: Multiscreen Desktop VR' trilogy, we will introduce you to the art of creating a driver to read an Origin Instruments Dynasight input device. With the Dynasight, the position of the head of the user is established so that Cubby can display the correct images on its

  12. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  13. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  14. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  15. Flow and transport simulation of Madeira River using three depth-averaged two-equation turbulence closure models

    Directory of Open Access Journals (Sweden)

    Li-ren Yu

    2012-03-01

    Full Text Available This paper describes a numerical simulation in the Amazon water system, aiming to develop a quasi-three-dimensional numerical tool for refined modeling of turbulent flow and passive transport of mass in natural waters. Three depth-averaged two-equation turbulence closure models, k˜−ε˜,k˜−w˜, and k˜−ω˜ , were used to close the non-simplified quasi-three dimensional hydrodynamic fundamental governing equations. The discretized equations were solved with the advanced multi-grid iterative method using non-orthogonal body-fitted coarse and fine grids with collocated variable arrangement. Except for steady flow computation, the processes of contaminant inpouring and plume development at the beginning of discharge, caused by a side-discharge of a tributary, have also been numerically investigated. The three depth-averaged two-equation closure models are all suitable for modeling strong mixing turbulence. The newly established turbulence models such as the k˜−ω˜ model, with a higher order of magnitude of the turbulence parameter, provide a possibility for improving computational precision.

  16. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  17. Establishing a 'track record': research productivity and nursing academe.

    Science.gov (United States)

    Emden, C

    1998-01-01

    Many nursing academics in Australia are finding to their dismay that an outstanding teaching career and exemplary professional contribution to their field--and a PhD--are not enough to achieve promotion within their university, or secure a new academic post. One must also possess a proven or established 'track record' in research and publication. The operational funding arrangements for Australian universities rely in part on the research productivity of their academic staff members. This places special expectation upon the way academics conduct their scholarly work. Nursing academics are under particular pressure: as relative newcomers to the university scene, most find themselves considered as early career researchers with weak track records. This paper reviews relevant research and draws upon personal experience in the area of research development, to highlight how nursing academics may most strategically establish a research and publication record with a view to career advancement.

  18. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre......-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995......)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...

  19. Avoid the Pitfalls: Benefits of Formal Part C Data System Governance. Revised

    Science.gov (United States)

    Mauzy, Denise; Bull, Bruce; Gould, Tate

    2016-01-01

    Since the initial authorizing legislation for Part C of the Individuals with Disabilities Education Act (IDEA) in 1986, the scope and complexity of data collected by Part C programs have significantly increased. Formal governance establishes responsibility for Part C data and enables program staff to improve the effectiveness of data processes and…

  20. Part-time work among pediatricians expands.

    Science.gov (United States)

    Cull, William L; O'Connor, Karen G; Olson, Lynn M

    2010-01-01

    The objective of this study was to track trends in part-time employment among pediatricians from 2000 to 2006 and to examine differences within subgroups of pediatricians. As part of the Periodic Survey of Fellows, national random samples of American Academy of Pediatrics members were surveyed in 2000, 2003, and 2006. These surveys shared questions concerning working part-time and other practice characteristics. Roughly 1600 pediatricians were included in each random sample. Totals of 812 (51%), 1020 (63%), and 1013 (62%) pediatricians completed the surveys in 2000, 2003, and 2006, respectively. Analyses were limited to nonretired, posttrainee pediatricians. The number of pediatricians who reported that they work part-time increased from 15% in 2000, to 20% in 2003, to 23% in 2006. The pattern of increased part-time work from 2000 to 2006 held for many subgroups, including men, women, pediatricians who were younger than 40 years, pediatricians who were aged >or=50 years, pediatricians who worked in an urban inner city, pediatricians who worked in suburban areas, general pediatricians, and subspecialist pediatricians. Those who were working part-time were more satisfied within their professional and personal activities. Part-time pediatricians worked on average 14.3 fewer hours per week in direct patient care. Increases in part-time work are apparent throughout pediatrics. The possible continued growth of part-time is an important trend within the field of pediatrics that will need to be monitored.

  1. 78 FR 12967 - Establishment of Class A TV Service and Cable Television Rate Regulation; Cost of Service Rules...

    Science.gov (United States)

    2013-02-26

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Parts 73 and 76 [MM Docket No. 00-10; FCC 01-123 and MM Docket No. 93-215; FCC 95-502] Establishment of Class A TV Service and Cable Television Rate Regulation... Federal Communications Commission published requirements related to Establishment of Class A TV Service...

  2. Development and significance of a fetal electrocardiogram recorded by signal-averaged high-amplification electrocardiography.

    Science.gov (United States)

    Hayashi, Risa; Nakai, Kenji; Fukushima, Akimune; Itoh, Manabu; Sugiyama, Toru

    2009-03-01

    Although ultrasonic diagnostic imaging and fetal heart monitors have undergone great technological improvements, the development and use of fetal electrocardiograms to evaluate fetal arrhythmias and autonomic nervous activity have not been fully established. We verified the clinical significance of the novel signal-averaged vector-projected high amplification ECG (SAVP-ECG) method in fetuses from 48 gravidas at 32-41 weeks of gestation and in 34 neonates. SAVP-ECGs from fetuses and newborns were recorded using a modified XYZ-leads system. Once noise and maternal QRS waves were removed, the P, QRS, and T wave intervals were measured from the signal-averaged fetal ECGs. We also compared fetal and neonatal heart rates (HRs), coefficients of variation of heart rate variability (CV) as a parasympathetic nervous activity, and the ratio of low to high frequency (LF/HF ratio) as a sympathetic nervous activity. The rate of detection of a fetal ECG by SAVP-ECG was 72.9%, and the fetal and neonatal QRS and QTc intervals were not significantly different. The neonatal CVs and LF/HF ratios were significantly increased compared with those in the fetus. In conclusion, we have developed a fetal ECG recording method using the SAVP-ECG system, which we used to evaluate autonomic nervous system development.

  3. Perceptions of part-time faculty by chairpersons of undergraduate health education programs.

    Science.gov (United States)

    Price, James H; Braun, Robert E; McKinney, Molly A; Thompson, Amy

    2011-11-01

    In recent years, it has become commonplace for universities to hire part-time and non-tenure track faculty to save money. This study examined how commonly part-time faculty are used in health education and how they are used to meet program needs. The American Association of Health Education's 2009 "Directory of Institutions Offering Undergraduate and Graduate Degree Programs in Health Education" was used to send a three-wave mailing to programs that were not schools of public health (n = 215). Of the 125 departments (58%) that responded, those that used part-time faculty averaged 7.5 part-time faculty in the previous academic year, teaching on average a total of 10 classes per year. A plurality of departments (38%) were currently using more part-time faculty than 10 years ago and 33% perceived that the number of part-time faculty has resulted in decreases in the number of full-time positions. Although 77% of department chairs claimed they would prefer to replace all of their part-time faculty with one full-time tenure track faculty member. As colleges downsize, many health education programs are using more part-time faculty. Those faculty members who take part-time positions will likely be less involved in academic activities than their full-time peers. Thus, further research is needed on the effects of these changes on the quality of health education training and department productivity.

  4. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  5. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  6. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of /approximately/ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  7. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  8. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user a represents the unknown, b influences the boundaries of the target user groups, c legitimizes the designer to disregard marginal user feedback, and d keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  9. A Study on the Development of Building Energy Analysis Program and the Establishment of BEPS

    Energy Technology Data Exchange (ETDEWEB)

    Kong, S.R.; Kwon, K.J.; Yoo, Y.H.; Cho, Y.K.; Kijm, Y.D.; Han, S.W. [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center; Kim, M.H.; Kim, K.W.; Cho, K.H.; Lee, H.W.; Lee, Y.H.; Kim, S.J.; Song, K.S.; Heon, C.T.; Choi, J.M.; Kim, Y.I.; Suk, H.T.; Kang, J.S.; Kim, Y.D.; Kang, K.T.; Lee, J.E.; Kwark, H.S. [Seoul National Univ. (Korea, Republic of)

    1994-12-31

    The amount of energy consumption in building covers the 30% of the total energy consumption and that of electricity is much the same. Because of the improvement of the living quality, energy consumption in building part is increasing and the rate is much higher than that of other parts. So, KEPCO, one of the major domestic energy suppliers and consumers, needs to develop reliable computerized building energy analysis program and to establish building energy performance standards for the reasonable energy management and the efficient execution of energy budget and the improvement of working condition of the corp`s buildings. So the study aims to the development of computerized building energy analysis program, and the establishment energy budget level and building energy performance standards for the corp`s buildings.

  10. Estimating the population dose from nuclear medicine examinations towards establishing diagnostic reference levels

    International Nuclear Information System (INIS)

    Niksirat, Fatemeh; Monfared, Ali Shabestani; Deevband, Mohammad Reza; Amiri, Mehrangiz; Gholami, Amir

    2016-01-01

    This study conducted a review on nuclear medicine (NM) services in Mazandaran Province with a view to establish adult diagnostic reference levels (DRLs) and provide updated data on population radiation exposure resulting from diagnostic NM procedures. The data were collected from all centers in all cities of Mazandaran Province in the North of Iran from March 2014 to February 2015. The 75 th percentile of the distribution and the average administered activity (AAA) were calculated and the average effective dose per examination, collective effective dose to the population and annual effective dose per capita were estimated using dose conversion factors. The gathered data were analyzed via SPSS (version 18) software using descriptive statistics. Based on the data of this study, the collective effective dose was 95.628 manSv, leading to a mean effective dose of 0.03 mSv per capita. It was also observed that the myocardial perfusion was the most common procedure (50%). The 75 th percentile of the distribution of administered activity (AA) represents the DRL. The AAA and the 75 th percentile of the distribution of AA are slightly higher than DRL of most European countries. Myocardial perfusion is responsible for most of the collective effective dose and it is better to establish national DRLs for myocardial perfusion and review some DRL values through the participation of NM specialists in the future

  11. Rotational Grazing System for Beef Cows on Dwarf Elephantgrass Pasture for Two Years after Establishment

    Directory of Open Access Journals (Sweden)

    M Mukhtar

    2011-01-01

    Full Text Available An intensive rotational grazing system for dwarf and late heading (DL elephant grass (Pennisetum purpureum Schumach pasture was examined in a summer period for two years following establishment. Four 0.05 of DL elephant grass pastures (20×25 m were established on May 2003. They were rotationally grazed for 1 week, followed by a 3-week rest period by three breeding or raising beef cattle for three and six cycles during the first and second years of establishment respectively. Before grazing, the plant height, leaf area index and the ratio of leaf blade to stem were at the highest, while tiller number increased and herbage mass tended to increase, except for the first grazing cycle both two years and for one paddock in the second year. Herbage consumption, the rate of herbage consumption and dry matter intake tended to decrease in three paddocks from the first to the third cycle in the first year, but increase as grazing occurred in the second year. Dry matter intake averaged 10.2-14.5 and 15.4–23.2 g DM/kg/live weight (LW/day over the four paddocks in the first and second year, respectively, and average daily gains were 0.09 and 0.35 kg/head/day in the first and second year respectively. The carrying capacities were estimated at 1,016 and 208 cow-days (CD/ha (annual total 1,224 CD/ha in the first year and 1,355 and 207 CD/ha (annual total 1,562 CD/ha in the second year. Thus, DL elephant grass pasture can expand the grazing period for beef cows for the following two-year establishment. (Animal Production 13(1:10-17 (2011 Key Words: dwarf elephant grass, herbage mass, plant characters, rotational grazing

  12. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  13. Cardiometabolic disease risk and HIV status in rural South Africa : establishing a baseline

    NARCIS (Netherlands)

    Clark, Samuel J.; Gomez-Olive, F. Xavier; Houle, Brian; Thorogood, Margaret; Klipstein-Grobusch, Kerstin; Angotti, Nicole; Kabudula, Chodziwadziwa; Williams, Jill; Menken, Jane; Tollman, Stephen

    2015-01-01

    Background: To inform health care and training, resource and research priorities, it is essential to establish how non-communicable disease risk factors vary by HIV-status in high HIV burden areas; and whether long-term anti-retroviral therapy (ART) plays a modifying role. Methods: As part of a

  14. Establishing politically feasible water markets: a multi-criteria approach.

    Science.gov (United States)

    Ballestero, Enrique; Alarcón, Silverio; García-Bernabeu, Ana

    2002-08-01

    A multiple criteria decision-making (MCDM) model to simulate the establishment of water markets is developed. The environment is an irrigated area governed by a non-profit agency, which is responsible for water production, allocation, and pricing. There is a traditional situation of historical rights, average-cost pricing for water allocation, large quantities of water used, and inefficiency. A market-oriented policy could be implemented by accounting for ecological and political objectives such as saving groundwater and safeguarding historical rights while promoting economic efficiency. In this paper, a problem is solved by compromise programming, a multi-criteria technique based on the principles of Simonian logic. The model is theoretically developed and applied to the Lorca region in Spain near the Mediterranean Sea.

  15. Establishment of immunoradiometric assay for free prostate-specific antigen

    International Nuclear Information System (INIS)

    Ma Lianxue

    2009-01-01

    An immunoradiometric assay (IRMA) of free prostate specific antigen (F-PSA) in serum was established. One monoclonal antibody against total PSA (T-PSA) was coated on the plastic tubes, the other against F-PSA was labeled with 125 I. The sensitivity of assay was 0.04 μg/L (n=20, +2s), the CVs were 2.9%-4.0% for the intra-assay and 3.5%-10.5% for the inter-assay and the average recovery was 102.7%. The correlative equation comparing with the FPSA-RIA (CIS BIO) is y=0.965 1 χ -0.001 1, and r=0.996 4. This F-PSA IRMA is a sensitive and precise method in detecting F-PSA and fit for the vitro assay. (authors)

  16. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  17. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  18. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

    COREDAX-2 is the nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology which has been developed in Korea. AFEN method outperforms in terms of accuracy compared to other conventional nodal methods. To evaluate the possibility of CANDU-type core analysis using the COREDAX-2, the time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using Monte Carlo code, Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  19. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Ra agopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μ sec and a memory capacity of 256 x 12 bit words. Number of sweeps is selectable through a front panel control in binary steps from 2 3 to 2 12 . The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)

  20. Characterization of PLA parts made with AM process

    Science.gov (United States)

    Spina, Roberto; Cavalcante, Bruno; Lavecchia, Fulvio

    2018-05-01

    The main objective of the presented work is to evaluate the thermal behavior of Poly-lactic acid (PLA) parts made with a Fused Deposition Modelling (FDM) process. By using a robust framework for the testing sequence of PLA parts, with the aim of establishing a standard testing cycle for the optimization of the part performance and quality. The research involves study the materials before and after 3D printing. Two biodegradable PLA polymers are investigated, characterized by different colors (one black and the other transparent). The study starts with the examination of each polymeric material and measurements of its main thermal properties.

  1. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R/sub E/ in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B/sub z/ as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B-bar/sub z/=3.γ) than near midnight (B-bar/sub z/=1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α=-0.9 Y/sub SM/-2.7, where Y/sub SM/ is in earth radii and α is in degrees. After removing these effects, the B/sub y/ component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B/sub y/ component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178 0 rather than the 180 0 that is generally supposed

  2. Coping Strategies Applied to Comprehend Multistep Arithmetic Word Problems by Students with Above-Average Numeracy Skills and Below-Average Reading Skills

    Science.gov (United States)

    Nortvedt, Guri A.

    2011-01-01

    This article discusses how 13-year-old students with above-average numeracy skills and below-average reading skills cope with comprehending word problems. Compared to other students who are proficient in numeracy and are skilled readers, these students are more disadvantaged when solving single-step and multistep arithmetic word problems. The…

  3. Establishing meaningful cut points for online user ratings.

    Science.gov (United States)

    Hirschfeld, Gerrit; Thielsch, Meinald T

    2015-01-01

    Subjective perceptions of websites can be reliably measured with questionnaires. But it is unclear how such scores should be interpreted in practice, e.g. is an aesthetics score of 4 points on a seven-point-scale satisfactory? The current paper introduces a receiver-operating characteristic (ROC)-based methodology to establish meaningful cut points for the VisAWI (visual aesthetics of websites inventory) and its short form the VisAWI-S. In two studies we use users' global ratings (UGRs) and website rankings as anchors. A total of 972 participants took part in the studies which yielded similar results. First, one-item UGRs correlate highly with the VisAWI. Second, cut points on the VisAWI reliably differentiate between sites that are perceived as attractive versus unattractive. Third, these cut points are variable, but only within a certain range. Together the research presented here establishes a score of 4.5 on the VisAWI which is a reasonable goal for website designers and highlights the utility of the ROC methodology to derive relevant scores for rating scales.

  4. Models of invasion and establishment of African Mustard (Brassica tournefortii)

    Science.gov (United States)

    Berry, Kristin H.; Gowan, Timothy A.; Miller, David M.; Brooks, Matthew L.

    2015-01-01

    Introduced exotic plants can drive ecosystem change. We studied invasion and establishment ofBrassica tournefortii (African mustard), a noxious weed, in the Chemehuevi Valley, western Sonoran Desert, California. We used long-term data sets of photographs, transects for biomass of annual plants, and densities of African mustard collected at irregular intervals between 1979 and 2009. We suggest that African mustard may have been present in low numbers along the main route of travel, a highway, in the late 1970s; invaded the valley along a major axial valley ephemeral stream channel and the highway; and by 2009, colonized 22 km into the eastern part of the valley. We developed predictive models for invasibility and establishment of African mustard. Both during the initial invasion and after establishment, significant predictor variables of African mustard densities were surficial geology, proximity to the highway and axial valley ephemeral stream channel, and number of small ephemeral stream channels. The axial valley ephemeral stream channel was the most vulnerable of the variables to invasions. Overall, African mustard rapidly colonized and quickly became established in naturally disturbed areas, such as stream channels, where geological surfaces were young and soils were weakly developed. Older geological surfaces (e.g., desert pavements with soils 140,000 to 300,000 years old) were less vulnerable. Microhabitats also influenced densities of African mustard, with densities higher under shrubs than in the interspaces. As African mustard became established, the proportional biomass of native winter annual plants declined. Early control is important because African mustard can colonize and become well established across a valley in 20 yr.

  5. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.

  6. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The provided actual load data is deseasonilized and an AutoRegressive Moving Average (ARMA) model is fitted on the data off-line, using the Akaike Corrected Information Criterion (AICC). The developed model fits the data in a successful manner. Difficulties occur when the provided data includes noise or errors and also when an on-line/adaptive modeling is required. In both cases and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models under the presence of noise are performed. The produced results indicate that the proposed method, which is based on the multi-model partitioning theory, tackles successfully the studied problem. For validation purposes the produced results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in the studies that concern electricity consumption and electricity prices forecasts. (author)

  7. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    International Nuclear Information System (INIS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-01-01

    Generally, solid waste handling and management are performed by municipality or local authority. In most of developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop robust solid waste generation forecasting model. It helps to proper manage the generated solid waste and to develop future plan based on relatively accurate figures. In Malaysia, solid waste generation rate increases rapidly due to the population growth and new consumption trends that characterize the modern life style. This paper aims to develop monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA), such model is applicable even though there is lack of data and will help the municipality properly establish the annual service plan. The results show that ARIMA (6,1,0) model predicts monthly municipal solid waste generation with root mean square error equals to 0.0952 and the model forecast residuals are within accepted 95% confident interval

  8. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Science.gov (United States)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by municipality or local authority. In most of developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop robust solid waste generation forecasting model. It helps to proper manage the generated solid waste and to develop future plan based on relatively accurate figures. In Malaysia, solid waste generation rate increases rapidly due to the population growth and new consumption trends that characterize the modern life style. This paper aims to develop monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA), such model is applicable even though there is lack of data and will help the municipality properly establish the annual service plan. The results show that ARIMA (6,1,0) model predicts monthly municipal solid waste generation with root mean square error equals to 0.0952 and the model forecast residuals are within accepted 95% confident interval.

  9. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    Energy Technology Data Exchange (ETDEWEB)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan [Department of Civil and Structural Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-09-12

    Generally, solid waste handling and management are performed by municipality or local authority. In most of developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop robust solid waste generation forecasting model. It helps to proper manage the generated solid waste and to develop future plan based on relatively accurate figures. In Malaysia, solid waste generation rate increases rapidly due to the population growth and new consumption trends that characterize the modern life style. This paper aims to develop monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA), such model is applicable even though there is lack of data and will help the municipality properly establish the annual service plan. The results show that ARIMA (6,1,0) model predicts monthly municipal solid waste generation with root mean square error equals to 0.0952 and the model forecast residuals are within accepted 95% confident interval.

  10. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), um-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  11. Counting master integrals. Integration by parts vs. functional equations

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Tarasov, Oleg V.

    2016-01-01

    We illustrate the usefulness of functional equations in establishing relationships between master integrals under the integration-by-parts reduction procedure by considering a certain two-loop propagator-type diagram as an example.

  12. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a {chi}{sup 2} distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Eulers infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  13. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ 2 distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Eulers infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  14. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    International Nuclear Information System (INIS)

    Wang Jianqing; Fujiwara, Osamu; Kodera, Sachiko; Watanabe, Soichi

    2006-01-01

    Due to the difficulty of the specific absorption rate (SAR) measurement in an actual human body for electromagnetic radio-frequency (RF) exposure, in various compliance assessment procedures the incident electric field or power density is being used as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly based on numerical calculations for highly simplified human modelling dozens of years ago. Its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have a high accuracy of human modelling and numerical code. In this study, we made a detailed error analysis in the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with the perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify to what extent the whole-body average SAR may reach using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit nearly 30% for the child model both in the resonance frequency and 2 GHz band

  15. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jianqing [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Fujiwara, Osamu [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Kodera, Sachiko [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Watanabe, Soichi [National Institute of Information and Communications Technology, Nukui-kitamachi, Koganei, Tokyo 184-8795 (Japan)

    2006-09-07

    Due to the difficulty of the specific absorption rate (SAR) measurement in an actual human body for electromagnetic radio-frequency (RF) exposure, in various compliance assessment procedures the incident electric field or power density is being used as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly based on numerical calculations for highly simplified human modelling dozens of years ago. Its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have a high accuracy of human modelling and numerical code. In this study, we made a detailed error analysis in the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with the perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify to what extent the whole-body average SAR may reach using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit nearly 30% for the child model both in the resonance frequency and 2 GHz band.

  16. Feasibility study on the establishment of the IAEA international nuclear university

    International Nuclear Information System (INIS)

    Lee, E. J.; Kim, Y. T.; Nam, Y. M. and others

    2002-09-01

    The purpose of this project is to support 2002-2003 the IAEA project D.4.0.2, facilitating education, training and research in nuclear science and related fields, especially for a feasibility study on the establishment of the Agency sponsored International Nuclear University. Through this project, the abstract principle for a feasibility study on the establishment of the Agency sponsored International Nuclear university, which contains the new concepts and its objectives, principles to achieve the objectives, its curriculum outline and operation system, suggested project activities, was developed and submitted to the Agency. The Korean proposal were presented several times at the IAEA meetings and other international meetings related nuclear human resources development for understanding the necessity of a feasibility study on the establishment of the Agency sponsored international nuclear university with Member States. And the Korean proposal included such as the organization of a worldwide network using information and communication technology among Merber States' research institutes and training/education centers, curriculum outline and operation system of the INU will be produced. Also for further cooperation of the IAEA INU project implementation with the Agency, hosting IAEA INIS mirror site, establishment of the RCA region office, establishment of the INTEC at the Korean Atomic Energy Research Institute, and advanced curriculum of nuclear technology linked with NT, BT, ET, IT were made progress as a part of conceptualizing of the IAEA project

  17. Fundamental equations for two-phase flow. Part 1: general conservation equations. Part 2: complement and remarks

    International Nuclear Information System (INIS)

    Delhaye, J.M.

    1968-12-01

    This report deals with the general equations of mass conservation, of momentum conservation, and energy conservation in the case of a two-phase flow. These equations are presented in several forms starting from integral equations which are assumed initially a priori. 1. Equations with local instantaneous variables, and interfacial conditions; 2. Equations with mean instantaneous variables in a cross-section, and practical applications: these equations include an important experimental value which is the ratio of the cross-section of passage of one phase to the total cross-section of a flow-tube. 3. Equations with a local statistical mean, and equations averaged over a period of time: A more advanced attempt to relate theory and experiment consists in taking the statistical averages of local equations. Equations are then obtained involving variables which are averaged over a period of time with the help of an ergodic assumption. 4. Combination of statistical averages and averages over a cross-section: in this study are considered the local variables averaged statistically, then averaged over the cross-section, and also the variables averaged over the section and then averaged statistically. 5. General equations concerning emulsions: In this case a phase exists in a locally very finely divided form. This peculiarity makes it possible to define a volume concentration, and to draw up equations which have numerous applications. - Certain points arising in the first part of this report concerning general mass conservation equations for two-phase flow have been completed and clarified. The terms corresponding to the interfacial tension have been introduced into the general equations. The interfacial conditions have thus been generalized. A supplementary step has still to be carried out: it has, in effect, been impossible to take the interfacial tension into account in the case of emulsions. It was then appeared interesting to compare this large group of fundamental

  18. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  19. Risk-informed Analytical Approaches to Concentration Averaging for the Purpose of Waste Classification

    International Nuclear Information System (INIS)

    Esh, D.W.; Pinkston, K.E.; Barr, C.S.; Bradford, A.H.; Ridge, A.Ch.

    2009-01-01

    Nuclear Regulatory Commission (NRC) staff has developed a concentration averaging approach and guidance for the review of Department of Energy (DOE) non-HLW determinations. Although the approach was focused on this specific application, concentration averaging is generally applicable to waste classification and thus has implications for waste management decisions as discussed in more detail in this paper. In the United States, radioactive waste has historically been classified into various categories for the purpose of ensuring that the disposal system selected is commensurate with the hazard of the waste such that public health and safety will be protected. However, the risk from the near-surface disposal of radioactive waste is not solely a function of waste concentration but is also a function of the volume (quantity) of waste and its accessibility. A risk-informed approach to waste classification for near-surface disposal of low-level waste would consider the specific characteristics of the waste, the quantity of material, and the disposal system features that limit accessibility to the waste. NRC staff has developed example analytical approaches to estimate waste concentration, and therefore waste classification, for waste disposed in facilities or with configurations that were not anticipated when the regulation for the disposal of commercial low-level waste (i.e. 10 CFR Part 61) was developed. (authors)

  20. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publically available WLS-ICE software....

  1. Establishing correspondence in wood: the challenge and some solutions?

    Science.gov (United States)

    Courtin, Gerard M; Fairgrieve, Scott I

    2013-09-01

    Establishing correspondence between the upper portion of a white birch sapling, a suspected weapon, and a potential source from a stand of trees was posed to one of us (GMC). A bending force shattered the sapling, precluding physical matching. Three white birch saplings were taken from the same stand of trees in a similar manner. Correspondence was achieved by measuring the width of the annual rings along four radii from a disk cut above and below the break. The regression coefficient of the data from the two disks from the same sapling was r(2) = 0.95. Regressing the upper disk against the lower disk of two other saplings resulted in r(2) values of 0.26 and 0.17, respectively. The various characteristics that are confined to a wood stem as part of its normal process of growth can be used to eliminate candidate saplings and establish correspondence between two pieces of wood. © 2013 American Academy of Forensic Sciences.

  2. 29 CFR Appendix A to Subpart R of... - Guidelines for Establishing the Components of a Site-specific Erection Plan: Non-mandatory...

    Science.gov (United States)

    2010-07-01

    ... R of Part 1926 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH.... 1926, Subpt. R, App. A Appendix A to Subpart R of Part 1926—Guidelines for Establishing the Components...

  3. Establishment and assessment of an integrated citric acid-methane production process.

    Science.gov (United States)

    Xu, Jian; Chen, Yang-Qiu; Zhang, Hong-Jian; Bao, Jia-Wei; Tang, Lei; Wang, Ke; Zhang, Jian-Hua; Chen, Xu-Sheng; Mao, Zhong-Gui

    2015-01-01

    To solve the problem of extraction wastewater in citric acid industrial production, an improved integrated citric acid-methane production process was established in this study. Extraction wastewater was treated by anaerobic digestion and then the anaerobic digestion effluent (ADE) was stripped by air to remove ammonia. Followed by solid-liquid separation to remove metal ion precipitation, the supernatant was recycled for the next batch of citric acid fermentation, thus eliminating wastewater discharge and reducing water consumption. 130U/g glucoamylase was added to medium after inoculation and the recycling process performed for 10 batches. Fermentation time decreased by 20% in recycling and the average citric acid production (2nd-10th) was 145.9±3.4g/L, only 2.5% lower than that with tap water (149.6g/L). The average methane production was 292.3±25.1mL/g CODremoved and stable in operation. Excessive Na(+) concentration in ADE was confirmed to be the major challenge for the proposed process. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Estimation of average causal effect using the restricted mean residual lifetime as effect measure

    DEFF Research Database (Denmark)

    Mansourvar, Zahra; Martinussen, Torben

    2017-01-01

    with respect to their survival times. In observational studies where the factor of interest is not randomized, covariate adjustment is needed to take into account imbalances in confounding factors. In this article, we develop an estimator for the average causal treatment difference using the restricted mean...... residual lifetime as target parameter. We account for confounding factors using the Aalen additive hazards model. Large sample property of the proposed estimator is established and simulation studies are conducted in order to assess small sample performance of the resulting estimator. The method is also......Although mean residual lifetime is often of interest in biomedical studies, restricted mean residual lifetime must be considered in order to accommodate censoring. Differences in the restricted mean residual lifetime can be used as an appropriate quantity for comparing different treatment groups...

  5. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert s law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.

  6. Measurement of average radon gas concentration at workplaces

    International Nuclear Information System (INIS)

    Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.

    2003-01-01

    In this paper results of measurement of average radon gas concentration at workplaces (the schools and kindergartens and the ventilated workplaces) are presented. t can be stated that the one month long measurements means very high variation (as it is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, in workplaces where the expectable changes of radon concentration considerable with the seasons should be measure for 12 months long. If it is not possible, the chosen six months period should contain summer and winter months as well. The average radon concentration during working hours can be differ considerable from the average of the whole time in the cases of frequent opening the doors and windows or using artificial ventilation. (authors)

  7. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  8. Food labeling; nutrition labeling of standard menu items in restaurants and similar retail food establishments. Final rule.

    Science.gov (United States)

    2014-12-01

    To implement the nutrition labeling provisions of the Patient Protection and Affordable Care Act of 2010 (Affordable Care Act or ACA), the Food and Drug Administration (FDA or we) is requiring disclosure of certain nutrition information for standard menu items in certain restaurants and retail food establishments. The ACA, in part, amended the Federal Food, Drug, and Cosmetic Act (the FD&C Act), among other things, to require restaurants and similar retail food establishments that are part of a chain with 20 or more locations doing business under the same name and offering for sale substantially the same menu items to provide calorie and other nutrition information for standard menu items, including food on display and self-service food. Under provisions of the ACA, restaurants and similar retail food establishments not otherwise covered by the law may elect to become subject to these Federal requirements by registering every other year with FDA. Providing accurate, clear, and consistent nutrition information, including the calorie content of foods, in restaurants and similar retail food establishments will make such nutrition information available to consumers in a direct and accessible manner to enable consumers to make informed and healthful dietary choices.

  9. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  10. High Lassa Fever activity in Northern part of Edo State, Nigeria ...

    African Journals Online (AJOL)

    The purpose was to establish simple statistics of the effects of lassa fever in northern part of Edo State, Nigeria. Lassa fever activity in the northern part of Edo state, Nigeria, was confirmed in 2004 by laboratory analysis of samples sent to Bernhard–Nocht Institute (BNI) for Tropical Medicine Hamburg, Germany.

  11. Effect of tank geometry on its average performance

    Science.gov (United States)

    Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.

    2018-03-01

    The mathematical model of non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. There are calculations of the average productivity, heat exchange area, and filling time of various volumes tanks with smooth inner walls depending on their "height : radius" ratio as well as the average productivity, degree, and filling time of horizontal ribbing tank with volume 6.10-2 m3 with change central hole diameter of the ribs. It has been shown that the growth of "height / radius" ratio in tanks with smooth inner walls up to the limiting values allows significantly increasing tank average productivity and reducing its filling time. Growth of H/R ratio of tank with volume 1.0 m3 to the limiting values (in comparison with the standard tank having H/R equal 3.49) augments tank productivity by 23.5 % and the heat exchange area by 20%. Besides, we have demonstrated that maximum average productivity and a minimum filling time are reached for the tank with volume 6.10-2 m3 having central hole diameter of horizontal ribs 6.4.10-2 m.

  12. Duty periods for establishing eligibility for health care. Final rule.

    Science.gov (United States)

    2013-12-26

    The Department of Veterans Affairs (VA) is amending its medical regulations concerning eligibility for health care to re-establish the definitions of "active military, naval, or air service,'' "active duty,'' and "active duty for training.'' These definitions were deleted in 1996; however, we believe that all duty periods should be defined in part 17 of the Code of Federal Regulations (CFR) to ensure proper determination of eligibility for VA health care. We are also providing a more complete definition of "inactive duty training.''

  13. An alternative scheme of the Bogolyubov's average method

    International Nuclear Information System (INIS)

    Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.

    1990-01-01

    In this paper the average energy and the magnetic moment conservation laws in the Drift Theory of charged particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws and afterwards the average is performed. This scheme is more economic from the standpoint of time and algebraic calculations than the usual procedure of Bogolyubov's method. (Author)

  14. Greater-than-Class C low-level waste characterization. Appendix I: Impact of concentration averaging low-level radioactive waste volume projections

    International Nuclear Information System (INIS)

    Tuite, P.; Tuite, K.; O'Kelley, M.; Ely, P.

    1991-08-01

    This study provides a quantitative framework for bounding unpackaged greater-than-Class C low-level radioactive waste types as a function of concentration averaging. The study defines the three concentration averaging scenarios that lead to base, high, and low volumetric projections; identifies those waste types that could be greater-than-Class C under the high volume, or worst case, concentration averaging scenario; and quantifies the impact of these scenarios on identified waste types relative to the base case scenario. The base volume scenario was assumed to reflect current requirements at the disposal sites as well as the regulatory views. The high volume scenario was assumed to reflect the most conservative criteria as incorporated in some compact host state requirements. The low volume scenario was assumed to reflect the 10 CFR Part 61 criteria as applicable to both shallow land burial facilities and to practices that could be employed to reduce the generation of Class C waste types

  15. Influence of coma aberration on aperture averaged scintillations in oceanic turbulence

    Science.gov (United States)

    Luo, Yujuan; Ji, Xiaoling; Yu, Hong

    2018-01-01

    The influence of coma aberration on aperture averaged scintillations in oceanic turbulence is studied in detail by using the numerical simulation method. In general, in weak oceanic turbulence, the aperture averaged scintillation can be effectively suppressed by means of the coma aberration, and the aperture averaged scintillation decreases as the coma aberration coefficient increases. However, in moderate and strong oceanic turbulence the influence of coma aberration on aperture averaged scintillations can be ignored. In addition, the aperture averaged scintillation dominated by salinity-induced turbulence is larger than that dominated by temperature-induced turbulence. In particular, it is shown that for coma-aberrated Gaussian beams, the behavior of aperture averaged scintillation index is quite different from the behavior of point scintillation index, and the aperture averaged scintillation index is more suitable for characterizing scintillations in practice.

  16. Development of high average power industrial Nd:YAG laser with peak power of 10 kW class

    International Nuclear Information System (INIS)

    Kim, Cheol Jung; Kim, Jeong Mook; Jung, Chin Mann; Kim, Soo Sung; Kim, Kwang Suk; Kim, Min Suk; Cho, Jae Wan; Kim, Duk Hyun

    1992-03-01

    We developed and commercialized an industrial pulsed Nd:YAG laser with peak power of 10 kW class for fine cutting and drilling applications. Several commercial models have been investigated in design and performance. We improved its quality to the level of commercial Nd:YAG laser by an endurance test for each parts of laser system. The maximum peak power and average power of our laser were 10 kW and 250 W, respectively. Moreover, the laser pulse width could be controlled from 0.5 msec to 20 msec continuously. Many optical parts were localized and lowered much in cost. Only few parts were imported and almost 90% in cost were localized. Also, to accellerate the commercialization by the joint company, the training and transfer of technology were pursued in the joint participation in design and assembly by company researchers from the early stage. Three Nd:YAG lasers have been assembled and will be tested in industrial manufacturing process to prove the capability of developed Nd:YAG laser with potential users. (Author)

  17. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  18. Factors affecting spruce establishment and recruitment near western treeline, Alaska

    Science.gov (United States)

    Miller, A. E.; Sherriff, R.; Wilson, T. L.

    2015-12-01

    Regional warming and increases in tree growth are contributing to increased productivity near the western forest margin in Alaska. The effects of warming on seedling recruitment has received little attention, in spite of forecasted forest expansion near western treeline. Here, we used stand structure and environmental data from white spruce (Picea glauca) stands (n = 95) sampled across a longitudinal gradient to explore factors influencing white spruce growth, establishment and recruitment in southwest Alaska. Using tree-ring chronologies developed from a subset of the plots (n = 30), we estimated establishment dates and basal area increment (BAI) for trees of all age classes across a range of site conditions. We used GLMs (generalized linear models) to explore the relationship between tree growth and temperature in undisturbed, low elevation sites along the gradient, using BAI averaged over the years 1975-2000. In addition, we examined the relationship between growing degree days (GDD) and seedling establishment over the previous three decades. We used total counts of live seedlings, saplings and live and dead trees, representing four cohorts, to evaluate whether geospatial, climate, and measured plot covariates predicted abundance of the different size classes. We hypothesized that the relationship between abundance and longitude would vary by size class, and that this relationship would be mediated by growing season temperature. We found that mean BAI for trees in undisturbed, low elevation sites increased with July maximum temperature, and that the slope of the relationship with temperature changed with longitude (interaction significant with 90% confidence). White spruce establishment was positively associated with longer summers and/or greater heat accumulation, as inferred from GDD. Seedling, sapling and tree abundance were also positively correlated with temperature across the study area. The response to longitude was mixed, with smaller size classes

  19. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  20. BCQ+: a body constitution questionnaire to assess Yang-Xu. Part I: establishment of a first final version through a Delphi process.

    Science.gov (United States)

    Su, Yi-Chang; Chen, Li-Li; Lin, Jun-Dai; Lin, Jui-Shan; Huang, Yi-Chia; Lai, Jim-Shoung

    2008-12-01

    Assessing an individual's level of Yang deficiency (Yang-Xu) by its manifestations is a frequent issue in traditional Chinese medicine (TCM) clinical trials. To this end, an objective, reliable and rigorous diagnostic tool is required. This study aimed to develop a first final version of the Yang-Xu Constitution Questionnaire. We conducted 3 steps to develop such an objective measurement tool: 1) the research team was formed and a panel of 26 experts was selected for the Delphi process; 2) items for the questionnaire were generated by literature review and a Delphi process; items were reworded into colloquial questions; face and content validity of the items were evaluated through a Delphi process again; 3) the difficulty of the questionnaire was evaluated in a pilot study with 81 subjects aged 20-60 years. The literature review retrieved 35 relevant items which matched the definition of 'constitution' and 'Yang-Xu'. After a first Delphi process, 22 items were retained and translated into colloquial questions. According to the second part of the Delphi process, the content validity index of each of the 22 questions ranged between 0.85-1. These 22 questions were evaluated by 81 subjects, 2 questions that were hard to tell the difference were combined; 3 questions were modified after the research team had discussed the participants' feedback. Finally, the questionnaire was established with 21 questions. This first final version of a questionnaire to assess Yang-Xu constitution with considerable face and content validity may serve as a basis to develop an advanced Yang-Xu questionnaire. 2008 S. Karger AG, Basel.

  1. NEW APPROACH IN THE ATTRIBUTION OF PROFITS TO PERMANENT ESTABLISHMENTS – THE AUTHORISED OECD APPROACH AND ITS IMPLICATIONS FOR CROATIAN TAX LAW

    Directory of Open Access Journals (Sweden)

    Nevia Čičin-Šain

    2016-01-01

    Full Text Available The Organization for Economic Cooperation and Development (Organisation for Economic Cooperation and Development, hereinafter: OECD, is the leading organization in the field of taxation. It is the successor to the League of Nations’ work in the field of taxation. This organization, in its long history of work has been trying to solve the problem of attribution of profits of companies that do business across borders, without founding an individual company to carry out such work. This topic has been preoccupying the international tax community for many years. Recently, an entirely different approach in attributing the profits to the so-called permanent establishment has been adopted. Since this topic has never been discussed in the national academic literature, the author considered it important to handle this issue and see in which way the domestic law determines the profits of a permanent establishment and what are the implications of the work of the OECD on national legislation. This article consists of several chapters. The first one discusses the concept of a permanent establishment. The second part is devoted to a historical review of the methods for determining the profits of a permanent establishment. In the third part different theoretical models for determining the profits of a permanent establishment are being discussed. The fourth part is devoted to the new approach called the Authorised OECD Approach (AOA. The fifth part is devoted to domestic law and the implications of the work of the OECD on it. Finally, the author presents certain conclusions.

  2. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  3. Establishment, maintenance, and re-establishment of the safe and efficient steady-following state

    International Nuclear Information System (INIS)

    Pan Deng; Zheng Ying-Ping

    2015-01-01

    We present an integrated mathematical model of vehicle-following control for the establishment, maintenance, and re-establishment of the previous or new safe and efficient steady-following state. The hyperbolic functions are introduced to establish the corresponding mathematical models, which can describe the behavioral adjustment of the following vehicle steered by a well-experienced driver under complex vehicle following situations. According to the proposed mathematical models, the control laws of the following vehicle adjusting its own behavior can be calculated for its moving in safety, efficiency, and smoothness (comfort). Simulation results show that the safe and efficient steady-following state can be well established, maintained, and re-established by its own smooth (comfortable) behavioral adjustment with the synchronous control of the following vehicle’s velocity, acceleration, and the actual following distance. (paper)

  4. The average covering tree value for directed graph games

    NARCIS (Netherlands)

    Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf

    We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering

  5. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  6. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER...

  7. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic proceseseus is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of a final hadron state energy the leading contribution to an average multiplicity comes from a parton subprocess due to production of massive quark and gluon jets and their further fragmentation as diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e + e - -annihilation at high energies tends to unity

  8. CERAMIC PROPERTIES OF PUGU KAOLIN CLAYS. PART 2 ...

    African Journals Online (AJOL)

    a

    PART 2: EFFECT OF PHASE COMPOSITION ON FLEXURAL STRENGTH ... working in this field have established factors controlling the various ... The raw materials selected were kaolin clays from Pugu deposit in Tanzania, Norfloat potash .... the total mullite contents present in the samples since the method used does.

  9. 12 CFR 702.105 - Weighted-average life of investments.

    Science.gov (United States)

    2010-01-01

    ... investment funds. (1) For investments in registered investment companies (e.g., mutual funds) and collective investment funds, the weighted-average life is defined as the maximum weighted-average life disclosed, directly or indirectly, in the prospectus or trust instrument; (2) For investments in money market funds...

  10. Establishment of the Neutron Beam Research Facility at the OPAL Reactor

    International Nuclear Information System (INIS)

    Kennedy, S.J.; Robinson, R.A.

    2012-01-01

    Full text: Australia's first research reactor, HIFAR, reached criticality in January 1958. At that time Australia's main agenda was establishment of a nuclear power program. HIFAR operated for nearly 50 years, providing a firm foundation for the establishment of Australia's second generation research Reactor OPAL, which reached criticality in August 006. In HIFAR's early years a neutron beam facility was established for materials characterization, partly in aid of the nuclear energy agenda and partly in response to interest from Australia's scientific community. By the time Australia's nuclear energy program ceased (in the 1970s), radioisotope production and research had also been established at Lucas Heights. Also, by this time the neutron beam facility for scientific research had evolved into a major utilization programme, warranting establishment of an independent body to facilitate scientific access (the Australian Institute for Nuclear Science and Engineering). In HIFAR's lifetime, ANSTO established a radiopharmaceuticals service for the Australian medical community and NDT silicon production was also established and grew to maturity. So when time came to determine the strategy for nuclear research in Australia into the 21st century, it was clear that the replacement for HIFAR should be multipurpose, with major emphases on scientific applications of neutron beams and medical isotope production. With this strategy in mind, ANSTO set about to design and build OPAL with a world-class neutron beam facility, capable of supporting a large and diverse scientific research community. The establishment of the neutron beam facility became the mission of the Bragg Institute management team. This journey began in 1997 with establishment of a working budget, and reached its first major objective when OPAL reached 20 MW thermal power nearly one decade later (in 2006). The first neutron beam instruments began operation soon after (in 2007), and quickly proved themselves to be

  11. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality

  12. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  13. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    International Nuclear Information System (INIS)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-01-01

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step

  14. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT.

    Science.gov (United States)

    Craft, David; Papp, Dávid; Unkelbach, Jan

    2014-02-01

    To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  15. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  16. Peak and averaged bicoherence for different EEG patterns during general anaesthesia

    Directory of Open Access Journals (Sweden)

    Myles Paul

    2010-11-01

    levels (comparable BIS estimates were 0.928(0.905-0.950 and 0.801(0.786-0.816. Estimates of linear regression and areas under ROC curves supported Pk findings. Bicoherence for eye movement artifacts were the most distinctive with respect to other EEG patterns (average |z| value 13.233. Conclusions This study quantified associations between deepening anaesthesia and increase in bicoherence for different frequency components and bicoherence estimates. Increase in bicoherence was also established for eye movement artifacts. While identified associations extend earlier findings of bicoherence changes with increases in anaesthetic drug concentration, results indicate that the unequal band bifrequency region, δ_θ, provides better predictive capabilities than equal band bifrequency regions.

  17. Establishment of Reference Doses for residues of allergenic foods: Report of the VITAL Expert Panel

    NARCIS (Netherlands)

    Taylor, S.L.; Baumert, J.L.; Kruizinga, A.G.; Remington, B.C.; Crevel, R.W.R.; Brooke-Taylor, S.; Allen, K.J.; Houben, G.

    2014-01-01

    In 2011, an expert panel was assembled to establish appropriate Reference Doses for allergenic food residues as a part of the VITAL (Voluntary Incidental Trace Allergen Labeling) program of The Allergen Bureau of Australia & New Zealand (ABA). These Reference Doses would guide advisory labeling

  18. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    Wang Lei; Wang Qingguo

    2012-01-01

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  19. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  20. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data

  1. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-01-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M(sup 2)= 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M(sup 2) value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M(sup 2) and lt; 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods

  2. 47 CFR 1.959 - Computation of average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Computation of average terrain elevation. 1.959 Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.959 Computation of average terrain elevation. Except a...

  3. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    textabstractWhile the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  4. Design and component specifications for high average power laser optical systems

    Energy Technology Data Exchange (ETDEWEB)

    O' Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs.

  5. Design and component specifications for high average power laser optical systems

    International Nuclear Information System (INIS)

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-ir optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems. 13 refs

  6. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...

  7. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called 'average inactivity time model'. This new model is specifically applicable to handle the heterogeneity of the time of the failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are reserved.

  8. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE, to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  9. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the sensitive to radiation volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or source spectra, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the case of high and low energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to  ∼1 MeV.

  10. Employer Health Benefit Costs and Demand for Part-Time Labor

    OpenAIRE

    Jennifer Feenstra Schultz; David Doorn

    2009-01-01

    The link between rising employer costs for health insurance benefits and demand for part-time workers is investigated using non-public data from the Medical Expenditure Panel Survey- Insurance Component (MEPS-IC). The MEPS-IC is a nationally representative, annual establishment survey from the Agency for Healthcare Research and Quality (AHRQ). Pooling the establishment level data from the MEPS-IC from 1996-2004 and matching with the Longitudinal Business Database and supplemental economic dat...

  11. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schr\\"odinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  12. On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit

    International Nuclear Information System (INIS)

    Heising, C.D.; Zamora-Reyes, J.A.

    1996-01-01

    A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute deposit specific information, E 2 . This information is combined, through a single stage application of Bayes' theorem, with the more extensive and well established information on all similar formations in the region, E 1 . It is assumed that the best estimate for the deposit specific case should include the relevant experimental evidence collected from other like formations giving incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)

  13. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in L 2 -norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies

  14. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  15. Gauge-transformation properties of cosmological observables and its application to the light-cone average

    International Nuclear Information System (INIS)

    Yoo, Jaiyul; Durrer, Ruth

    2017-01-01

    Theoretical descriptions of observable quantities in cosmological perturbation theory should be independent of coordinate systems. This statement is often referred to as gauge-invariance of observable quantities, and the sanity of their theoretical description is verified by checking its gauge-invariance. We argue that cosmological observables are invariant scalars under diffeomorphisms and their theoretical description is gauge-invariant, only at linear order in perturbations. Beyond linear order, they are usually not gauge-invariant, and we provide the general law for the gauge-transformation that the perturbation part of an observable does obey. We apply this finding to derive the second-order expression for the observational light-cone average in cosmology and demonstrate that our expression is indeed invariant under diffeomorphisms.

  16. Gauge-transformation properties of cosmological observables and its application to the light-cone average

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaiyul [Center for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, CH-8057, Zürich (Switzerland); Durrer, Ruth, E-mail: jyoo@physik.uzh.ch, E-mail: ruth.durrer@unige.ch [Département de Physique Théorique and Center for Astroparticle Physics, Université de Genève, Quai E. Ansermet 24, CH-1211 Genève 4 (Switzerland)

    2017-09-01

    Theoretical descriptions of observable quantities in cosmological perturbation theory should be independent of coordinate systems. This statement is often referred to as gauge-invariance of observable quantities, and the sanity of their theoretical description is verified by checking its gauge-invariance. We argue that cosmological observables are invariant scalars under diffeomorphisms and their theoretical description is gauge-invariant, only at linear order in perturbations. Beyond linear order, they are usually not gauge-invariant, and we provide the general law for the gauge-transformation that the perturbation part of an observable does obey. We apply this finding to derive the second-order expression for the observational light-cone average in cosmology and demonstrate that our expression is indeed invariant under diffeomorphisms.

  17. A guide to GALAXY 5 part 1

    International Nuclear Information System (INIS)

    Price, J.A.

    1975-11-01

    GALAXY 6 Part 1 is a program, written in Fortran IV language, to compute spectrum-weighted, group-averaged nuclear cross-sections and group-to-group transfer cross-sections from the evaluated microscopic cross-section data contained in the UKAEA Nuclear Data Library. A description is given of the input and output schemes and operating procedures for the IBM 370/168, together with the structure and functioning of the program and other relevant topics. (author)

  18. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  19. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  20. Reduced lipid oxidation in myotubes established from obese and type 2 diabetic subjects

    DEFF Research Database (Denmark)

    Gaster, Michael

    2009-01-01

    To date, it is unknown whether reduced lipid oxidation of skeletal muscle of obese and obese type 2 diabetic (T2D) subjects partly is based on reduced oxidation of endogenous lipids. Palmitate (PA) accumulation, total oxidation and lipolysis were not different between myotubes established from lean...... both for endogenous and exogenous lipids. Thus myotubes established from obese and obese T2D subjects express a reduced complete oxidation of endogenous lipids. Two cardinal principles govern the reduced lipid oxidation in obese and diabetic myotubes; firstly, an impaired coupling between endogenous...... lipid and mitochondria in obese and obese diabetic myotubes and secondly, a mismatch between beta-oxidation and citric acid cycle in obese diabetic myotubes....

  1. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  2. Category structure determines the relative attractiveness of global versus local averages.

    Science.gov (United States)

    Vogel, Tobias; Carr, Evan W; Davis, Tyler; Winkielman, Piotr

    2018-02-01

    Stimuli that capture the central tendency of presented exemplars are often preferred-a phenomenon also known as the classic beauty-in-averageness effect . However, recent studies have shown that this effect can reverse under certain conditions. We propose that a key variable for such ugliness-in-averageness effects is the category structure of the presented exemplars. When exemplars cluster into multiple subcategories, the global average should no longer reflect the underlying stimulus distributions, and will thereby become unattractive. In contrast, the subcategory averages (i.e., local averages) should better reflect the stimulus distributions, and become more attractive. In 3 studies, we presented participants with dot patterns belonging to 2 different subcategories. Importantly, across studies, we also manipulated the distinctiveness of the subcategories. We found that participants preferred the local averages over the global average when they first learned to classify the patterns into 2 different subcategories in a contrastive categorization paradigm (Experiment 1). Moreover, participants still preferred local averages when first classifying patterns into a single category (Experiment 2) or when not classifying patterns at all during incidental learning (Experiment 3), as long as the subcategories were sufficiently distinct. Finally, as a proof-of-concept, we mapped our empirical results onto predictions generated by a well-known computational model of category learning (the Generalized Context Model [GCM]). Overall, our findings emphasize the key role of categorization for understanding the nature of preferences, including any effects that emerge from stimulus averaging. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three-ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for the orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether use of the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue, and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidian and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing characteristics of low dispersion, an isotropic distribution, and less than 30 degrees second and third angle parameters, discrepancies with the conventional approach are less than 1.1 degrees . However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.

  4. Establishing an integrated gastroenterology service between a medical center and the community.

    Science.gov (United States)

    Niv, Yaron; Dickman, Ram; Levi, Zohar; Neumann, Gadi; Ehrlich, Dorit; Bitterman, Haim; Dreiher, Jacob; Cohen, Arnon; Comaneshter, Doron; Halpern, Eyran

    2015-02-21

    To combine community and hospital services in order to enable improvements in patient management, an integrated gastroenterology service (IGS) was established. Referral patterns to specialist clinics were optimized; open access route for endoscopic procedures (including esophago-gastro-duodenoscopy, sigmoidoscopy and colonoscopy) was established; family physicians' knowledge and confidence were enhanced; direct communication lines between experts and primary care physicians were opened. Continuing education, guidelines and agreed instructions for referral were promoted by the IGS. Six quality indicators were developed by the Delphi method, rigorously designed and regularly monitored. Improvement was assessed by comparing 2010, 2011 and 2012 indicators. An integrated delivery system in a specific medical field may provide a solution to a fragmented healthcare system impaired by a lack of coordination. In this paper we describe a new integrated gastroenterology service established in April 2010. Waiting time for procedures decreased: 3 mo in April 30th 2010 to 3 wk in April 30th 2011 and stayed between 1-3 wk till December 30th 2012. Average cost for patient's visit decreased from 691 to 638 NIS (a decrease of 7.6%). Six health indicators were improved significantly comparing 2010 to 2012, 2.5% to 67.5%: Bone densitometry for patients with inflammatory bowel disease, preventive medications for high risk patients on aspirin/NSAIDs, colonoscopy following positive fecal occult blood test, gastroscopy in Barrett's esophagus, documentation of family history of colorectal cancer, and colonoscopy in patients with a family history of colorectal cancer. Establishment of an IGS was found to effectively improve quality of care, while being cost-effective.

  5. Two-part pricing structure in long-term gas sales contracts

    International Nuclear Information System (INIS)

    Slocum, J.C.; Lee, S.Y.

    1992-01-01

    Although the incremental electricity generation market has the potential to be a major growth area for natural gas demand in the U.S., it may never live up to such promise unless gas suppliers are more willing to enter into long-term gas sales agreements necessary to nurture this segment of the industry. The authors submit that producer reluctance to enter into such long-term sales agreements can be traced, at least in part to the differing contract price requirements between gas producers and buyers. This paper will address an evolving solution to this contracting dilemma - the development of a two-part pricing structure for the gas commodity. A two-part pricing structure includes a usage or throughput charge established in a way to yield a marginal gas cost competitive with electric utility avoided costs, and a reservation charge established to guarantee a minimum cash flow to the producer. Moreover, the combined effect of the two charges may yield total revenues that better reflect the producer's replacement cost of the reserves committed under the contract. 2 tabs

  6. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.

  7. The role of sodium-poly(acrylates) with different weight-average molar mass in phosphate-free laundry detergent builder systems

    OpenAIRE

    Milojević, Vladimir S.; Ilić-Stojanović, Snežana; Nikolić, Ljubiša; Nikolić, Vesna; Stamenković, Jakov; Stojiljković, Dragan

    2013-01-01

    In this study, the synthesis of sodium-poly(acrylate) was performed by polymerization of acrylic acid in the water solution with three different contents of potassium-persulphate as an initiator. The obtained polymers were characterized by using HPLC and GPC analyses in order to define the purity and average molar mass of poly(acrylic acid). In order to investigate the influence of sodium-poly(acrylate) as a part of carbonate/zeolite detergent builder system, secondary washing characteristics...

  8. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and seldom of previous studies have intended to find whether the results of one type of study is transferable to the other two studies. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly, meanwhile, the real-time model was also used in 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similar in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Half-Watt average power femtosecond source spanning 3-8 µm based on subharmonic generation in GaAs

    Science.gov (United States)

    Smolski, Viktor; Vasilyev, Sergey; Moskalev, Igor; Mirov, Mike; Ru, Qitian; Muraviev, Andrey; Schunemann, Peter; Mirov, Sergey; Gapontsev, Valentin; Vodopyanov, Konstantin

    2018-06-01

    Frequency combs with a wide instantaneous spectral span covering the 3-20 µm molecular fingerprint region are highly desirable for broadband and high-resolution frequency comb spectroscopy, trace molecular detection, and remote sensing. We demonstrate a novel approach for generating high-average-power middle-infrared (MIR) output suitable for producing frequency combs with an instantaneous spectral coverage close to 1.5 octaves. Our method is based on utilizing a highly-efficient and compact Kerr-lens mode-locked Cr2+:ZnS laser operating at 2.35-µm central wavelength with 6-W average power, 77-fs pulse duration, and high 0.9-GHz repetition rate; to pump a degenerate (subharmonic) optical parametric oscillator (OPO) based on a quasi-phase-matched GaAs crystal. Such subharmonic OPO is a nearly ideal frequency converter capable of extending the benefits of frequency combs based on well-established mode-locked pump lasers to the MIR region through rigorous, phase- and frequency-locked down conversion. We report a 0.5-W output in the form of an ultra-broadband spectrum spanning 3-8 µm measured at 50-dB level.

  10. 77 FR 6971 - Establishment of User Fees for Filovirus Testing of Nonhuman Primate Liver Samples

    Science.gov (United States)

    2012-02-10

    ... Director on March 15, 1990, that they must comply with specific isolation and quarantine standards under 42... biosafety level 4 laboratory (BSL-4). A BSL-4 laboratory is also required during part of the testing... and supervisory costs; and the costs of enforcement, collection, research, establishment of standards...

  11. Establishment of a rice-duck integrated farming system and its effects on soil fertility and rice disease control

    Science.gov (United States)

    Teng, Qing; Hu, Xue-Feng; Cheng, Chang; Luo, Zhi-Qing; Luo, Fan

    2015-04-01

    Rice-duck integrated farming is an ecological farming system newly established in some areas of southern China . It was reported that the ducks walking around the paddy fields is beneficial to control weed hazards and reduce rice pests and diseases. To study and evaluate the effects of the rice-duck integrated farming on soil fertility and rice disease control, a field experiment of rice cultivation was carried out in the suburb of Shanghai in 2014. It includes a treatment of raising ducks in the fields and a control without ducks. The treatment was implemented by building a duck coop nearby the experimental fields and driving 15 ducks into a plot at daytime since the early stage of rice growth. Each plot is 667 m2 in area. The treatment and control were replicated for three times. No any herbicides, pesticides, fungicides and chemical fertilizers were applied during the experiment to prevent any disturbance to duck growing and rice weed hazards and disease incidences from agrochemicals. The results are as follows: (1) The incidences of rice leaf rollers (Cnaphalocrocis medinalis) and stem borers treated with ducks, 0.45%and 1.18% on average, respectively, are lower than those of the control, 0.74% and 1.44% on average, respectively. At the late stage of rice growth, the incidence of rice sheath blight treated with ducks, 13.15% on average, is significantly lower than that of the control, 16.9% on average; and the incidence of rice planthoppers treated with ducks, 11.3 per hill on average, is also significantly lower than that of the control, 47.4 per hill on average. (2) The number of weeds in the plots treated with ducks, 8.3 per m2 on average, is significantly lower than that of the control, 87.5 m2 on average. (3) Raising ducks in the fields could also enhance soil enzyme activity and nutrient status. At the late stage of rice growth, the activities of urease, phosphatase, sucrase and catalase in the soils treated with ducks are 1.39 times, 1.40 times, 1

  12. Self-consistent field theory of collisions: Orbital equations with asymptotic sources and self-averaged potentials

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, Y.K., E-mail: ykhahn22@verizon.net

    2014-12-15

    The self-consistent field theory of collisions is formulated, incorporating the unique dynamics generated by the self-averaged potentials. The bound state Hartree–Fock approach is extended for the first time to scattering states, by properly resolving the principal difficulties of non-integrable continuum orbitals and imposing complex asymptotic conditions. The recently developed asymptotic source theory provides the natural theoretical basis, as the asymptotic conditions are completely transferred to the source terms and the new scattering function is made fullyintegrable. The scattering solutions can then be directly expressed in terms of bound state HF configurations, establishing the relationship between the bound and scattering state solutions. Alternatively, the integrable spin orbitals are generated by constructing the individual orbital equations that contain asymptotic sources and self-averaged potentials. However, the orbital energies are not determined by the equations, and a special channel energy fixing procedure is developed to secure the solutions. It is also shown that the variational construction of the orbital equations has intrinsic ambiguities that are generally associated with the self-consistent approach. On the other hand, when a small subset of open channels is included in the source term, the solutions are only partiallyintegrable, but the individual open channels can then be treated more simply by properly selecting the orbital energies. The configuration mixing and channel coupling are then necessary to complete the solution. The new theory improves the earlier continuum HF model. - Highlights: • First extension of HF to scattering states, with proper asymptotic conditions. • Orbital equations with asymptotic sources and integrable orbital solutions. • Construction of self-averaged potentials, and orbital energy fixing. • Channel coupling and configuration mixing, involving the new orbitals. • Critical evaluation of the

  13. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time-averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as mean square displacement, density, and analyse the behaviour of time-averaged characteristic function, which gives insight into rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.

  14. Trade unions and the economic performance of brazilian establishments

    Directory of Open Access Journals (Sweden)

    Naércio Aquino Menezes-Filho

    2008-03-01

    Full Text Available This paper examines, for the first time in the literature, the impact of trade unions on various performance indicators of Brazilian establishments. A unionism retrospective survey was carried out among 1,000 establishments in the manufacturing sector and its results were matched to performance indicators available from the Brazilian Industrial Surveys between 1990 and 2000. The results using the pooled data indicate that the relationship between unionism and some performance indicators, such as average wages, employment and productivity is non-linear (concave, so that a rise in unionism from low levels is associated with higher performance, but at a decreasing rate. Unions also reduce profitability. Establishments that introduced profit-sharing schemes increased their productivity and profitability overall and paid higher wages in more unionized plants.Este artigo analisa, pela primeira vez na literatura, o impacto dos sindicatos de trabalhadores em vários indicadores de desempenho econômico de firmas industriais brasileiras. Realizou-se uma pesquisa retrospectiva sobre a densidade sindical de 1000 estabelecimentos industriais brasileiros e seus resultados foram combinados aos indicadores de desempenho econômico da Pesquisa Industrial Anual (PIA de 1990 a 2000. Os resultados indicam que a relação entre a densidade sindical na firma e seus salários, emprego e produtividade, é não-linear, ou seja, um aumento no grau de sindicalização leva a um melhor desempenho, porém a taxas decrescentes. Observou-se, também, uma relação negativa entre sindicalização e rentabilidade. Finalmente, estabelecimentos que introduziram mecanismos de 'participação nos lucros' aumentaram sua produtividade e rentabilidade no período e pagaram maiores salários nas firmas onde o grau de sindicalização era maior.

  15. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  16. Infrared metaphysics: the elusive ontology of radiator (part 1)

    NARCIS (Netherlands)

    Leonelli, S.; Chang, H.

    2005-01-01

    Hardly any ontological result of modern science is more firmly established than the fact that infrared radiation differs from light only in wavelength; this is part of the modern conception of the continuous spectrum of electromagnetic radiation reaching from radio waves to gamma radiation. Yet,

  17. Establishment of diagnostic reference levels for dental panoramic radiography in Greece

    International Nuclear Information System (INIS)

    Manousaridis, G.; Koukorava, C.; Hourdakis, C.J.; Kamenopoulou, V.; Yakoumakis, E.; Tsiklakis, K.

    2015-01-01

    The purpose of the present study was to present the national diagnostic reference levels (DRL) established for panoramic dental examinations in Greece. The establishment of DRL, as a tool for the optimisation of radiological procedures, is a requirement of national regulations. Measurements performed by the Greek Atomic Energy Commission on 90 panoramic systems have been used for the derivation of DRL values. DRL values have been proposed for exposure settings of different patient types (child, small adult and standard adult), both for film and digital imaging. The DRLs for different patient types are grouped in three categories: children, small adults (corresponding to female) and average adults (corresponding to male). Proposed DRLs for these groups are 2.2, 3.3 and 4.1 mGy, respectively. In order to investigate the correlation of DRLs with the available imaging modalities (CR, DR and film), this parameter was taken into account. DR imaging DRL is the lowest at 3.5 mGy, CR imaging the highest at 4.2 mGy and film imaging at 3.7 mGy. In order to facilitate comparison with other studies, kerma-width product values were calculated from K i , air and field size. (authors)

  18. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)

  19. Factors affecting establishment success of the endangered Caribbean cactus Harrisia portoricensis (Cactaceae).

    Science.gov (United States)

    Rojas-Sandoval, Julissa; Meléndez-Ackerman, Elvia

    2012-06-01

    Early plant stages may be the most vulnerable within the life cycle of plants especially in arid ecosystems. Interference from exotic species may exacerbate this condition. We evaluated germination, seedling survival and growth in the endangered Caribbean cactus Harrisia portoricensis, as a function of sunlight exposure (i.e., growing under open and shaded areas), different shade providers (i.e., growing under two native shrubs and one exotic grass species), two levels of predation (i.e., exclusion and non-exclusion) and variable microenvironmental conditions (i.e., temperature, PAR, humidity). Field experiments demonstrated that suitable conditions for germination and establishment of H. portoricensis seedling are optimal in shaded areas beneath the canopy of established species, but experiments also demonstrated that the identity of the shade provider can have a significant influence on the outcome of these processes. Harrisia portoricensis seedlings had higher probabilities of survival and grew better (i.e., larger diameters) when they were transplanted beneath the canopy of native shrubs, than beneath the exotic grass species, where temperature and solar radiation values were on average much higher than those obtained under the canopies of native shrubs. We also detected that exclusion from potential predators did not increase seedling survival. Our combined results for H. portoricensis suggested that the modification of microenvironmental conditions by the exotic grass may lower the probability of recruitment and establishment of this endangered cactus species.

  20. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  1. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  2. 40 CFR 63.3546 - How do I establish the emission capture system and add-on control device operating limits during...

    Science.gov (United States)

    2010-07-01

    ... system and add-on control device operating limits during the performance test? 63.3546 Section 63.3546... device or system of multiple capture devices. The average duct static pressure is the maximum operating... Add-on Controls Option § 63.3546 How do I establish the emission capture system and add-on control...

  3. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  4. Limited by the host: Host age hampers establishment of holoparasite Cuscuta epithymum

    Science.gov (United States)

    Meulebrouck, Klaar; Verheyen, Kris; Brys, Rein; Hermy, Martin

    2009-07-01

    A good understanding of the relationship between plant establishment and the ecosystem of which they are part of is needed to conserve rare plant species. Introduction experiments offer a direct test of recruitment limitation, but generally only the seed germination and seedling phases are monitored. Thus the relative importance of different establishment stages in the process of recruitment is not considered. This is particularly true for parasitic plants where empirical data are generally missing. During two consecutive growing seasons we examined the effect of heathland management applications, degree of heathland succession (pioneer, building and mature phase) and seed-density on the recruitment and establishment of the endangered holoparasite Cuscuta epithymum. In general, recruitment after two growing seasons was low with 4.79% of the sown seeds that successfully emerged to the seedling stage and a final establishment of 89 flowering adults (i.e. <1.5% of the sown seeds). Although a higher seed-density resulted in a higher number of seedlings, seed-density did not significantly affected relative germination percentages. The management type and subsequent heath succession had no significant effect on seedling emergence; whereas, seedling attachment to the host, establishment and growth to full-grown size were hampered in older heath vegetation (i.e. high, dense, and mature canopy). Establishment was most successful in turf-cut pioneer heathland, characterised by a relatively open and low vegetation of young Calluna vulgaris. The age of C. vulgaris, C. epithymum's main host, proved to be the most limiting factor. These results emphasise the importance of site quality (i.e. successional phase of its host) on recruitment success of C. epithymum, which is directly affected by the management applied to the vegetation. Lack of any heathland management will thus seriously restrict establishment of the endangered parasite.

  5. The Average Temporal and Spectral Evolution of Gamma-Ray Bursts

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1999-01-01

    We have averaged bright BATSE bursts to uncover the average overall temporal and spectral evolution of gamma-ray bursts (GRBs). We align the temporal structure of each burst by setting its duration to a standard duration, which we call T left-angleDurright-angle . The observed average open-quotes aligned T left-angleDurright-angle close quotes profile for 32 bright bursts with intermediate durations (16 - 40 s) has a sharp rise (within the first 20% of T left-angleDurright-angle ) and then a linear decay. Exponentials and power laws do not fit this decay. In particular, the power law seen in the X-ray afterglow (∝T -1.4 ) is not observed during the bursts, implying that the X-ray afterglow is not just an extension of the average temporal evolution seen during the gamma-ray phase. The average burst spectrum has a low-energy slope of -1.03, a high-energy slope of -3.31, and a peak in the νF ν distribution at 390 keV. We determine the average spectral evolution. Remarkably, it is also a linear function, with the peak of the νF ν distribution given by ∼680-600(T/T left-angleDurright-angle ) keV. Since both the temporal profile and the peak energy are linear functions, on average, the peak energy is linearly proportional to the intensity. This behavior is inconsistent with the external shock model. The observed temporal and spectral evolution is also inconsistent with that expected from variations in just a Lorentz factor. Previously, trends have been reported for GRB evolution, but our results are quantitative relationships that models should attempt to explain. copyright copyright 1999. The American Astronomical Society

  6. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic

  7. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  8. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two year data taken near Blytheville, Arkansas (Lat. =35.9 0 N, Long. = 89.9 0 W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs

  9. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

    Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.

  10. 42 CFR 100.2 - Average cost of a health insurance policy.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Average cost of a health insurance policy. 100.2... VACCINE INJURY COMPENSATION § 100.2 Average cost of a health insurance policy. For purposes of determining..., less certain deductions. One of the deductions is the average cost of a health insurance policy, as...

  11. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    The data of personnel monitoring during 1985 and 1991 of personnel that work in health area were studied, obtaining a general overview of the value change of annual average equivalent dose. Two different aspects were presented: the analysis of annual average equivalent dose in the different sectors of a hospital and the comparison of these doses in the same sectors in different hospitals. (C.G.C.)

  12. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.

  13. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most

  14. Intensified Sampling in Response to a Salmonella Heidelberg Outbreak Associated with Multiple Establishments Within a Single Poultry Corporation.

    Science.gov (United States)

    Green, Alice; Defibaugh-Chavez, Stephanie; Douris, Aphrodite; Vetter, Danah; Atkinson, Richard; Kissler, Bonnie; Khroustalev, Allison; Robertson, Kis; Sharma, Yudhbir; Becker, Karen; Dessai, Uday; Antoine, Nisha; Allen, Latasha; Holt, Kristin; Gieraltowski, Laura; Wise, Matthew; Schwensohn, Colin

    2018-03-01

    On June 28, 2013, the Food Safety and Inspection Service (FSIS) was notified by the Centers for Disease Control and Prevention (CDC) of an investigation of a multistate cluster of illnesses of Salmonella enterica serovar Heidelberg. Since case-patients in the cluster reported consumption of a variety of chicken products, FSIS used a simple likelihood-based approach using traceback information to focus on intensified sampling efforts. This article describes the multiphased product sampling approach taken by FSIS when epidemiologic evidence implicated chicken products from multiple establishments operating under one corporation. The objectives of sampling were to (1) assess process control of chicken slaughter and further processing and (2) determine whether outbreak strains were present in products from these implicated establishments. As part of the sample collection process, data collected by FSIS personnel to characterize product included category (whole chicken and type of chicken parts), brand, organic or conventional product, injection with salt solutions or flavorings, and whether product was skinless or skin-on. From the period September 9, 2013, through October 31, 2014, 3164 samples were taken as part of this effort. Salmonella percent positive declined from 19.7% to 5.3% during this timeframe as a result of regulatory and company efforts. The results of intensified sampling for this outbreak investigation informed an FSIS regulatory response and corrective actions taken by the implicated establishments. The company noted that a multihurdle approach to reduce Salmonella in products was taken, including on-farm efforts such as environmental testing, depopulation of affected flocks, disinfection of affected houses, vaccination, and use of various interventions within the establishments over the course of several months.

  15. Assessing ballast treatment standards for effect on rate of establishment using a stochastic model of the green crab

    Directory of Open Access Journals (Sweden)

    Cynthia Cooper

    2012-03-01

    Full Text Available This paper describes a stochastic model used to characterize the probability/risk of NIS establishment from ships' ballast water discharges. Establishment is defined as the existence of a sufficient number of individuals of a species to provide for a sustained population of the organism. The inherent variability in population dynamics of organisms in their native or established environments is generally difficult to quantify. Muchqualitative information is known about organism life cycles and biotic and abiotic environmental pressures on the population, but generally little quantitative data exist to develop a mechanistic model of populations in such complex environments. Moreover, there is little quantitative data to characterize the stochastic fluctuations of population size over time even without accounting for systematic responses to biotic and abiotic pressures. This research applies an approach using life-stage density and fecundity measures reported in research to determine a stochastic model of an organism's population dynamics. The model is illustrated withdata from research studies on the green crab that span a range of habitats of the established organism and were collected over some years to represent a range of time-varying biotic and abiotic conditions that are expected to exist in many receiving environments. This model is applied to introductions of NIS at the IMO D-2 and the U.S. ballast water discharge standard levels designated as Phase Two in the United States Coast Guard'sNotice of Proposed Rulemaking. Under a representative range of ballast volumes discharged at U.S. ports, the average rate of establishment of green crabs for ballast waters treated to the IMO-D2 concentration standard (less than 10 organisms/m3 is predicted to be reduced to about a third the average rate from untreated ballast water discharge. The longevity of populations from the untreated ballast water discharges is expected to be reducedby about 90% by

  16. Establishing a hydrostratigraphic framework using palynology -- An example from the Savannah River Site, South Carolina, U.S.A.

    Energy Technology Data Exchange (ETDEWEB)

    Van Pelt, R.

    2000-01-27

    An essential part of the environmental assessments is the characterization of the subsurface hydrogeology. Hydrogeological characterization involves establishing the hydrologic and geologic conditions and incorporating this information into groundwater flow and contaminant transport models.

  17. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  18. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and...

  19. Event-by-event fluctuations of average transverse momentum in central Pb + Pb collisions at 158 GeV per nucleon

    CERN Document Server

    Appelshauser, H.; Bailey, S.J.; Barna, D.; Barnby, L.S.; Bartke, J.; Barton, R.A.; Betev, L.; Bialkowska, H.; Billmeier, A.; Blyth, C.O.; Bock, R.; Boimska, B.; Bormann, C.; Brady, F.P.; Brockmann, R.; Brun, R.; Buncic, P.; Caines, H.L.; Carr, L.D.; Cebra, D.A.; Cooper, G.E.; Cramer, J.G.; Cristinziani, M.; Csato, P.; Dunn, J.; Eckardt, V.; Eckhardt, F.; Ferguson, M.I.; Fischer, H.G.; Flierl, D.; Fodor, Z.; Foka, P.; Freund, P.; Friese, V.; Fuchs, M.; Gabler, F.; Gal, J.; Ganz, R.; Gazdzicki, M.; Geist, Walter M.; Gladysz, E.; Grebieszkow, J.; Gunther, J.; Harris, J.W.; Hegyi, S.; Henkel, T.; Hill, L.A.; Hummler, H.; Igo, G.; Irmscher, D.; Jacobs, P.; Jones, P.G.; Kadija, K.; Kolesnikov, V.I.; Kowalski, M.; Lasiuk, B.; Levai, P.; Malakhov, A.I.; Margetis, S.; Markert, C.; Melkumov, G.L.; Mock, A.; Molnar, J.; Nelson, John M.; Oldenburg, M.; Odyniec, G.; Palla, G.; Panagiotou, A.D.; Petridis, A.; Piper, A.; Porter, R.J.; Poskanzer, Arthur M.; Prindle, D.J.; Puhlhofer, F.; Reid, J.G.; Renfordt, R.; Retyk, W.; Ritter, H.G.; Rohrich, D.; Roland, C.; Roland, G.; Rudolph, H.; Rybicki, A.; Sammer, T.; Sandoval, A.; Sann, H.; Semenov, A.Yu.; Schafer, E.; Schmischke, D.; Schmitz, N.; Schonfelder, S.; Seyboth, P.; Seyerlein, J.; Sikler, F.; Skrzypczak, E.; Snellings, R.; Squier, G.T.A.; Stock, R.; Strobele, H.; Struck, Chr.; Susa, T.; Szentpetery, I.; Sziklai, J.; Toy, M.; Trainor, T.A.; Trentalange, S.; Ullrich, T.; Vassiliou, M.; Veres, G.; Vesztergombi, G.; Voloshin, S.; Vranic, D.; Wang, F.; Weerasundara, D.D.; Wenig, S.; Whitten, C.; Wienold, T.; Wood, L.; Xu, N.; Yates, T.A.; Zimanyi, J.; Zhu, X.Z.; Zybert, R.

    1999-01-01

    We present first data on event-by-event fluctuations in the average transverse momentum of charged particles produced in Pb+Pb collisions at the CERN SPS. This measurement provides previously unavailable information allowing sensitive tests of microscopic and thermodynamic collision models and to search for fluctuations expected to occur in the vicinity of the predicted QCD phase transition. We find that the observed variance of the event-by-event average transverse momentum is consistent with independent particle production modified by the known two-particle correlations due to quantum statistics and final state interactions and folded with the resolution of the NA49 apparatus. For two specific models of non-statistical fluctuations in transverse momentum limits are derived in terms of fluctuation amplitude. We show that a significant part of the parameter space for a model of isospin fluctuations predicted as a consequence of chiral symmetry restoration in

  20. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and is not dependent of the Earth magnetic model; but it is dependent on the magnetic torquers of the satellite which is not taken into consideration in the known mathematical models. Also the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that it has been in orbit for three years.

  1. Effects of fire and restoration seeding on establishment of squarrose knapweed (Centaurea virgata var. squarrosa)

    Science.gov (United States)

    Alison Whittaker; Scott L. Jensen

    2008-01-01

    Squarrose knapweed (Centaurea virgata var. squarrosa), herein referred to simply as knapweed, is a noxious weed that invades both disturbed and healthy sagebrush communities. Fire, grazing, mining, recreation, and farming have all played a large part in the establishment of knapweed in Tintic Valley, Utah. This study was designed to look at the...

  2. 18 CFR Appendix 1 to Part 301 - ASC Utility Filing Template

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false ASC Utility Filing Template 1 Appendix 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  3. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  4. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t,  t≥0} is a fixed realization, i.e., sample-path of the underlying stochastic process with state space S=(-∞,∞. Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow unrestricted parameter (time space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  5. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey mi-crocausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ duT uu , must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ duX uuu··· u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  6. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 \\pm 0.013 \\pm 0.022 ps.

  7. Harmonic Analysis of a Nonstationary Series of Temperature Paleoreconstruction for the Central Part of Greenland

    Directory of Open Access Journals (Sweden)

    T.E. Danova

    2016-06-01

    Full Text Available The results of the investigations of a transformed series of reconstructed air temperature data for the central part of Greenland with an increment of 30 years have been presented. Stationarization of a ~ 50,000-years’ series of the reconstructed air temperature in the central part of Greenland according to ice core data has been performed using mathematical expectation. To obtain mathematical expectation estimation, the smoothing procedure by the methods of moving average and wavelet analysis has been carried out. Fourier’s transformation has been applied repeatedly to the stationarized series with changing the averaging time in the process of smoothing. Three averaging time values have been selected for the investigations: ~ 400–500 years, ~ 2,000 years, and ~ 4,000 years. Stationarization of the reconstructed temperature series with the help of wavelet transformation showed the best results when applying the averaging time of ~ 400 and ~ 2000 years, the trends well characterize the initial temperature series, there-by revealing the main patterns of its dynamics. Using the period with the averaging time of ~ 4,000 years showed the worst result: significant events of the main temperature series were lost in the process of averaging. The obtained results well correspond to cycling known to be inherent to the climatic system of the planet; the detected modes of 1,470 ± 500 years are comparable to the Dansgaard–Oeschger and Bond oscillations.

  8. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail

    2015-01-01

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees

  9. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and Hermite expansion and Laguerre–Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method

  10. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  11. Establishment of Database System for Radiation Oncology

    International Nuclear Information System (INIS)

    Kim, Dae Sup; Lee, Chang Ju; Yoo, Soon Mi; Kim, Jong Min; Lee, Woo Seok; Kang, Tae Young; Back, Geum Mun; Hong, Dong Ki; Kwon, Kyung Tae

    2008-01-01

    To enlarge the efficiency of operation and establish a constituency for development of new radiotherapy treatment through database which is established by arranging and indexing radiotherapy related affairs in well organized manner to have easy access by the user. In this study, Access program provided by Microsoft (MS Office Access) was used to operate the data base. The data of radiation oncology was distinguished by a business logs and maintenance expenditure in addition to stock management of accessories with respect to affairs and machinery management. Data for education and research was distinguished by education material for department duties, user manual and related thesis depending upon its property. Registration of data was designed to have input form according to its subject and the information of data was designed to be inspected by making a report. Number of machine failure in addition to its respective repairing hours from machine maintenance expenditure in a period of January 2008 to April 2009 was analyzed with the result of initial system usage and one year after the usage. Radiation oncology database system was accomplished by distinguishing work related and research related criteria. The data are arranged and collected according to its subjects and classes, and can be accessed by searching the required data through referring the descriptions from each criteria. 32.3% of total average time was reduced on analyzing repairing hours by acquiring number of machine failure in addition to its type in a period of January 2008 to April 2009 through machine maintenance expenditure. On distinguishing and indexing present and past data upon its subjective criteria through the database system for radiation oncology, the use of information can be easily accessed to enlarge the efficiency of operation, and in further, can be a constituency for improvement of work process by acquiring various information required for new radiotherapy treatment in real time.

  12. 10 CFR Appendix II to Part 960 - NRC and EPA Requirements for Preclosure Repository Performance

    Science.gov (United States)

    2010-01-01

    ...) that, except for variances permitted for unusual operations under Section 191.04 as an upper limit..., and schedule. 10 CFR part 20 establishes (a) exposure limits for operating personnel and (b... necessary to ensure consistency with 10 CFR part 60. ...

  13. Startup activities of established Finnish companies

    OpenAIRE

    Saalasti, Sini

    2016-01-01

    Established companies have collaborated with startups for decades in order to enhance their capabilities in technology and innovation. However, in the recent years, the changes in the business environment have induced established companies to increase their collaboration with startups. Thus, startup activities of established companies have become a timely phenomenon. This study explores the startup activities of established companies by analyzing all the activity established companies conduct...

  14. The Mixed Waste Management Facility: Technology selection and implementation plan, Part 2, Support processes

    International Nuclear Information System (INIS)

    Streit, R.D.; Couture, S.A.

    1995-03-01

    The purpose of this document is to establish the foundation for the selection and implementation of technologies to be demonstrated in the Mixed Waste Management Facility, and to select the technologies for initial pilot-scale demonstration. Criteria are defined for judging demonstration technologies, and the framework for future technology selection is established. On the basis of these criteria, an initial suite of technologies was chosen, and the demonstration implementation scheme was developed. Part 1, previously released, addresses the selection of the primary processes. Part II addresses process support systems that are considered ''demonstration technologies.'' Other support technologies, e.g., facility off-gas, receiving and shipping, and water treatment, while part of the integrated demonstration, use best available commercial equipment and are not selected against the demonstration technology criteria

  15. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  16. An application of commercial data averaging techniques in pulsed photothermal experiments

    International Nuclear Information System (INIS)

    Grozescu, I.V.; Moksin, M.M.; Wahab, Z.A.; Yunus, W.M.M.

    1997-01-01

    We present an application of data averaging technique commonly implemented in many commercial digital oscilloscopes or waveform digitizers. The technique was used for transient data averaging in the pulsed photothermal radiometry experiments. Photothermal signals are surrounded by an important amount of noise which affect the precision of the measurements. The effect of the noise level on photothermal signal parameters in our particular case, fitted decay time, is shown. The results of the analysis can be used in choosing the most effective averaging technique and estimating the averaging parameter values. This would help to reduce the data acquisition time while improving the signal-to-noise ratio

  17. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although k-dependence Bayesian (KDB classifier can construct at arbitrary points (values of k along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB classifiers, will average the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB, tree augmented naive Bayes (TAN, Averaged one-dependence estimators (AODE, and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  18. Establishment of ruminal enzyme activities and fermentation capacity in dairy calves from birth through weaning.

    Science.gov (United States)

    Rey, M; Enjalbert, F; Monteils, V

    2012-03-01

    The objectives of this study were to characterize the establishment of ruminal fermentation and enzymatic activities in dairy calves from birth to weaning (d 83). Six Holstein calves, immediately separated from their mother at birth, were fed colostrum for 3 d after birth, and thereafter milk replacer, starter pelleted concentrate, and hay until d 83 of age. Ruminal samples were collected from each calf every day for the first 10 d, and additionally at d 12, 15, 19, 22, 26, 29, 33, 36, 40, 43, 47, 50, 55, 62, 69, and 83. Ruminal samples were collected 1h after milk feeding with a stomach tube. The pH and redox potential (E(h)) were immediately measured. Samples were kept for further determination of ammonia nitrogen (NH(3)-N) and volatile fatty acid (VFA) concentrations, and xylanase, amylase, urease, and protease activities. Ruminal pH averaged 6.69, 5.82, and 6.34, from d 1 to 9, d 10 to 40, and d 43 to 83 of age, respectively. At first day of life, the ruminal E(h) value was positive (+224 mV). From d 2 to 9, d 10 to 40, and d 43 to 83 of age, ruminal E(h) averaged -164, -115, and -141 mV, respectively. From d 1 to 3, d 4 to 22, and d 26 to 83 of age, NH(3)-N concentration averaged 60.1, 179.8, and 58.2 mg/L, respectively. No VFA were detected in ruminal samples collected on d 1 of life of calves. From d 2 to 10 and d 12 to 83 of age, ruminal total VFA concentration averaged 19.5 and 84.4mM, respectively. Neither ruminal xylanase or amylase activities were observed at d 1 of age. From d 5 to 15 and d 19 to 83 of age, the xylanase activity averaged 182.2 and 62.4 μmol of sugar released per hour per gram of ruminal content dry matter (DM), respectively. From d 5 to 83 of age, the amylase activity reached 35.4 μmol of sugar released per hour per gram of ruminal content DM. The ruminal ureolytic activity was observed with an average value of 6.9 μg of NH(3)-N released per minute per gram of ruminal content DM over the 83-d experimental period. From d 1 to 4 and d

  19. Determination of radon in soil and water in parts of Accra, and generation of preliminary radon map for Ghana

    International Nuclear Information System (INIS)

    Osei, Peter

    2016-07-01

    The research was focused on determining the radon levels in soil and water in parts of Accra, generate a preliminary radon map for Ghana and estimate a pilot reference level for the country, using the data obtained from this research and collated data from other researchers. The radon gas measurement was done with the passive method, using the SSNTDs which are sensitive to alpha particles emitted by radon. Cellulose nitrate LR – 115 type II alpha particle detectors were used. The detectors were chemically etched in a 2.5 M NaOH solution at a temperature of 60 °C for 90 minutes, after two weeks and two months of exposure to soil and water respectively. The images of the etched detectors were acquired by means of a scanner and then tracks counted with ImageJ software. Inverse Distance Weighing (IDW) method of ArcGIS 10.2 was used to spatially distribute the radon concentration on a map. The average soil radon concentration in the study area ranges from 0.191 kBqm"-"3 to 3.416 kBqm"-"3 with a mean of 1.193 kBqm"-"3. The radon concentration in water from the study area ranges from 0.00346 BqL"-"1 to 0.00538 BqL"-"1 with an average of 0.00456 BqL"-"1. A strong negative correlation has been established between radon in soil and water in the study area. The preliminary national average indoor, water and soil radon concentrations are 137 Bqm"-"3, 361.93 Bqm"-"3 and 3716.74 Bqm"-"3 respectively. The average levels of water and indoor radon exceeded WHO’s reference level of 100 Bqm"-"3. Accordingly, the pilot national indoor radon reference level for Ghana is set as 200 Bqm"-"3. (au)

  20. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    Science.gov (United States)

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30 year average); the second three years experienced reduced rainfall. The 4-month periods prior to application plus the following 4 months after application were characterized by 1039 +/- 148 mm of rainfall for 1995-1997 and by 674 +/- 108 mm for 1998-2000. During the normal rainfall years 216 +/- 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 +/- 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 +/- 18.2 mm of runoff (92% less than the normal years) and 45.1 +/- 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent of application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes

  1. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also for the case that there is a neutron gas instead of vacuum on the one side of the plane surface. The calculations were performed with the Thomas-Fermi Model of Syler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetry of the nucleon-matter ranging from symmetry beyond the neutron-drip line until the system no longer can maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model which then was fitted to experimental masses. (orig.)

  2. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  3. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be infered from the evolution equation. (orig.)

  4. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is concerned in the research of complex networks and is related to Wiener sum which is a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain their asymptotic formula for average distances. To provide the formula, we develop some technique named finite patterns of integral of geodesic distance on self-similar measure for the Sierpinski tetrahedron.

  5. Determination of the in-core power and the average core temperature of low power research reactors using gamma dose rate measurements

    International Nuclear Information System (INIS)

    Osei Poku, L.

    2012-01-01

    Most reactors incorporate out-of-core neutron detectors to monitor the reactor power. An accurate relationship between the powers indicated by these detectors and actual core thermal power is required. This relationship is established by calibrating the thermal power. The most common method used in calibrating the thermal power of low power reactors is neutron activation technique. To enhance the principle of multiplicity and diversity of measuring the thermal neutron flux and/or power and temperature difference and/or average core temperature of low power research reactors, an alternative and complimentary method has been developed, in addition to the current method. Thermal neutron flux/Power and temperature difference/average core temperature were correlated with measured gamma dose rate. The thermal neutron flux and power predicted using gamma dose rate measurement were in good agreement with the calibrated/indicated thermal neutron fluxes and powers. The predicted data was also good agreement with thermal neutron fluxes and powers obtained using the activation technique. At an indicated power of 30 kW, the gamma dose rate measured predicted thermal neutron flux of (1* 10 12 ± 0.00255 * 10 12 ) n/cm 2 s and (0.987* 10 12 ± 0.00243 * 10 12 ) which corresponded to powers of (30.06 ± 0.075) kW and (29.6 ± 0.073) for both normal level of the pool water and 40 cm below normal levels respectively. At an indicated power of 15 kW, the gamma dose rate measured predicted thermal neutron flux of (5.07* 10 11 ± 0.025* 10 11 ) n/cm 2 s and (5.12 * 10 11 ±0.024* 10 11 ) n/cm 2 s which corresponded to power of (15.21 ± 0.075) kW and (15.36 ± 0.073) kW for both normal levels of the pool water and 40 cm below normal levels respectively. The power predicted by this work also compared well with power obtained from a three-dimensional neutronic analysis for GHARR-1 core. The predicted power also compares well with calculated power using a correlation equation obtained from

  6. Establishing student perceptions of an entrepreneur using word associations

    Directory of Open Access Journals (Sweden)

    Jasmine E. Goliath

    2014-05-01

    Research purpose: To identify the image or perceptions that students have of an entrepreneur. Motivation for study: By establishing the image or perceptions that students have of an entrepreneur, insights could be provided into the factors influencing them to become entrepreneurs or not. Research approach, design and method: A qualitative projective technique, namely continuous word association, was adopted. Convenience sampling was used and 163 students participated. The words generated were coded into categories by searching for themes and words of a similar nature. The total words generated, the frequencies of recurring words, the number of different types of words, first words recalled and the average number of words recalled were established. Main findings: The students participating in the study have a good understanding of the general nature of an entrepreneur and entrepreneurship; an entrepreneur is perceived as someone who is a creative and innovative risk-taker, who owns a business involved in the selling of goods and services. Practical/managerial implications: Future entrepreneurs need to be aware that, in addition to several innate attributes, successful entrepreneurs have learned skills and competencies. It is also important that educators of entrepreneurship create a realistic image of what it is like to be an entrepreneur, and that both positive and negative aspects are highlighted. Contribution/value-add: By identifying the image or perceptions of an entrepreneur held by students, the marketing of entrepreneurship as a desirable career choice can be enhanced.

  7. The influence of the plantation establishment method on the yield of marshmallow (Althaea officinalis L. flowers

    Directory of Open Access Journals (Sweden)

    Sylwia Andruszczak

    2012-12-01

    Full Text Available The field experiment with one- and two-year-old marshmallow plants was carried out in Zamość on brown soil of loess origin in 2002-2004. There were four methods of plantation establishment: 1 direct sowing in the field (control object; 2 direct sowing in the field with cover of polypropylene sheet; 3 by seedlings from a plastic house; 4 by seedlings produced in multi-cell propagation trays. It was found that, in the case of one-year-old plants, all the methods of plantation establishment significantly increased the yield and the number of marshmallow flowers, as compared to the control object, but the best results were obtained when the plants were propagated from seedlings produced in multi-cell trays. Taking into account the two-year-old plants, no significant impact of the plantation establishment method on flower yield was found. On average, total yields of flowers varied from 17.2 dt×ha-1 in the first year of vegetation to 27.8 dt×ha-1 in the case of the two-year-old plants.

  8. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  9. Candidates Profile in FUVEST Exams from 2004 to 2013: Private and Public School Distribution, FUVEST Average Performance and Chemical Equilibrium Tasks Performance

    Directory of Open Access Journals (Sweden)

    R.S.A.P. Oliveira

    2014-08-01

    Full Text Available INTRODUCTION. Chemical equilibrium is recognized as a topic of several misconceptions. Its origins must be tracked from previous scholarship. Its impact on biochemistry learning is not fully described. A possible bulk of data is the FUVEST exam. OBJECTIVES: Identify students’ errors profile on chemical equilibrium tasks using public data from FUVEST exam. MATERIAL AND METHODS: Data analysis from FUVEST were: i Private and Public school distribution in Elementary and Middle School, and High School candidates of Pharmacy-Biochemistry course and total USP careers until the last call for enrollment (2004-2013; ii Average performance in 1st and 2nd parts of FUVEST exam of Pharmacy-Biochemistry, Chemistry, Engineering, Biological Sciences, Languages and Medicine courses and total enrolled candidates until 1st call for enrollment (2008- 2013; iii Performance of candidates of Pharmacy-Biochemistry, Chemistry, Engineering, Biological Sciences, Languages and Medicine courses and total USP careers in chemical equilibrium issues from 1st part of FUVEST (2011-2013. RESULTS AND DISCUSSION: i 66.2% of candidates came from private Elementary-Middle School courses and 71.8%, came from High School courses; ii Average grade over the period for 1st and 2nd FUVEST parts are respectively (in 100 points: Pharmacy-Biochemistry 66.7 and 61.2, Chemistry 65.9 and 58.9, Engineering 75.9 and 71.9, Biological Sciences 65.6 and 54.6, Languages 49.9 and 43.3, Medicine 83.5 and 79.5, total enrolled candidates 51,5 and 48.9; iii Four chemical equilibrium issues were found during 2011-2013 and the analysis of multiplechoice percentage distribution over the courses showed that there was a similar performance of students among them, except for Engineering and Medicine with higher grades, but the same proportional distribution among choices. CONCLUSION: Approved students came majorly from private schools. There was a different average performance among courses and similar on

  10. Systematic approach to peak-to-average power ratio in OFDM

    Science.gov (United States)

    Schurgers, Curt

    2001-11-01

    OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, and require no extensive equalization, yet offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exist to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as a hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.

  11. Are average and symmetric faces attractive to infants? Discrimination and looking preferences.

    Science.gov (United States)

    Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison

    2002-01-01

    Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.

  12. Fission neutron spectrum averaged cross sections for threshold reactions on arsenic

    International Nuclear Information System (INIS)

    Dorval, E.L.; Arribere, M.A.; Kestelman, A.J.; Comision Nacional de Energia Atomica, Cuyo Nacional Univ., Bariloche; Ribeiro Guevara, S.; Cohen, I.M.; Ohaco, R.A.; Segovia, M.S.; Yunes, A.N.; Arrondo, M.; Comision Nacional de Energia Atomica, Buenos Aires

    2006-01-01

    We have measured the cross sections, averaged over a 235 U fission neutron spectrum, for the two high threshold reactions: 75 As(n,p) 75 mGe and 75 As(n,2n) 74 As. The measured averaged cross sections are 0.292±0.022 mb, referred to the 3.95±0.20 mb standard for the 27 Al(n,p) 27 Mg averaged cross section, and 0.371±0.032 mb referred to the 111±3 mb standard for the 58 Ni(n,p) 58m+g Co averaged cross section, respectively. The measured averaged cross sections were also evaluated semi-empirically by numerically integrating experimental differential cross section data extracted for both reactions from the current literature. The calculations were performed for four different representations of the thermal-neutron-induced 235 U fission neutron spectrum. The calculated cross sections, though depending on analytical representation of the flux, agree with the measured values within the estimated uncertainties. (author)

  13. 14 CFR Appendix A to Part 150 - Noise Exposure Maps

    Science.gov (United States)

    2010-01-01

    ... 150 under § 150.11. For purposes of this part, the tolerances allowed for general purpose, type 2... Y Y Y Y Recreational Outdoor sports arenas and spectator sports Y Y(5) Y(5) N N N Outdoor music... average-daily-basis, the number of aircraft, by type of aircraft, which utilize each flight track, in both...

  14. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related with model averaging, and then evaluates two policies, i.e. West Development Drive in China and fiscal decentralization in U.S, using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  15. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...

  16. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate average level spacing from a set of resolved resonance parameters by using Bayesian approach is given. Using the information given in the distributions of both levels spacing and neutron width, the level missing in measured sample can be corrected more precisely so that better estimate for average level spacing can be obtained by this method. The calculation of s-wave resonance has been done and comparison with other work was carried out

  17. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.

  18. Average cross sections for the 252Cf neutron spectrum

    International Nuclear Information System (INIS)

    Dezso, Z.; Csikai, J.

    1977-01-01

    A number of average cross sections have been measured for 252 Cf neutrons in (n, γ), (n,p), (n,2n), (n,α) reactions by the activation method and for fission by fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. The data as a function of target neutron number increases up to about N=60 with minimum near to dosed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing the threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Zsup(4/3)/A are shown. The results obtained are summarized in tables

  19. Perceived Average Orientation Reflects Effective Gist of the Surface.

    Science.gov (United States)

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  20. A survey on the establishment of nuclear data network

    International Nuclear Information System (INIS)

    Yang, M. H.; Kim, H. J.; Chang, J. H.

    1999-01-01

    In Korea, there is a steady increase in the use of nuclear data due to diversification and activation of nuclear R and D activities, but effort toward construction of domestic Nuclear Data Network (NDN) is laid just in the beginning stage. A questionnaire survey of nuclear data users'opinion on a scheme of NDN establishment was, therefore, conducted for promoting the efficient production, evaluation and utilization of nuclear data. The survey was carried out through internet and mail, and 233 users of nuclear data responded the questionnaire. The survey results showed that most of nuclear data users (89% of respondents) perceived the necessity of NDN. The 50% of respondents preferred a nuclear data users' study-group for the establishment of NDN, while 42% preferred an operation of NDN as part of research and scientific activities. The 86% of respondents answered that KAERI could be a proper organization for the establishment and operation of NDN center. The respondents also answered that major considerations taken into account in the establisment of NDN should be as follows; construction of database system of nuclear data (38%), information share among nuclear data users (36%) and in-depth research on nuclear data production and evaluation (25%) and so on. Finally, the survey results showed that major functions of NDN center should be (1) share of nuclear data information among users (80% of respondents), (2) an intergrated management of imported or acquisited nuclear data abroad (78%), (3) production and evaluation of nuclear data (73%), and (4) support of nuclear data untilization (67%). (author)

  1. Radiation damage in the mouse female germ cell: a two-part study

    International Nuclear Information System (INIS)

    Wuebbles, B.

    1978-07-01

    In 1977 the average annual airborne gross beta activity in Livermore Valley air samples was 1.2 x 10 -13 μCi/ml, nearly twice the average observed during 1976 (7.6 x 10 -14 μCi/ml). The increase was partly due to an atmosphere nuclear test by the Peoples Republic of China. Concentrations of various radionuclides ( 235 U, 238 U, 137 Cs, 238 Pu, 239 Pu, and tritium in samples of surface air, surface waters, groundwater, soils, and food are reported. Results of aerial radiological surveys are included

  2. Assessment of the Average Price and Ethanol Content of Alcoholic Beverages by Brand – United States, 2011

    Science.gov (United States)

    DiLoreto, Joanna T.; Siegel, Michael; Hinchey, Danielle; Valerio, Heather; Kinzel, Kathryn; Lee, Stephanie; Chen, Kelsey; Shoaff, Jessica Ruhlman; Kenney, Jessica; Jernigan, David H.; DeJong, William

    2011-01-01

    Background There are no existing data on alcoholic beverage prices and ethanol content at the level of alcohol brand. A comprehensive understanding of alcohol prices and ethanol content at the brand level is essential for the development of effective public policy to reduce alcohol use among underage youth. The purpose of this study was to comprehensively assess alcoholic beverage prices and ethanol content at the brand level. Methods Using online alcohol price data from 15 control states and 164 online alcohol stores, we estimated the average alcohol price and percentage alcohol by volume for 900 brands of alcohol, across 17 different alcoholic beverage types, in the United States in 2011. Results There is considerable variation in both brand-specific alcohol prices and ethanol content within most alcoholic beverage types. For many types of alcohol, the within-category variation between brands exceeds the variation in average price and ethanol content among the several alcoholic beverage types. Despite differences in average prices between alcoholic beverage types, in 12 of the 16 alcoholic beverage types, customers can purchase at least one brand of alcohol that is under one dollar per ounce of ethanol. Conclusions Relying on data or assumptions about alcohol prices and ethanol content at the level of alcoholic beverage type is insufficient for understanding and influencing youth drinking behavior. Surveillance of alcohol prices and ethanol content at the brand level should become a standard part of alcohol research. PMID:22316218

  3. Quantum systems related to root systems and radial parts of Laplace operators

    OpenAIRE

    Olshanetsky, M. A.; Perelomov, A. M.

    2002-01-01

    The relation between quantum systems associated to root systems and radial parts of Laplace operators on symmetric spaces is established. From this it follows the complete integrability of some quantum systems.

  4. Peak-summer East Asian rainfall predictability and prediction part II: extratropical East Asia

    Science.gov (United States)

    Yim, So-Young; Wang, Bin; Xing, Wen

    2016-07-01

    The part II of the present study focuses on northern East Asia (NEA: 26°N-50°N, 100°-140°E), exploring the source and limit of the predictability of the peak summer (July-August) rainfall. Prediction of NEA peak summer rainfall is extremely challenging because of the exposure of the NEA to midlatitude influence. By examining four coupled climate models' multi-model ensemble (MME) hindcast during 1979-2010, we found that the domain-averaged MME temporal correlation coefficient (TCC) skill is only 0.13. It is unclear whether the dynamical models' poor skills are due to limited predictability of the peak-summer NEA rainfall. In the present study we attempted to address this issue by applying predictable mode analysis method using 35-year observations (1979-2013). Four empirical orthogonal modes of variability and associated major potential sources of variability are identified: (a) an equatorial western Pacific (EWP)-NEA teleconnection driven by EWP sea surface temperature (SST) anomalies, (b) a western Pacific subtropical high and Indo-Pacific dipole SST feedback mode, (c) a central Pacific-El Nino-Southern Oscillation mode, and (d) a Eurasian wave train pattern. Physically meaningful predictors for each principal component (PC) were selected based on analysis of the lead-lag correlations with the persistent and tendency fields of SST and sea-level pressure from March to June. A suite of physical-empirical (P-E) models is established to predict the four leading PCs. The peak summer rainfall anomaly pattern is then objectively predicted by using the predicted PCs and the corresponding observed spatial patterns. A 35-year cross-validated hindcast over the NEA yields a domain-averaged TCC skill of 0.36, which is significantly higher than the MME dynamical hindcast (0.13). The estimated maximum potential attainable TCC skill averaged over the entire domain is around 0.61, suggesting that the current dynamical prediction models may have large rooms to improve

  5. Green Suppliers Performance Evaluation in Belt and Road Using Fuzzy Weighted Average with Social Media Information

    Directory of Open Access Journals (Sweden)

    Kuo-Ping Lin

    2017-12-01

    Full Text Available A decision model for selecting a suitable supplier is a key to reducing the environmental impact in green supply chain management for high-tech companies. Traditional fuzzy weight average (FWA adopts linguistic variable to determine weight by experts. However, the weights of FWA have not considered the public voice, meaning the viewpoints of consumers in green supply chain management. This paper focuses on developing a novel decision model for green supplier selection in the One Belt and One Road (OBOR initiative through a fuzzy weighted average approach with social media. The proposed decision model uses the membership grade of the criteria and sub-criteria and its relative weights, which consider the volume of social media, to establish an analysis matrix of green supplier selection. Then, the proposed fuzzy weighted average approach is considered as an aggregating tool to calculate a synthetic score for each green supplier in the Belt and Road initiative. The final score of the green supplier is ordered by a non-fuzzy performance value ranking method to help the consumer make a decision. A case of green supplier selection in the light-emitting diode (LED industry is used to demonstrate the proposed decision model. The findings demonstrate (1 the consumer’s main concerns are the “Quality” and “Green products” in LED industry, hence, the ranking of suitable supplier in FWA with social media information model obtained the difference result with tradition FWA; (2 OBOR in the LED industry is not fervently discussed in searches of Google and Twitter; and (3 the FWA with social media information could objectively analyze the green supplier selection because the novel model considers the viewpoints of the consumer.

  6. 39 CFR 3.10. - Establishment of rates and classes of competitive products not of general applicability.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Establishment of rates and classes of competitive... proposed changes in classes; and (2) Management analysis demonstrating compliance with the standards of 39... proceedings of the Governors, and any supporting documentation required by 39 CFR Part 3015, to be filed with...

  7. Facilitating age diversity in organizations ‐ part II: managing perceptions and interactions

    NARCIS (Netherlands)

    Annet de Lange; Jürgen Deller; Beatrice van der Heijden; Guido Hertel

    2013-01-01

    Purpose ‐ Due to demographic changes in most industrialized countries, the average age of working people is continuously increasing, and the workforce is becoming more age-diverse. This review, together with the earlier JMP Special Issue "Facilitating age diversity in organizations ‐ part I:

  8. Facilitating age diversity in organizations – part II: managing perceptions and interactions

    NARCIS (Netherlands)

    Hertel, Guido; van der Heijden, Beatrice; de Lange, Annet H.; Deller, Jürgen

    2013-01-01

    Purpose – Due to demographic changes in most industrialized countries, the average age of working people is continuously increasing, and the workforce is becoming more age-diverse. This review, together with the earlier JMP Special Issue “Facilitating age diversity in organizations – part I:

  9. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  10. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  11. Comparison of blood loss between using non central part cutting knee prosthesis and distal central part cutting.

    Science.gov (United States)

    Malairungsakul, Anan

    2014-12-01

    Patients who undergo knee replacement surgery may need to receive a blood transfusion due to blood loss during the operation. Therefore it was important to improve the design of knee implant operative procedures in an attempt to reduce the rate of blood loss. The present study aimed to compare the blood loss between two types of knee replacement surgery. This is a retrospective study in which 78 patients received cemented knee replacements in Phayao Hospital between October 2010 and March 2012. There were two types of surgical procedure: 1) using an implant position covering the end of the femoral bone without cutting into the central part of the distal femoral, 2) using an implant position covering the end of the femoral bone cutting the central part of the distal femoral. Blood loss, blood transfusion, hemoglobin and hematocrit were recorded preoperatively, immediately postsurgery and 48 hours after surgery. Findings revealed that the knee replacement surgery using the implant position covering the end of the femoral bone without cutting the central part of the distal femoral significantly lowered the rate of blood loss when compared to using the implant position covering the end of the femoral bone with central cutting of the distal femor. The average blood loss during the operation without cutting at the central part of distal femoral was 49.50 ± 11.11 mL; whereas the operation cutting the central part of the distal femoral was 58.50 ± 11.69 mL. As regards blood loss, the knee replacement surgery using the implant position covering the end ofthefemoral bone without cutting the central part of distal femor was better than using the implant position covering the end of the femoral bone cutting at the central part of the distal femor.

  12. 19 CFR 10.310 - Election to average for motor vehicles.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Election to average for motor vehicles. 10.310... Free Trade Agreement § 10.310 Election to average for motor vehicles. (a) Election. In determining whether a motor vehicle is originating for purposes of the preferences under the Agreement or a Canadian...

  13. Maximising the local development potential of Nature Tourism accommodation establishments in South Africa

    Directory of Open Access Journals (Sweden)

    Jayne M Rogerson

    2014-01-01

    Full Text Available Within extant scholarship on tourism and local development one knowledge gap concerns the role of the accommodation sector as a base for tourism-led local development in rural areas and small towns. The focus is upon nature tourism accommodation establishments which cluster mainly in geographically marginal areas in South Africa where poverty levels are high and the imperative exists for new drivers of economic and social development. A national audit of nature tourism accommodation establishments confirms their potential critical relevance for local development planning in many parts of the country. Nevertheless, existing evidence points to limitations in local linkages through the food supply chain. A critical review is given of several constraints which impact upon tourism-agriculture linkages with policy conclusions for strengthening such linkages.

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  15. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density . For a which stems from a physical ground state we prove that for r ≥  0. This article may be reproduced in its entirety for non-commercial purposes.......We investigate the positivity of the spherically averaged atomic one-electron density . For a which stems from a physical ground state we prove that for r ≥  0. This article may be reproduced in its entirety for non-commercial purposes....

  16. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    Full Text Available The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4 and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.

  17. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  18. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  19. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  20. An Experimental Study Related to Planning Abilities of Gifted and Average Students

    Directory of Open Access Journals (Sweden)

    Marilena Z. Leana-Taşcılar

    2016-02-01

    Full Text Available Gifted students differ from their average peers in psychological, social, emotional and cognitive development. One of these differences in the cognitive domain is related to executive functions. One of the most important executive functions is planning and organization ability. The aim of this study was to compare planning abilities of gifted students with those of their average peers and to test the effectiveness of a training program on planning abilities of gifted students and average students. First, students’ intelligence and planning abilities were measured and then assigned to either experimental or control group. The groups were matched by intelligence and planning ability (experimental: (13 gifted and 8 average; control: 14 gifted and 8 average. In total 182 students (79 gifted and 103 average participated in the study. Then, a training program was implemented in the experimental group to find out if it improved students’ planning ability. Results showed that boys had better planning abilities than girls did, and gifted students had better planning abilities than their average peers did. Significant results were obtained in favor of the experimental group in the posttest scores