WorldWideScience

Sample records for maximum cumulative ratio

  1. An Analysis of Cumulative Risks Indicated by Biomonitoring Data of Six Phthalates Using the Maximum Cumulative Ratio

    Science.gov (United States)

    The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single component of a chemical mixture drives the cumulative risk of a receptor. This study used the MCR, the Hazard Index (HI) and Hazard Quotient (HQ) to evaluate co-exposures to six phthalates using biomonito...

  2. An analysis of cumulative risks based on biomonitoring data for six phthalates using the Maximum Cumulative Ratio

    Science.gov (United States)

    The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single chemical drives the cumulative risk of an individual exposed to multiple chemicals. Phthalates are a class of chemicals with ubiquitous exposures in the general population that have the potential to cause ...
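For readers unfamiliar with the metric: the MCR is simply the hazard index (the sum of the individual hazard quotients) divided by the largest single hazard quotient. A minimal illustrative sketch, not the authors' code; the example numbers are invented:

```python
def maximum_cumulative_ratio(hazard_quotients):
    """MCR = HI / max(HQ): how much mixture risk exceeds the top single chemical.

    A value near 1 means one chemical dominates the cumulative risk; a value
    near the number of components means risk is spread evenly across them.
    """
    if not hazard_quotients or min(hazard_quotients) < 0:
        raise ValueError("hazard quotients must be non-negative and non-empty")
    hazard_index = sum(hazard_quotients)           # HI: sum of individual HQs
    return hazard_index / max(hazard_quotients)    # MCR

# One phthalate dominating the mixture gives an MCR close to 1:
mcr = maximum_cumulative_ratio([0.50, 0.05, 0.05])
```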

  3. Gravitational wave chirp search: no-signal cumulative distribution of the maximum likelihood detection statistic

    International Nuclear Information System (INIS)

    Croce, R P; Demma, Th; Longo, M; Marano, S; Matta, V; Pierro, V; Pinto, I M

    2003-01-01

    The cumulative distribution of the supremum of a set (bank) of correlators is investigated in the context of maximum likelihood detection of gravitational wave chirps from coalescing binaries with unknown parameters. Accurate (lower-bound) approximants are introduced based on a suitable generalization of previous results by Mohanty. Asymptotic properties (in the limit where the number of correlators goes to infinity) are highlighted. The validity of numerical simulations made on small-size banks is extended to banks of any size, via a Gaussian correlation inequality.
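The no-signal statistic can be illustrated with a toy Monte Carlo. The sketch below deliberately simplifies to independent, identically distributed Gaussian correlator outputs, for which the distribution of the maximum has the closed form Φ(x)^N; the paper's actual contribution concerns the correlated case:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def max_stat_cdf_mc(n_correlators, x, n_trials=50_000):
    """Monte Carlo estimate of P(max over the bank <= x) with no signal present."""
    samples = rng.standard_normal((n_trials, n_correlators))
    return float(np.mean(samples.max(axis=1) <= x))

def max_stat_cdf_iid(n_correlators, x):
    """Closed form for independent correlators: Phi(x) ** N."""
    phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi ** n_correlators
```

For a real template bank the correlators are correlated, so the iid expression is only a benchmark against which bounds of the kind discussed above are compared.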

  4. DFT based spatial multiplexing and maximum ratio transmission for mm-wave large MIMO

    DEFF Research Database (Denmark)

    Phan-Huy, D.-T.; Tölli, A.; Rajatheva, N.

    2014-01-01

    By using large point-to-point multiple input multiple output (MIMO), spatial multiplexing of a large number of data streams in wireless communications using millimeter-waves (mm-waves) can be achieved. However, according to the antenna spacing and transmitter-receiver distance, the MIMO channel is likely to be ill-conditioned. In such conditions, highly complex schemes such as the singular value decomposition (SVD) are necessary. In this paper, we propose a new low complexity system called discrete Fourier transform based spatial multiplexing (DFT-SM) with maximum ratio transmission (DFT-SM-MRT). When the DFT-SM scheme alone is used, the data streams are either mapped onto different angles of departures in the case of aligned linear arrays, or mapped onto different orbital angular momentums in the case of aligned circular arrays. Maximum ratio transmission pre-equalizes the channel...

  5. Determination of maximum negative Poisson's ratio for laminated fiber composites

    Energy Technology Data Exchange (ETDEWEB)

    Shokrieh, M.M.; Assadi, A. [Composites Research Laboratory, Mechanical Engineering Department, Center of Excellence in Experimental Solid Mechanics and Dynamics, Iran University of Science and Technology, Tehran 16846-13114 (Iran, Islamic Republic of)]

    2011-05-15

    Contrary to isotropic materials, composites always show complicated mechanical behavior under external loadings. In this article, an efficient algorithm is employed to obtain the maximum negative Poisson's ratio for laminated composite plates. We try to simplify the problem based on normalization of parameters and some manufacturing constraints to overlook the additional constraint of the optimization procedure. A genetic algorithm is used to find the optimal thickness of each lamina with a specified fiber direction. It is observed that the laminated composite with the configuration of (15/60/15) has the maximum negative Poisson's ratio. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  6. Tip Speed Ratio Based Maximum Power Tracking Control of Variable Speed Wind Turbines; A Comprehensive Design

    Directory of Open Access Journals (Sweden)

    Murat Karabacak

    2017-08-01

    The most primitive control method of wind turbines used to generate electric energy from wind is the fixed speed control method. With this method, it is not possible to transfer turbine input power to the grid at the maximum rate. For this reason, Maximum Power Tracking (MPT) schemes are proposed. In order to implement MPT, the propeller has to rotate at a different speed for every different wind speed. This situation has led MPT based systems to be called Variable Speed Wind Turbine (VSWT) systems. In VSWT systems, turbine input power can be transferred to the grid at rates close to the maximum power. When MPT based control of VSWT systems is the case, two important processes come into prominence: instantaneous determination and tracking of the MPT point. In this study, using a Maximum Power Point Tracking (MPPT) method based on tip speed ratio, the power available in the wind is transferred to the grid at the maximum rate over a back-to-back converter via a VSWT system with a permanent magnet synchronous generator (PMSG). Besides, a physical wind turbine simulator is modelled and simulated. Results show that a time-varying MPPT point is tracked with high performance.
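The tip-speed-ratio law at the heart of such a method is compact enough to sketch: the rotor speed reference is proportional to the measured wind speed, and the captured power follows the cubic wind-power law. The constants below are illustrative assumptions, not values from the paper:

```python
import math

RHO_AIR = 1.225  # kg/m^3, assumed sea-level air density

def mppt_speed_reference(wind_speed, rotor_radius, lambda_opt):
    """TSR method: rotor speed reference (rad/s) holding the tip speed ratio at lambda_opt."""
    return lambda_opt * wind_speed / rotor_radius

def turbine_power(wind_speed, rotor_radius, cp):
    """Aerodynamic power (W) captured by the rotor at power coefficient cp."""
    swept_area = math.pi * rotor_radius ** 2
    return 0.5 * RHO_AIR * swept_area * cp * wind_speed ** 3

# Illustrative numbers: a 40 m rotor in a 10 m/s wind with lambda_opt = 8
omega_ref = mppt_speed_reference(10.0, 40.0, 8.0)  # 2.0 rad/s
```

The speed controller of the generator then regulates the shaft toward `omega_ref` as the wind speed varies, which is what keeps the operating point at the MPPT point.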

  7. Dopant density from maximum-minimum capacitance ratio of implanted MOS structures

    International Nuclear Information System (INIS)

    Brews, J.R.

    1982-01-01

    For uniformly doped structures, the ratio of the maximum to the minimum high frequency capacitance determines the dopant ion density per unit volume. Here it is shown that for implanted structures this 'max-min' dopant density estimate depends upon the dose and depth of the implant through the first moment of the depleted portion of the implant. As a result, the 'max-min' estimate of dopant ion density reflects neither the surface dopant density nor the average of the dopant density over the depletion layer. In particular, it is not clear how this dopant ion density estimate is related to the flatband capacitance. (author)
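The textbook max-min relation for the uniform-doping case can be inverted numerically. The sketch below, with assumed silicon constants and per-area capacitances, implements only that uniform-doping baseline, which is exactly the estimate the abstract says breaks down for implanted profiles:

```python
import math

Q = 1.602e-19                # elementary charge, C
EPS_SI = 11.7 * 8.854e-12    # silicon permittivity, F/m
KT_Q = 0.0259                # thermal voltage at room temperature, V
NI = 1.0e16                  # intrinsic carrier density of Si, m^-3

def doping_from_max_min(c_max, c_min, n_guess=1e21):
    """Invert the uniform-doping max-min capacitance relation by fixed-point iteration.

    c_max ~ oxide capacitance per unit area (F/m^2); c_min is the high-frequency
    minimum per unit area. Returns dopant density N in m^-3.
    """
    w_max = EPS_SI * (1.0 / c_min - 1.0 / c_max)   # maximum depletion width
    n = n_guess
    for _ in range(100):
        phi_b = KT_Q * math.log(n / NI)            # bulk potential depends on N
        n = 4.0 * EPS_SI * phi_b / (Q * w_max ** 2)
    return n
```

A round trip (compute the depletion width from a known N, then invert) recovers the input density, which is a quick sanity check that the iteration converges.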

  8. Laboratory test on maximum and minimum void ratio of tropical sand matrix soils

    Science.gov (United States)

    Othman, B. A.; Marto, A.

    2018-04-01

    Sand is generally known as a loose granular material which has a grain size finer than gravel and coarser than silt, and can be very angular to well-rounded in shape. The presence of various amounts of fines, which influence the loosest and densest states of sand in its natural condition, is well known to contribute to the deformation and loss of shear strength of soil. This paper presents the effect of various ranges of fines content on the minimum void ratio emin and maximum void ratio emax of sand matrix soils. Laboratory tests to determine emin and emax of sand matrix soils were conducted using a non-standard method introduced by previous researchers. Clean sand was obtained from a natural mining site at Johor, Malaysia. A set of 3 different sizes of sand (fine sand, medium sand, and coarse sand) was mixed with 0% to 40% by weight of low plasticity fines (kaolin). Results showed that generally emin and emax decreased with increasing fines content up to a minimum in the range of 0% to 30%, and increased thereafter.
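For reference, the void ratios themselves follow from a simple phase relation between specific gravity and dry density; emax comes from the loosest-state dry density and emin from the densest. A sketch with invented example densities, not the paper's measurements:

```python
def void_ratio(specific_gravity, dry_density, water_density=1000.0):
    """Phase relation e = Gs * rho_w / rho_d - 1 (densities in kg/m^3)."""
    return specific_gravity * water_density / dry_density - 1.0

# Illustrative values for a quartz sand (Gs ~ 2.65):
e_max = void_ratio(2.65, 1400.0)   # loosest packing -> largest void ratio
e_min = void_ratio(2.65, 1750.0)   # densest packing -> smallest void ratio
```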

  9. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries

    Directory of Open Access Journals (Sweden)

    Arbutina Bojan

    2011-01-01

    AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche lobe filling component is a white dwarf transferring mass to another white dwarf, neutron star or a black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
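The ≈2/3 threshold follows from comparing the donor's mass-radius response with the Roche-lobe response under conservative mass transfer. A back-of-envelope check using standard textbook exponents (Paczyński lobe, n = 3/2 polytrope donor), not the paper's refined derivation:

```python
def roche_lobe_exponent(q):
    """d ln R_L / d ln M_donor for conservative transfer, with R_L ~ a * q**(1/3)."""
    return 2.0 * q - 5.0 / 3.0

def q_max(zeta_donor=-1.0 / 3.0):
    """Largest stable mass ratio: the donor must shrink no slower than its lobe.

    zeta_donor = -1/3 is the classic white-dwarf (n = 3/2 polytrope) exponent.
    """
    return (zeta_donor + 5.0 / 3.0) / 2.0

print(q_max())  # ~ 2/3, matching the classical stability limit
```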

  10. Body Fineness Ratio as a Predictor of Maximum Prolonged-Swimming Speed in Coral Reef Fishes

    Science.gov (United States)

    Walker, Jeffrey A.; Alfaro, Michael E.; Noble, Mae M.; Fulton, Christopher J.

    2013-01-01

    The ability to sustain high swimming speeds is believed to be an important factor affecting resource acquisition in fishes. While we have gained insights into how fin morphology and motion influences swimming performance in coral reef fishes, the role of other traits, such as body shape, remains poorly understood. We explore the ability of two mechanistic models of the causal relationship between body fineness ratio and endurance swimming performance to predict maximum prolonged-swimming speed (Umax) among 84 fish species from the Great Barrier Reef, Australia. A drag model, based on semi-empirical data on the drag of rigid, submerged bodies of revolution, was applied to species that employ pectoral-fin propulsion with a rigid body at Umax. An alternative model, based on the results of computer simulations of optimal shape in self-propelled undulating bodies, was applied to the species that swim by body-caudal-fin propulsion at Umax. For pectoral-fin swimmers, Umax increased with fineness, and the rate of increase decreased with fineness, as predicted by the drag model. While the mechanistic and statistical models of the relationship between fineness and Umax were very similar, the mechanistic (and statistical) model explained only a small fraction of the variance in Umax. For body-caudal-fin swimmers, we found a non-linear relationship between fineness and Umax, which was largely negative over most of the range of fineness. This pattern fails to support either predictions from the computational models or standard functional interpretations of body shape variation in fishes. Our results suggest that the widespread hypothesis that a more optimal fineness increases endurance-swimming performance via reduced drag should be limited to fishes that swim with rigid bodies. PMID:24204575

  11. Cumulative vibratory indices and the H/M ratio of the soleus H-reflex: a quantitative study in control and spastic subjects

    NARCIS (Netherlands)

    Ongerboer de Visser, B. W.; Bour, L. J.; Koelman, J. H.; Speelman, J. D.

    1989-01-01

    Suppression of the soleus (Sol) H-reflex recruitment curve by Achilles tendon vibration and the ratio of maximum Sol H-reflex (Hmax) to maximum M-response (H/M ratio) have been studied by means of computer processing on the basis of peak-to-peak (P-P) and area values in 46 controls and in 16 spastic

  12. Energy indicators for electricity production: comparing technologies and the nature of the indicators Energy Payback Ratio (EPR), Net Energy Ratio (NER) and Cumulative Energy Demand (CED). [Oestfoldforskning AS]

    Energy Technology Data Exchange (ETDEWEB)

    Raadal, Hanne Lerche [Ostfold research, Fredrikstad (Norway); Modahl, Ingunn Saur [Ostfold research, Fredrikstad (Norway); Bakken, Tor Haakon [SINTEF Energy, Trondheim (Norway)

    2012-11-01

    CEDREN (Centre for Environmental Design of Renewable Energy) is funded by The Research Council of Norway and energy companies and is one of eight centres that were part of the scheme Centre for Environment-friendly Energy Research (FME) when the scheme was launched in 2009. The main objective of CEDREN is to develop and communicate design solutions for transforming renewable energy sources to the desired energy products, and at the same time address the environmental and societal challenges at local, regional, national and global levels. CEDREN's board initiated in 2011 a pilot project on the topics 'Energy Pay-back Ratio (EPR)', 'Ecosystem services' and 'Multi-criteria analysis (MCA)' in order to investigate the possible use of these concepts/indices in the management of regulated river basins and as tools to benchmark strategies for the development of energy projects/resources. The energy indicator part (documented in this report) has aimed at reviewing the applicability of different energy efficiency indicators, as such, in the strategic management and development of energy resources, and at comparing and benchmarking technologies for the production of electricity. The main findings from this pilot study are also reported in a policy memo (in Norwegian) available at www.cedren.no. The work carried out in this project will be continued in the succeeding research project EcoManage, which was granted by the Research Council of Norway's RENERGI programme in December 2011. Energy indicators: Several energy indicators for the extraction and delivery of an energy product (e.g. transport fuel, heat, electricity etc.) exist today. The main objective of such indicators is to give information about the energy efficiency of the extraction and transformation processes throughout the value chain related to the delivered energy product. In this project the indicators Energy Payback Ratio (EPR), Net Energy Ratio (NER) and Cumulative

  13. Maximum mutual information vector quantization of log-likelihood ratios for memory efficient HARQ implementations

    DEFF Research Database (Denmark)

    Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl

    2010-01-01

    Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this purpose, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratios (LLRs). ... The analysis leads to using maximum mutual information (MMI) as the optimality criterion and in turn Kullback-Leibler (KL) divergence as the distortion measure. Simulations run based on an LTE-like system have proven that VQ can be implemented in a computationally simple way at low rates of 2-3 bits per LLR value...

  14. Statistical analysis of COMPTEL maximum likelihood-ratio distributions: evidence for a signal from previously undetected AGN

    International Nuclear Information System (INIS)

    Williams, O. R.; Bennett, K.; Much, R.; Schoenfelder, V.; Blom, J. J.; Ryan, J.

    1997-01-01

    The maximum likelihood-ratio method is frequently used in COMPTEL analysis to determine the significance of a point source at a given location. In this paper we do not consider whether the likelihood-ratio at a particular location indicates a detection, but rather whether distributions of likelihood-ratios derived from many locations depart from that expected for source free data. We have constructed distributions of likelihood-ratios by reading values from standard COMPTEL maximum-likelihood ratio maps at positions corresponding to the locations of different categories of AGN. Distributions derived from the locations of Seyfert galaxies are indistinguishable, according to a Kolmogorov-Smirnov test, from those obtained from "random" locations, but differ slightly from those obtained from the locations of flat spectrum radio loud quasars, OVVs, and BL Lac objects. This difference is not due to known COMPTEL sources, since regions near these sources are excluded from the analysis. We suggest that it might arise from a number of sources with fluxes below the COMPTEL detection threshold.
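The two-sample Kolmogorov-Smirnov statistic used for such comparisons is just the largest gap between the two empirical CDFs. A minimal sketch of the statistic itself, not the COMPTEL pipeline:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # fraction of the sample that is <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))
```

Identical samples give a statistic of 0, fully separated samples give 1; the p-value then follows from the Kolmogorov distribution given the sample sizes.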

  15. Cumulative Distributions and Flow Structure of Two-Passage Shear Coaxial Injector with Various Gas Injection Ratio

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Inchul; Kim, Dohun; Koo, Jaye [Korea Aerospace Univ., Goyang (Korea, Republic of)

    2013-07-15

    To verify the effect of inner- and outer-stage gas jets, a shear coaxial injector was designed to analyze the axial velocity profile and breakup phenomenon with an increase in the measurement distance. When the measurement position was increased to Z/d=100, the axial flow showed a fully developed shape due to the momentum transfer, aerodynamic drag effect, and viscous mixing. An inner gas injection, which induces a higher momentum flux ratio near the nozzle, produces a greater shear force on atomization than an outer gas injection. Inner- and outer-stage gas injections do not affect the mixing between the inner and outer gas flows below Z/d=5. The experimental results showed that breakup of the liquid jet was governed mainly by the gas jet of the inner stage. As the nozzle exit of the outer stage was located far from the liquid column, shear force and turbulence from the outer jet do not fully affect breakup of the liquid column. For inner-stage gas injection momentum flux ratios within 0.84, the SMD decreases with increasing outer gas momentum flux ratio. However, at inner-stage gas jet momentum flux ratios over 1.38, the SMD shows a similar distribution.

  16. On the low SNR capacity of maximum ratio combining over rician fading channels with full channel state information

    KAUST Repository

    Benkhelifa, Fatma

    2013-04-01

    In this letter, we study the ergodic capacity of a maximum ratio combining (MRC) Rician fading channel with full channel state information (CSI) at the transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio (SNR) regime and we show that the capacity scales as (LΩ/(K+L)) SNR log(1/SNR), where Ω is the expected channel gain per branch, K is the Rician fading factor, and L is the number of diversity branches. We show that one-bit CSI feedback at the transmitter is enough to achieve this capacity using an on-off power control scheme. Our framework can be seen as a generalization of recently established results regarding the fading-channels capacity characterization in the low-SNR regime. © 2012 IEEE.

  17. On the low SNR capacity of maximum ratio combining over rician fading channels with full channel state information

    KAUST Repository

    Benkhelifa, Fatma; Rezki, Zouheir; Alouini, Mohamed-Slim

    2013-01-01

    In this letter, we study the ergodic capacity of a maximum ratio combining (MRC) Rician fading channel with full channel state information (CSI) at the transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio (SNR) regime and we show that the capacity scales as (LΩ/(K+L)) SNR log(1/SNR), where Ω is the expected channel gain per branch, K is the Rician fading factor, and L is the number of diversity branches. We show that one-bit CSI feedback at the transmitter is enough to achieve this capacity using an on-off power control scheme. Our framework can be seen as a generalization of recently established results regarding the fading-channels capacity characterization in the low-SNR regime. © 2012 IEEE.
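Independently of the capacity analysis, the MRC combining step itself is easy to sketch: with perfect CSI the combiner output SNR is the sum of the branch SNRs. A toy Rician branch simulation (parameters are illustrative, not from the letter):

```python
import numpy as np

rng = np.random.default_rng(1)

def rician_branch_gains(n_samples, n_branches, k_factor, omega=1.0):
    """|h|^2 for Rician fading: fixed line-of-sight term plus scattered Gaussian part."""
    los = np.sqrt(k_factor * omega / (k_factor + 1.0))
    sigma = np.sqrt(omega / (2.0 * (k_factor + 1.0)))
    h = los + sigma * (rng.standard_normal((n_samples, n_branches))
                       + 1j * rng.standard_normal((n_samples, n_branches)))
    return np.abs(h) ** 2

def mrc_output_snr(branch_gains, snr_per_branch):
    """With maximum ratio combining, the post-combiner SNR is the sum over branches."""
    return snr_per_branch * branch_gains.sum(axis=1)
```

The normalization is chosen so that the mean gain per branch is Ω, matching the letter's definition of the expected channel gain.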

  18. Compilation of minimum and maximum isotope ratios of selected elements in naturally occurring terrestrial materials and reagents

    Science.gov (United States)

    Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.

    2002-01-01

    Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope abundance variations potentially are large enough to result in future expansion of their atomic weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope-abundance variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio. There are no internationally distributed isotopic reference materials for the elements zinc, selenium, molybdenum, palladium, and tellurium. Preparation of such materials will help to make isotope ratio measurements among
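The delta notation and atomic-weight bookkeeping used throughout such compilations reduce to two one-line formulas. A sketch; the chlorine masses and abundances below are approximate illustrative values, not the compilation's data:

```python
def delta_permil(r_sample, r_standard):
    """Delta value in per mil: relative deviation of an isotope ratio from a standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

def atomic_weight(isotope_masses, mole_fractions):
    """Atomic weight of an element in a material: abundance-weighted mean isotope mass."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return sum(m * x for m, x in zip(isotope_masses, mole_fractions))

# A sample whose isotope ratio is 1% below the standard sits at about -10 per mil:
delta = delta_permil(0.0020052 * 0.99, 0.0020052)

# Approximate chlorine example: 35Cl and 37Cl masses and abundances
cl_weight = atomic_weight([34.969, 36.966], [0.7576, 0.2424])
```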

  19. Wingtip Vortices and Free Shear Layer Interaction in the Vicinity of Maximum Lift to Drag Ratio Lift Condition

    Science.gov (United States)

    Memon, Muhammad Omar

    between the lift induced drag (wingtip vortices) and parasite drag (free shear layer) can have a significant impact. Particle Image Velocimetry (PIV) experiments were performed at a) a water tunnel at ILR Aachen, Germany, and b) the University of Dayton Low Speed Wind Tunnel in the near wake of an AR 6 wing with a Clark-Y airfoil to investigate the characteristics of the wingtip vortex and free shear layer at angles of attack in the vicinity of maximum aerodynamic efficiency for the wing. The data was taken 1.5 and 3 chord lengths downstream of the wing at varying free-stream velocities. A unique exergy-based technique was introduced to quantify distinct changes in the wingtip vortex axial core flow. The existence of wingtip vortex axial core flow transformation from wake-like (velocity less than the freestream) to jet-like (velocity greater than the freestream) behavior in the vicinity of the maximum (L/D) angles was observed. The exergy-based technique was able to identify the change in the out-of-plane profile and corresponding changes in the L/D performance. The resulting velocity components in and around the free shear layer in the wing wake showed counter flow in the cross-flow plane, presumably corresponding to behavior associated with the flow over the upper and lower surfaces of the wing. Even though the velocity magnitudes in the free shear layer in the cross-flow plane are a small fraction of the freestream velocity (~10%), significant directional flow was observed. An indication of the possibility of the transfer of momentum (from inboard to outboard of the wing) was identified through spanwise flow corresponding to the upper and lower surfaces through the free shear layer in the wake. A transition from minimal cross flow in the free shear layer to a well-established shear flow in the spanwise direction occurs in the vicinity of maximum lift-to-drag ratio (max L/D) angle of attack. A distinctive balance between the lift induced drag and parasite drag was

  20. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries (addendum - Serb. Astron. J. No. 183 (2011), 63)

    Directory of Open Access Journals (Sweden)

    Arbutina B.

    2012-01-01

    We recalculated the maximum white dwarf mass in ultra-compact X-ray binaries obtained in an earlier paper (Arbutina 2011), by taking the effects of super-Eddington accretion rate on the stability of mass transfer into account. It is found that, although the value formally remains the same (under the assumed approximations), for white dwarf masses M2 ≳ 0.1 MCh the mass ratios are extremely low, implying that the result for Mmax is likely to have little if any practical relevance.

  1. Horton Ratios Link Self-Similarity with Maximum Entropy of Eco-Geomorphological Properties in Stream Networks

    Directory of Open Access Journals (Sweden)

    Bruce T. Milne

    2017-05-01

    Stream networks are branched structures wherein water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal optimum entropy coincident with high biodiversity. Basins on either side of optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to Lagrange multipliers used in the extremum function leading to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and inter-annual variation in water balance that challenge vegetation to match expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land use changes that move ecosystems away from optimal entropy with concomitant loss of productivity and biodiversity.

  2. Validation of calculated tissue maximum ratio obtained from measured percentage depth dose (PDD) data for high energy photon beams (6 MV and 15 MV)

    International Nuclear Information System (INIS)

    Osei, J.E.

    2014-07-01

    During external beam radiotherapy treatments, high doses are delivered to the cancerous cells. Accuracy and precision of dose delivery are primary requirements for effective and efficient treatment. This leads to the consideration of treatment parameters such as percentage depth dose (PDD), tissue air ratio (TAR) and tissue phantom ratio (TPR), which show the dose distribution in the patient. Nevertheless, using tissue air ratio (TAR) for treatment time calculation calls for measurement of the in-air dose rate. For lower energies, this measurement is not a problem, but for higher energies, in-air measurement is not attainable due to the large build-up material required for the measurement. Tissue maximum ratio (TMR) is the quantity required to replace tissue air ratio (TAR) for high energy photon beams. It is known that TMR is an important dosimetric function in radiotherapy treatment. The calculation methods used to determine TMR from percentage depth dose (PDD) were derived by considering the differences between TMR and PDD, such as geometry and field size, where phantom scatter or peak scatter factors are used to correct dosimetric variation due to field size difference. The purpose of this study is to examine the accuracy of calculated TMR data against measured TMR values for 6 MV and 15 MV photon beams at the Sweden Ghana Medical Centre. With the help of the Blue motorized water phantom and the OmniPro-Accept software, PDD values, from which TMRs are calculated, were measured at 100 cm source-to-surface distance (SSD) for various square field sizes from 5×5 cm to 40×40 cm and depths of 1.5 cm to 25 cm for the 6 MV and 15 MV x-ray beams. With the same field sizes, depths and energies, the TMR values were measured. The validity of the calculated data was determined by making a comparison with values measured experimentally at selected field sizes and depths. The results show that the reference depth of maximum
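For orientation, the core of such a PDD-to-TMR conversion is an inverse-square correction between the fixed-SSD and fixed-source-detector geometries. The sketch below omits the phantom scatter ratio mentioned above, so it is a simplified form, not the full method used in the study:

```python
def tmr_from_pdd(pdd_percent, depth_cm, d_max_cm, ssd_cm=100.0):
    """Convert a PDD value to TMR with the inverse-square geometric factor only.

    Full conversions also divide by a phantom scatter ratio Sp evaluated at the
    depth-dependent field size; that correction is deliberately omitted here.
    """
    geometric = ((ssd_cm + depth_cm) / (ssd_cm + d_max_cm)) ** 2
    return (pdd_percent / 100.0) * geometric
```

At the depth of maximum dose the conversion returns exactly 1, which is a convenient sanity check when validating calculated against measured TMR tables.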

  3. Completing the Remedial Sequence and College-Level Credit-Bearing Math: Comparing Binary, Cumulative, and Continuation Ratio Logistic Regression Models

    Science.gov (United States)

    Davidson, J. Cody

    2016-01-01

    Mathematics is the most common subject area of remedial need, and the majority of remedial math students never pass a college-level credit-bearing math class. The majority of studies that investigate this phenomenon are conducted at community colleges and use some type of regression model; however, none have used a continuation ratio model. The…
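A continuation ratio model can be fit as a sequence of conditional binary logits, and the data restructuring is the part that distinguishes it from the binary and cumulative models. A sketch of that step, with a hypothetical 0/1/2 coding (no progress / completed remediation / passed college-level math) that is not taken from the study:

```python
def continuation_ratio_subsets(records, n_levels):
    """Split ordinal outcomes into conditional binary problems.

    records: list of (features, ordinal_level) pairs with levels 0..n_levels-1.
    Stage k asks: given a student reached at least level k, did they go past it?
    Each stage can then be fit with an ordinary binary logistic regression.
    """
    stages = []
    for k in range(n_levels - 1):
        subset = [(x, 1 if y > k else 0) for x, y in records if y >= k]
        stages.append(subset)
    return stages

# Hypothetical students: one feature each, ordinal outcome 0, 1 or 2
students = [([1.0], 0), ([2.0], 1), ([3.0], 2), ([2.5], 2)]
stages = continuation_ratio_subsets(students, 3)
```

Stage 0 keeps all students; stage 1 keeps only those who at least completed remediation, which is exactly the conditioning that gives the model its name.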

  4. Improving efficiency of two-type maximum power point tracking methods of tip-speed ratio and optimum torque in wind turbine system using a quantum neural network

    International Nuclear Information System (INIS)

    Ganjefar, Soheil; Ghassemi, Ali Akbar; Ahmadi, Mohamad Mehdi

    2014-01-01

    In this paper, a quantum neural network (QNN) is used as the controller in adaptive control structures to improve the efficiency of maximum power point tracking (MPPT) methods in the wind turbine system. For this purpose, direct and indirect adaptive control structures equipped with a QNN are used in the tip-speed ratio (TSR) and optimum torque (OT) MPPT methods. The proposed control schemes are evaluated on a battery-charging windmill system equipped with a PMSG (permanent magnet synchronous generator) at random wind speeds to demonstrate their superior effectiveness compared to a PID controller and a conventional neural network controller (CNNC). - Highlights: • Using a new control method to harvest the maximum power from a wind energy system. • Using an adaptive control scheme based on a quantum neural network (QNN). • Improving the MPPT-TSR method by a direct adaptive control scheme based on QNN. • Improving the MPPT-OT method by an indirect adaptive control scheme based on QNN. • Using a windmill system based on a PMSG to evaluate the proposed control schemes.

  5. Regional Inversion of the Maximum Carboxylation Rate (Vcmax) through the Sunlit Light Use Efficiency Estimated Using the Corrected Photochemical Reflectance Ratio Derived from MODIS Data

    Science.gov (United States)

    Zheng, T.; Chen, J. M.

    2016-12-01

    The maximum carboxylation rate (Vcmax), despite its importance in terrestrial carbon cycle modelling, remains challenging to obtain for large scales. In this study, an attempt has been made to invert Vcmax using the gross primary productivity from sunlit leaves (GPPsun), on the physiological basis that the photosynthesis rate of leaves exposed to high solar radiation is mainly determined by Vcmax. Since GPPsun can be calculated through the sunlit light use efficiency (ɛsun), the main focus becomes the acquisition of ɛsun. Previous studies using site-level reflectance observations have shown the ability of the photochemical reflectance ratio (PRR, defined as the ratio between the reflectance of an effective band centered around 531 nm and that of a reference band) to track the variation of ɛsun for an evergreen coniferous stand and a deciduous broadleaf stand separately, and the potential of an NDVI-corrected PRR (NPRR, defined as the product of NDVI and PRR) to yield a general expression describing the NPRR-ɛsun relationship across different plant functional types. In this study, a significant correlation (R2 = 0.67, p<0.001) between the MODIS-derived NPRR and the site-level ɛsun calculated using flux data for four Canadian flux sites has been found for the year 2010. For validation purposes, the ɛsun in 2009 for the same sites was calculated using the MODIS NPRR and the expression from 2010. The MODIS-derived ɛsun matches well with the flux-calculated ɛsun (R2 = 0.57, p<0.001). The same expression was then applied over a 217 × 193 km area in Saskatchewan, Canada to obtain ɛsun and thus GPPsun for the region during the growing season in 2008 (day 150 to day 260). The Vcmax for the region was inverted using the GPPsun and the result validated at three flux sites inside the area. The results show that the approach is able to obtain good estimates of Vcmax, with R2 = 0.68 and RMSE = 8.8 μmol m-2 s-1.

  6. Cumulative Poisson Distribution Program

    Science.gov (United States)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
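The cumulative Poisson sum can be built term by term from a recurrence, so that no factorial is ever computed directly (the overflow concern the program addresses). An illustrative Python sketch, not the CUMPOIS source (which is written in C):

```python
import math

def cumpois(k, lam):
    """Cumulative Poisson probability P(X <= k) for rate lam.

    Each term is derived from the previous one via
    P(X = i) = P(X = i - 1) * lam / i, avoiding large factorials.
    """
    term = math.exp(-lam)   # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i     # P(X = i) from P(X = i - 1)
        total += term
    return total

# The identity the abstract mentions: for a gamma distribution with
# integer shape n and scale 1, P(Gamma >= x) = P(Poisson(x) <= n - 1);
# chi-square with 2n degrees of freedom is gamma with shape n, scale 2.
```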

  7. Divergent Cumulative Cultural Evolution

    OpenAIRE

    Marriott, Chris; Chebib, Jobran

    2016-01-01

    Divergent cumulative cultural evolution occurs when the cultural evolutionary trajectory diverges from the biological evolutionary trajectory. We consider the conditions under which divergent cumulative cultural evolution can occur. We hypothesize that two conditions are necessary. First that genetic and cultural information are stored separately in the agent. Second cultural information must be transferred horizontally between agents of different generations. We implement a model with these ...

  8. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  9. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
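The central quantity CUMBIN evaluates, the reliability of a k-out-of-n system of independent components with common reliability p, is an upper tail of the binomial distribution. A minimal Python sketch (illustrative; the original program is in C):

```python
from math import comb

def cumbin(n, k, p):
    """Reliability of a k-out-of-n system: the probability that at
    least k of n independent components, each working with
    probability p, are operating."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: a 2-out-of-3 system with component reliability 0.9
# has reliability 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972.
```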

  10. EXAFS cumulants of CdSe

    International Nuclear Information System (INIS)

    Diop, D.

    1997-04-01

    EXAFS functions were extracted from measurements on the K edge of Se at different temperatures between 20 and 300 K. The analysis of the EXAFS of the filtered first two shells has been done in the wavevector range lying between 2 and 15.5 Å-1 in terms of the cumulants of the effective distribution of distances. The cumulants C3 and C4 obtained from the phase-difference and amplitude-ratio methods have shown the anharmonicity of the vibrations of atoms around their equilibrium positions. (author). 13 refs, 3 figs

  11. Cumulation of light nuclei

    International Nuclear Information System (INIS)

    Baldin, A.M.; Bondarev, V.K.; Golovanov, L.B.

    1977-01-01

    Limiting fragmentation of light nuclei (deuterium, helium) bombarded with 8.6 GeV/c protons was investigated. Fragments (pions, protons and deuterons) were detected within the emission angle range 50-150 deg with respect to the primary protons and within the momentum range 150-180 MeV/c. By the kinematics of a collision of a primary proton with a target at rest, the fragments observed correspond to a target mass of up to 3 GeV. Thus, the data obtained correspond to cumulation up to the third order

  12. Cumulative radiation dose of multiple trauma patients during their hospitalization

    International Nuclear Information System (INIS)

    Wang Zhikang; Sun Jianzhong; Zhao Zudan

    2012-01-01

    Objective: To study the cumulative radiation dose of multiple trauma patients during their hospitalization and to analyze the factors influencing the dose. Methods: The DLP for CT and DR were retrospectively collected for patients between June 2009 and April 2011 at a university-affiliated hospital. The cumulative radiation doses were calculated by summing the typical effective doses for the anatomic regions scanned. Results: The cumulative radiation doses of 113 patients were collected. The maximum, minimum and mean cumulative effective doses were 153.3 mSv, 16.48 mSv and (52.3 ± 26.6) mSv, respectively. Conclusions: Multiple trauma patients have high cumulative radiation exposure. Therefore, the management of cumulative radiation doses should be enhanced. Establishing individualized radiation exposure archives will help clinicians and technicians decide whether to image again and how to select imaging parameters. (authors)

  13. CROSSER - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
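The crossing point CROSSER computes is the p at which the k-out-of-n system reliability R(p) equals the common component reliability p. A hedged Python sketch of the idea (the original is a C program using Newton's method; bisection is used here for brevity, assuming 1 < k < n so that R(p) - p changes sign on (0, 1)):

```python
from math import comb

def k_of_n_reliability(n, k, p):
    """P(at least k of n independent components with reliability p work)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def crossing_point(n, k, tol=1e-10):
    """Find p in (0, 1) with R(p) = p by bisection.

    For 1 < k < n, R(p) < p near 0 and R(p) > p near 1, so the
    bracket below contains the nontrivial fixed point.
    """
    lo, hi = 1e-6, 1 - 1e-6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if k_of_n_reliability(n, k, mid) - mid < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For a 2-out-of-3 system, R(p) = 3p² - 2p³, and solving R(p) = p by hand gives the nontrivial root p = 1/2, which the sketch reproduces.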

  14. Cumulative environmental effects. Summary

    International Nuclear Information System (INIS)

    2012-01-01

    This report presents a compilation of knowledge about the state of the environment and human activity in the Norwegian part of the North Sea and Skagerrak. The report gives an overview of pressures and impacts on the environment from normal activity and in the event of accidents. This is used to assess the cumulative environmental effects, which factors have most impact and where the impacts are greatest, and to indicate which problems are expected to be most serious in the future. The report is intended to provide relevant information that can be used in the management of the marine area in the future. It also provides input for the identification of environmental targets and management measures for the North Sea and Skagerrak.(Author)

  15. Cumulative environmental effects. Summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    This report presents a compilation of knowledge about the state of the environment and human activity in the Norwegian part of the North Sea and Skagerrak. The report gives an overview of pressures and impacts on the environment from normal activity and in the event of accidents. This is used to assess the cumulative environmental effects, which factors have most impact and where the impacts are greatest, and to indicate which problems are expected to be most serious in the future. The report is intended to provide relevant information that can be used in the management of the marine area in the future. It also provides input for the identification of environmental targets and management measures for the North Sea and Skagerrak.(Author)

  16. NEWTONP - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
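The core computation, solving R(p) = V for p by Newton's method, can be sketched as below. This is an illustrative Python version, not the NEWTONP source (which is C); it uses the closed-form derivative of the k-out-of-n reliability, R'(p) = k·C(n,k)·p^(k-1)·(1-p)^(n-k), so each Newton step is cheap:

```python
from math import comb

def reliability(n, k, p):
    """P(at least k of n independent components with reliability p work)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def newtonp(n, k, V, tol=1e-12, max_iter=100):
    """Solve reliability(n, k, p) = V for p with Newton's method."""
    p = min(max(V, 1e-9), 1 - 1e-9)   # start near the target, inside (0, 1)
    for _ in range(max_iter):
        f = reliability(n, k, p) - V
        # Closed-form derivative of the binomial upper tail in p:
        fp = k * comb(n, k) * p**(k - 1) * (1 - p)**(n - k)
        step = f / fp
        p = min(max(p - step, 1e-9), 1 - 1e-9)  # keep the iterate in (0, 1)
        if abs(step) < tol:
            break
    return p
```

As a check, a 2-out-of-3 system with component reliability 0.9 has system reliability 0.972, so inverting V = 0.972 should recover p = 0.9.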

  17. Cumulative radiation effect

    International Nuclear Information System (INIS)

    Kirk, J.; Gray, W.M.; Watson, E.R.

    1977-01-01

    In five previous papers, the concept of Cumulative Radiation Effect (CRE) has been presented as a scale of accumulative sub-tolerance radiation damage, with a unique value of the CRE describing a specific level of radiation effect. Simple nomographic and tabular methods for the solution of practical problems in radiotherapy are now described. An essential feature of solving a CRE problem is firstly to present it in a concise and readily appreciated form, and, to do this, nomenclature has been introduced to describe schedules and regimes as compactly as possible. Simple algebraic equations have been derived to describe the CRE achieved by multi-schedule regimes. In these equations, the equivalence conditions existing at the junctions between schedules are not explicit and the equations are based on the CREs of the constituent schedules assessed individually without reference to their context in the regime as a whole. This independent evaluation of CREs for each schedule has resulted in a considerable simplification in the calculation of complex problems. The calculations are further simplified by the use of suitable tables and nomograms, so that the mathematics involved is reduced to simple arithmetical operations which require at the most the use of a slide rule but can be done by hand. The order of procedure in the presentation and calculation of CRE problems can be summarised in an evaluation procedure sheet. The resulting simple methods for solving practical problems of any complexity on the CRE-system are demonstrated by a number of examples. (author)

  18. Cumulative radiation effect

    International Nuclear Information System (INIS)

    Kirk, J.; Cain, O.; Gray, W.M.

    1977-01-01

    Cumulative Radiation Effect (CRE) represents a scale of accumulative sub-tolerance radiation damage, with a unique value of the CRE describing a specific level of radiation effect. Computer calculations have been used to simplify the evaluation of problems associated with the applications of the CRE-system in radiotherapy. In a general appraisal of the applications of computers to the CRE-system, the various problems encountered in clinical radiotherapy have been categorised into those involving the evaluation of a CRE at a point in tissue and those involving the calculation of CRE distributions. As a general guide, the computer techniques adopted at the Glasgow Institute of Radiotherapeutics for the solution of CRE problems are presented, and consist basically of a package of three interactive programs for point CRE calculations and a Fortran program which calculates CRE distributions for iso-effect treatment planning. Many examples are given to demonstrate the applications of these programs, and special emphasis has been laid on the problem of treating a point in tissue with different doses per fraction on alternate treatment days. The wide range of possible clinical applications of the CRE-system has been outlined and described under the categories of routine clinical applications, retrospective and prospective surveys of patient treatment, and experimental and theoretical research. Some of these applications such as the results of surveys and studies of time optimisation of treatment schedules could have far-reaching consequences and lead to significant improvements in treatment and cure rates with the minimum damage to normal tissue. (author)

  19. Secant cumulants and toric geometry

    NARCIS (Netherlands)

    Michalek, M.; Oeding, L.; Zwiernik, P.W.

    2012-01-01

    We study the secant line variety of the Segre product of projective spaces using special cumulant coordinates adapted for secant varieties. We show that the secant variety is covered by open normal toric varieties. We prove that in cumulant coordinates its ideal is generated by binomial quadrics. We

  20. The challenge of cumulative impacts

    Energy Technology Data Exchange (ETDEWEB)

    Masden, Elisabeth

    2011-07-01

    Full text: As governments pledge to combat climate change, wind turbines are becoming a common feature of terrestrial and marine environments. Although wind power is a renewable energy source and a means of reducing carbon emissions, there is a need to ensure that the wind farms themselves do not damage the environment. There is particular concern over the impacts of wind farms on bird populations, and with increasing numbers of wind farm proposals, the concern focuses on cumulative impacts. Individually, a wind farm, or indeed any activity/action, may have minor effects on the environment, but collectively these may be significant, potentially greater than the sum of the individual parts acting alone. Cumulative impact assessment is a legislative requirement of environmental impact assessment, but such assessments are rarely adequate, restricting the acquisition of basic knowledge about the cumulative impacts of wind farms on bird populations. Reasons for this are numerous but a recurring theme is the lack of clear definitions and guidance on how to perform cumulative assessments. Here we present a conceptual framework and include illustrative examples to demonstrate how the framework can be used to improve the planning and execution of cumulative impact assessments. The core concept is that explicit definitions of impacts, actions and scales of assessment are required to reduce uncertainty in the process of assessment and improve communication between stakeholders. Only when it is clear what has been included within a cumulative assessment is it possible to make comparisons between developments. Our framework requires improved legislative guidance on the actions to include in assessments, and advice on the appropriate baselines against which to assess impacts. Cumulative impacts are currently considered on restricted scales (spatial and temporal) relating to individual development assessments. We propose that benefits would be gained from elevating cumulative

  1. Higher order cumulants in colorless partonic plasma

    Energy Technology Data Exchange (ETDEWEB)

    Cherif, S. [Sciences and Technologies Department, University of Ghardaia, Ghardaia, Algiers (Algeria); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria); Ahmed, M. A. A. [Department of Physics, College of Science, Taibah University Al-Madinah Al-Mounawwarah KSA (Saudi Arabia); Department of Physics, Taiz University in Turba, Taiz (Yemen); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria); Ladrem, M., E-mail: mladrem@yahoo.fr [Department of Physics, College of Science, Taibah University Al-Madinah Al-Mounawwarah KSA (Saudi Arabia); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria)

    2016-06-10

    Any physical system considered to study the QCD deconfinement phase transition certainly has a finite volume, so finite size effects are inevitably present. This renders the location of the phase transition and the determination of its order an extremely difficult task, even in the simplest known cases. In order to identify and locate the colorless QCD deconfinement transition point in finite volume T{sub 0}(V), a new approach based on the finite-size cumulant expansion of the order parameter and the ℒ{sub m,n}-Method is used. We have shown that both the higher-order cumulants and their ratios, associated with the thermodynamical fluctuations of the order parameter, behave in a distinctive way in the QCD deconfinement phase transition, revealing pronounced oscillations in the transition region. The sign structure and the oscillatory behavior of these quantities in the vicinity of the deconfinement phase transition point might be a sensitive probe and may allow one to elucidate their relation to the QCD phase transition point. In the context of our model, we have shown that the finite volume transition point is always associated with the appearance of a particular point in all the higher-order cumulants under consideration.

  2. Evolution model with a cumulative feedback coupling

    Science.gov (United States)

    Trimper, Steffen; Zabrocki, Knud; Schulz, Michael

    2002-05-01

    The paper is concerned with a toy model that generalizes the standard Lotka-Volterra equation for a certain population by introducing a competition between instantaneous and accumulative, history-dependent nonlinear feedback, the origin of which could be a contribution from any kind of mismanagement in the past. The results depend on the sign of that additional cumulative loss or gain term of strength λ. In the case of a positive coupling the system offers a maximum gain achieved after a finite time, but the population will die out in the long time limit. In this case the instantaneous loss term of strength u is irrelevant and the model exhibits an exact solution. In the opposite case λ<0 the time evolution of the system is terminated in a crash after a finite time ts provided u=0. This singularity after a finite time can be avoided if u≠0. The approach may well be of relevance for the qualitative understanding of more realistic descriptions.

  3. Cumulative risk, cumulative outcome: a 20-year longitudinal study.

    Directory of Open Access Journals (Sweden)

    Leslie Atkinson

    Full Text Available Cumulative risk (CR models provide some of the most robust findings in the developmental literature, predicting numerous and varied outcomes. Typically, however, these outcomes are predicted one at a time, across different samples, using concurrent designs, longitudinal designs of short duration, or retrospective designs. We predicted that a single CR index, applied within a single sample, would prospectively predict diverse outcomes, i.e., depression, intelligence, school dropout, arrest, smoking, and physical disease from childhood to adulthood. Further, we predicted that number of risk factors would predict number of adverse outcomes (cumulative outcome; CO. We also predicted that early CR (assessed at age 5/6 explains variance in CO above and beyond that explained by subsequent risk (assessed at ages 12/13 and 19/20. The sample consisted of 284 individuals, 48% of whom were diagnosed with a speech/language disorder. Cumulative risk, assessed at 5/6-, 12/13-, and 19/20-years-old, predicted aforementioned outcomes at age 25/26 in every instance. Furthermore, number of risk factors was positively associated with number of negative outcomes. Finally, early risk accounted for variance beyond that explained by later risk in the prediction of CO. We discuss these findings in terms of five criteria posed by these data, positing a "mediated net of adversity" model, suggesting that CR may increase some central integrative factor, simultaneously augmenting risk across cognitive, quality of life, psychiatric and physical health outcomes.

  4. The Algebra of the Cumulative Percent Operation.

    Science.gov (United States)

    Berry, Andrew J.

    2002-01-01

    Discusses how to help students avoid some pervasive reasoning errors in solving cumulative percent problems. Discusses the meaning of "a% + b%", the additive inverse of "a%", and other useful applications. Emphasizes the operational aspect of the cumulative percent concept. (KHR)
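The operation in question can be illustrated briefly. A sketch assuming the usual reading of cumulative percents as successive multiplicative changes (the function name is ours, not the article's notation):

```python
def cum_percent(a, b):
    """Combined effect, in percent, of successive changes of a% then b%:
    (1 + a/100) * (1 + b/100) - 1, i.e. a + b + a*b/100."""
    return a + b + a * b / 100

# The classic reasoning error the article targets: a 10% rise followed
# by a 10% fall is not a wash, it is a net 1% loss.
```

Usage: `cum_percent(10, -10)` returns `-1.0`, while naive addition would suggest `0`.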

  5. Adaptive strategies for cumulative cultural learning.

    Science.gov (United States)

    Ehn, Micael; Laland, Kevin

    2012-05-21

    The demographic and ecological success of our species is frequently attributed to our capacity for cumulative culture. However, it is not yet known how humans combine social and asocial learning to generate effective strategies for learning in a cumulative cultural context. Here we explore how cumulative culture influences the relative merits of various pure and conditional learning strategies, including pure asocial and social learning, critical social learning, conditional social learning and individual refiner strategies. We replicate Rogers' paradox in the cumulative setting. However, our analysis suggests that strategies that resolved Rogers' paradox in a non-cumulative setting may not necessarily evolve in a cumulative setting; thus different strategies will optimize cumulative and non-cumulative cultural learning. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. 32 CFR 651.16 - Cumulative impacts.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Cumulative impacts. 651.16 Section 651.16... § 651.16 Cumulative impacts. (a) NEPA analyses must assess cumulative effects, which are the impact on the environment resulting from the incremental impact of the action when added to other past, present...

  7. A paradox of cumulative culture.

    Science.gov (United States)

    Kobayashi, Yutaka; Wakano, Joe Yuichiro; Ohtsuki, Hisashi

    2015-08-21

    Culture can grow cumulatively if socially learnt behaviors are improved by individual learning before being passed on to the next generation. Previous authors showed that this kind of learning strategy is unlikely to be evolutionarily stable in the presence of a trade-off between learning and reproduction. This is because culture is a public good that is freely exploited by any member of the population in their model (cultural social dilemma). In this paper, we investigate the effect of vertical transmission (transmission from parents to offspring), which decreases the publicness of culture, on the evolution of cumulative culture in both infinite and finite population models. In the infinite population model, we confirm that culture accumulates largely as long as transmission is purely vertical. It turns out, however, that introduction of even slight oblique transmission drastically reduces the equilibrium level of culture. Even more surprisingly, if the population size is finite, culture hardly accumulates even under purely vertical transmission. This occurs because stochastic extinction due to random genetic drift prevents a learning strategy from accumulating enough culture. Overall, our theoretical results suggest that introducing vertical transmission alone does not really help solve the cultural social dilemma problem. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Sharpening Sharpe Ratios

    OpenAIRE

    William N. Goetzmann; Jonathan E. Ingersoll Jr.; Matthew I. Spiegel; Ivo Welch

    2002-01-01

    It is now well known that the Sharpe ratio and other related reward-to-risk measures may be manipulated with option-like strategies. In this paper we derive the general conditions for achieving the maximum expected Sharpe ratio. We derive static rules for achieving the maximum Sharpe ratio with two or more options, as well as a continuum of derivative contracts. The optimal strategy has a truncated right tail and a fat left tail. We also derive dynamic rules for increasing the Sharpe ratio. O...
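For readers unfamiliar with the quantity being maximized: the ex-post Sharpe ratio of a return series is its mean excess return divided by the standard deviation of those excess returns. A minimal sketch of the measure itself (illustrative only; the paper's contribution concerns how option-like payoffs manipulate it):

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free=0.0):
    """Ex-post Sharpe ratio: mean excess return over its (sample)
    standard deviation. `returns` is a per-period return series."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(excess)
```

The paper's point is that this ratio rewards truncating the right tail and fattening the left one, so a high Sharpe ratio alone does not certify skill.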

  9. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  10. Hydrologic Cycle Response to the Paleocene-Eocene Thermal Maximum at Austral, High-Latitude Site 690 as Revealed by In Situ Measurements of Foraminiferal Oxygen Isotope and Mg/Ca Ratios

    Science.gov (United States)

    Kozdon, R.; Kelly, D.; Fournelle, J.; Valley, J. W.

    2012-12-01

    Earth surface temperatures warmed by ~5°C during an ancient (~55.5 Ma) global warming event termed the Paleocene-Eocene thermal maximum (PETM). This transient (~200 ka) "hyperthermal" climate state had profound consequences for the planet's surficial processes and biosphere, and is widely touted as an ancient analog for climate change driven by human activities. Hallmarks of the PETM are pervasive carbonate dissolution in the ocean basins and a negative carbon isotope excursion (CIE) recorded in a variety of substrates including soil and marine carbonates. Together these lines of evidence signal the rapid (≤30 ka) release of massive quantities (≥2000 Gt) of 13C-depleted carbon into the exogenic carbon cycle. Paleoenvironmental reconstructions based on pedogenic features in paleosols, clay mineralogy and sedimentology of coastal and continental deposits, and land-plant communities indicate that PETM warmth was accompanied by a major perturbation to the hydrologic cycle. Micropaleontological evidence and n-alkane hydrogen isotope records indicate that increased poleward moisture transport reduced sea-surface salinities (SSSs) in the central Arctic Ocean during the PETM. Such findings are broadly consistent with predictions of climate model simulations. Here we reassess a well-studied PETM record from the Southern Ocean (ODP Site 690) in light of new δ18O and Mg/Ca data obtained from planktic foraminiferal shells by secondary ion mass spectrometry (SIMS) and electron microprobe analysis (EMPA), respectively. The unparalleled spatial resolution of these in situ techniques permits extraction of more reliable δ18O and Mg/Ca data by targeting minute (≤10 μm) biogenic domains within individual planktic foraminifera that retain the original shell chemistry (Kozdon et al. 2011, Paleocean.). In general, the stratigraphic profile and magnitude of the δ18O decrease (~2.2‰) delimiting PETM warming in our SIMS-generated record are similar to those of

  11. Cumulative trauma disorders: A review.

    Science.gov (United States)

    Iqbal, Zaheen A; Alghadir, Ahmad H

    2017-08-03

    Cumulative trauma disorder (CTD) is a term for various injuries of the musculoskeletal and nervous systems that are caused by repetitive tasks, forceful exertions, vibrations, mechanical compression or sustained postures. Although there are many studies citing the incidence of CTDs, there are fewer articles about their etiology, pathology and management. The aim of our study was to discuss the etiology, pathogenesis, prevention and management of CTDs. A literature search was performed using various electronic databases. The search was limited to English-language articles pertaining to randomized clinical trials, cohort studies and systematic reviews of CTDs. A total of 180 relevant papers published since 1959 were identified. Of these, 125 reported on the incidence of CTDs and 50 on their conservative treatment. The workplace environment, task repetition with little variability, reduced rest time and increased expectations are major factors in the development of CTDs. Addressing these causes and early diagnosis are the best ways to decrease the incidence and severity of CTDs. For effective management of CTDs, treatment should be divided into primordial, primary, secondary and tertiary prevention.

  12. Complete cumulative index (1963-1983)

    International Nuclear Information System (INIS)

    1983-01-01

    This complete cumulative index covers all regular and special issues and supplements published by Atomic Energy Review (AER) during its lifetime (1963-1983). The complete cumulative index consists of six Indexes: the Index of Abstracts, the Subject Index, the Title Index, the Author Index, the Country Index and the Table of Elements Index. The complete cumulative index supersedes the Cumulative Indexes for Volumes 1-7: 1963-1969 (1970), and for Volumes 1-10: 1963-1972 (1972); this Index also finalizes Atomic Energy Review, the publication of which has recently been terminated by the IAEA

  13. System-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, NEWTONP, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), used independently of one another. Program finds probability required to yield given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.

  14. Common-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CROSSER, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), used independently of one another. Point of equality between reliability of system and common reliability of components found. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
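The "point of equality" CROSSER computes — where system reliability equals the common component reliability — can be sketched numerically (hypothetical names; assumes the difference changes sign exactly once on the open unit interval, as it does for majority systems such as 2-out-of-3):

```python
from math import comb

def kofn_reliability(p, k, n):
    """P(at least k of n i.i.d. components work), each with reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def crossover(k, n, tol=1e-12):
    """Interior point p where system reliability equals component reliability.
    Assumes R(p) - p goes from negative to positive once on (0, 1)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kofn_reliability(mid, k, n) - mid < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a 2-out-of-3 system, R(p) = 3p² − 2p³, and solving R(p) = p gives the crossover p = 0.5: above it, redundancy helps; below it, redundancy hurts.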

  15. Cumulative human impacts on marine predators

    DEFF Research Database (Denmark)

    Maxwell, Sara M; Hazen, Elliott L; Bograd, Steven J

    2013-01-01

    Stressors associated with human activities interact in complex ways to affect marine ecosystems, yet we lack spatially explicit assessments of cumulative impacts on ecologically and economically key components such as marine predators. Here we develop a metric of cumulative utilization and impact...

  16. Cumulative Student Loan Debt in Minnesota, 2015

    Science.gov (United States)

    Williams-Wyche, Shaun

    2016-01-01

    To better understand student debt in Minnesota, the Minnesota Office of Higher Education (the Office) gathers information on cumulative student loan debt from Minnesota degree-granting institutions. These data detail the number of students with loans by institution, the cumulative student loan debt incurred at that institution, and the percentage…

  17. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, and costs less than systems that record data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  18. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  19. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  20. Finite-volume cumulant expansion in QCD-colorless plasma

    Energy Technology Data Exchange (ETDEWEB)

    Ladrem, M. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Physics Department, Algiers (Algeria); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ahmed, M.A.A. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Taiz University in Turba, Physics Department, Taiz (Yemen); Alfull, Z.Z. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Cherif, S. [ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ghardaia University, Sciences and Technologies Department, Ghardaia (Algeria)

    2015-09-15

    Due to finite-size effects, the localization of the phase transition in finite systems and the determination of its order become an extremely difficult task, even in the simplest known cases. In order to identify and locate the finite-volume transition point T{sub 0}(V) of the QCD deconfinement phase transition to a colorless QGP, we have developed a new approach using the finite-size cumulant expansion of the order parameter and the L{sub mn}-method. The first six cumulants C{sub 1,2,3,4,5,6} with the corresponding under-normalized ratios (skewness Σ, kurtosis κ, pentosis Π{sub ±}, and hexosis H{sub 1,2,3}) and three unnormalized combinations of them, (O = σ{sup 2}κΣ{sup -1}, U = σ{sup -2}Σ{sup -1}, N = σ{sup 2}κ) are calculated and studied as functions of (T, V). A new approach, unifying in a clear and consistent way the definitions of cumulant ratios, is proposed. A numerical FSS analysis of the obtained results has allowed us to locate accurately the finite-volume transition point. The extracted transition temperature value T{sub 0}(V) agrees with that expected T{sub 0}{sup N}(V) from the order parameter and the thermal susceptibility χ{sub T}(T, V), according to the standard procedure of localization, to within about 2%. In addition, a very good correlation factor is obtained, proving the validity of our cumulants method. The agreement of our results with those obtained by means of other models is remarkable. (orig.)
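The paper's own normalizations for its cumulant ratios are not reproduced in the abstract; as an illustration only (a sketch, not the paper's method), the standard normalized third- and fourth-cumulant ratios — skewness and excess kurtosis — can be estimated from samples like this:

```python
import math
import random

def cumulants4(xs):
    """First four sample cumulants via central moments (biased, for illustration)."""
    n = len(xs)
    mean = sum(xs) / n
    mu = lambda r: sum((x - mean) ** r for x in xs) / n
    return mean, mu(2), mu(3), mu(4) - 3 * mu(2) ** 2

def shape_ratios(xs):
    """Standard normalized ratios: skewness c3/c2^1.5, excess kurtosis c4/c2^2."""
    _, c2, c3, c4 = cumulants4(xs)
    return c3 / c2 ** 1.5, c4 / c2 ** 2

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]
skew, kurt = shape_ratios(sample)  # both near zero for Gaussian data
```

Near a phase transition, such ratios change shape with volume, which is what makes them useful for finite-size-scaling localization of the transition point.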

  1. Golden Ratio

    Indian Academy of Sciences (India)

    Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many structures. This ratio comes from Fibonacci numbers. In this article, we explore this ...
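The connection to Fibonacci numbers mentioned in the blurb is that ratios of consecutive Fibonacci terms converge to the golden ratio φ = (1 + √5)/2; a minimal sketch (hypothetical names):

```python
def fib_ratios(n):
    """Ratios of consecutive Fibonacci numbers; they converge to the golden ratio."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        a, b = b, a + b
        out.append(b / a)
    return out

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, approximately 1.618034
```

The convergence is fast: after about 30 terms the ratio agrees with φ to better than nine decimal places.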

  2. Golden Ratio

    Indian Academy of Sciences (India)

    Keywords. Fibonacci numbers, golden ratio, Sanskrit prosody, solar panel. Abstract. Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many ...

  3. Golden Ratio

    Indian Academy of Sciences (India)

    Our attraction to another body increases if the body is symmetrical and in proportion. If a face or a structure is in proportion, we are more likely to notice it and find it beautiful. The universal ratio of beauty is the 'Golden Ratio', found in many structures. This ratio comes from Fibonacci numbers. In this article, we explore this ...

  4. Origin of path independence between cumulative CO2 emissions and global warming

    Science.gov (United States)

    Seshadri, Ashwin K.

    2017-11-01

    Observations and GCMs exhibit approximate proportionality between cumulative carbon dioxide (CO2) emissions and global warming. Here we identify sufficient conditions for the relationship between cumulative CO2 emissions and global warming to be independent of the path of CO2 emissions, referred to as "path independence". Our starting point is a closed form expression for global warming in a two-box energy balance model (EBM), which depends explicitly on cumulative emissions, airborne fraction and time. Path independence requires that this function can be approximated as depending on cumulative emissions alone. We show that path independence arises from weak constraints, occurring if the timescale for changes in cumulative emissions (equal to the ratio between cumulative emissions and the emissions rate) is small compared to the timescale for changes in airborne fraction (which depends on CO2 uptake), and also small relative to a derived climate model parameter called the damping timescale, which is related to the rate at which deep-ocean warming affects global warming. Effects of uncertainties in the climate model and carbon cycle are examined. Large deep-ocean heat capacity in the Earth system is not necessary for path independence, which appears resilient to climate modeling uncertainties. However, long time constants in the Earth system carbon cycle are essential, ensuring that the airborne fraction changes slowly, with a timescale much longer than the timescale for changes in cumulative emissions. Therefore path independence between cumulative emissions and warming cannot arise for short-lived greenhouse gases.
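The timescale argument can be illustrated with a deliberately crude one-box toy model (not the paper's two-box EBM; all parameter values invented for illustration): warming relaxes toward a level proportional to cumulative emissions with a short damping timescale, so two emission paths with the same cumulative total end at nearly the same warming:

```python
def warming(emission_rates, dt=1.0, k=0.0005, tau=10.0):
    """Euler-integrate dT/dt = (k * E_cum - T) / tau for yearly emission rates.
    k is a stand-in for the warming response per unit cumulative emissions
    (constant airborne fraction assumed); tau is the damping timescale."""
    T, E_cum = 0.0, 0.0
    for e in emission_rates:
        E_cum += e * dt
        T += dt * (k * E_cum - T) / tau
    return T

# two paths with the same cumulative total (2000 units over 200 years)
path_constant = [10.0] * 200
path_frontloaded = [20.0] * 100 + [0.0] * 100
```

Because the emissions timescale (~200 years) is long compared to tau (10 years), both paths arrive at nearly the same final warming; shrinking the emissions timescale toward tau breaks the agreement, mirroring the condition derived in the paper.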

  5. The Relationship between Gender, Cumulative Adversities and ...

    African Journals Online (AJOL)

    The Relationship between Gender, Cumulative Adversities and Mental Health of Employees in ... CAs were measured in three forms (family adversities (CAFam), personal adversities ... Age of employees ranged between 18-65 years.

  6. Cumulative cultural learning: Development and diversity

    Science.gov (United States)

    2017-01-01

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children’s learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission—the cornerstone of human cultural diversity. PMID:28739945

  7. Complexity and demographic explanations of cumulative culture

    NARCIS (Netherlands)

    Querbes, A.; Vaesen, K.; Houkes, W.N.

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological

  8. Cumulative human impacts on marine predators.

    Science.gov (United States)

    Maxwell, Sara M; Hazen, Elliott L; Bograd, Steven J; Halpern, Benjamin S; Breed, Greg A; Nickel, Barry; Teutschel, Nicole M; Crowder, Larry B; Benson, Scott; Dutton, Peter H; Bailey, Helen; Kappes, Michelle A; Kuhn, Carey E; Weise, Michael J; Mate, Bruce; Shaffer, Scott A; Hassrick, Jason L; Henry, Robert W; Irvine, Ladd; McDonald, Birgitte I; Robinson, Patrick W; Block, Barbara A; Costa, Daniel P

    2013-01-01

    Stressors associated with human activities interact in complex ways to affect marine ecosystems, yet we lack spatially explicit assessments of cumulative impacts on ecologically and economically key components such as marine predators. Here we develop a metric of cumulative utilization and impact (CUI) on marine predators by combining electronic tracking data of eight protected predator species (n=685 individuals) in the California Current Ecosystem with data on 24 anthropogenic stressors. We show significant variation in CUI with some of the highest impacts within US National Marine Sanctuaries. High variation in underlying species and cumulative impact distributions means that neither alone is sufficient for effective spatial management. Instead, comprehensive management approaches accounting for both cumulative human impacts and trade-offs among multiple stressors must be applied in planning the use of marine resources.

  9. Cumulative cultural learning: Development and diversity.

    Science.gov (United States)

    Legare, Cristine H

    2017-07-24

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children's learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission-the cornerstone of human cultural diversity.

  10. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  11. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  12. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
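The k-out-of-n reliability analysis CUMBIN performs rests on the cumulative binomial distribution; a minimal sketch (hypothetical names) of the two quantities involved:

```python
from math import comb

def binom_cdf(m, n, p):
    """Cumulative binomial probability P(X <= m) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

def kofn_availability(k, n, p):
    """A k-out-of-n system is up when at least k of n components are up,
    i.e. 1 - P(X <= k - 1) with X the number of working components."""
    return 1.0 - binom_cdf(k - 1, n, p)
```

For instance, with component availability 0.9, a 2-out-of-3 system has availability 1 − P(X ≤ 1) = 0.972.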

  13. About the cumulants of periodic signals

    Science.gov (United States)

    Barrau, Axel; El Badaoui, Mohammed

    2018-01-01

    This note studies cumulants of time series. These functions originating from the probability theory being commonly used as features of deterministic signals, their classical properties are examined in this modified framework. We show additivity of cumulants, ensured in the case of independent random variables, requires here a different hypothesis. Practical applications are proposed, in particular an analysis of the failure of the JADE algorithm to separate some specific periodic signals.
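The additivity property mentioned above — cumulants of a sum of independent random variables are the sums of the individual cumulants — can be checked numerically. This is a sketch with simulated i.i.d. data, not the note's deterministic periodic-signal setting, where that independence hypothesis is precisely what fails:

```python
import random

def cumulants3(xs):
    """Mean, variance and third central moment = first three cumulants (biased)."""
    n = len(xs)
    m = sum(xs) / n
    c2 = sum((x - m) ** 2 for x in xs) / n
    c3 = sum((x - m) ** 3 for x in xs) / n
    return m, c2, c3

random.seed(1)
N = 200_000
x = [random.expovariate(1.0) for _ in range(N)]
y = [random.expovariate(1.0) for _ in range(N)]  # independent of x
s = [a + b for a, b in zip(x, y)]
# for independent x, y each cumulant of s approximately equals the sum of cumulants
```

Replacing `y` with a signal correlated with `x` (e.g. a phase-shifted copy of a common periodic component) makes the cross terms non-negligible and the additivity check fail, which is the situation the note analyzes.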

  14. Cumulative effects assessment: Does scale matter?

    International Nuclear Information System (INIS)

    Therivel, Riki; Ross, Bill

    2007-01-01

    Cumulative effects assessment (CEA) is (or should be) an integral part of environmental assessment at both the project and the more strategic level. CEA helps to link the different scales of environmental assessment in that it focuses on how a given receptor is affected by the totality of plans, projects and activities, rather than on the effects of a particular plan or project. This article reviews how CEAs consider, and could consider, scale issues: spatial extent, level of detail, and temporal issues. It is based on an analysis of Canadian project-level CEAs and UK strategic-level CEAs. Based on a review of literature and, especially, case studies with which the authors are familiar, it concludes that scale issues are poorly considered at both levels, with particular problems being unclear or non-existing cumulative effects scoping methodologies; poor consideration of past or likely future human activities beyond the plan or project in question; attempts to apportion 'blame' for cumulative effects; and, at the plan level, limited management of cumulative effects caused particularly by the absence of consent regimes. Scale issues are important in most of these problems. However both strategic-level and project-level CEA have much potential for managing cumulative effects through better siting and phasing of development, demand reduction and other behavioural changes, and particularly through setting development consent rules for projects. The lack of strategic resource-based thresholds constrains the robust management of strategic-level cumulative effects

  15. Research on the Characteristics and Mechanism of the Cumulative Release of Antimony from an Antimony Smelting Slag Stacking Area under Rainfall Leaching

    Science.gov (United States)

    Zhou, Yingying; Deng, Renjian

    2017-01-01

    We aimed to study the characteristics and the mechanism of the cumulative release of antimony at an antimony smelting slag stacking area in southern China. A series of dynamic and static leaching experiments to simulate the effects of rainfall were carried out. The results showed that the release of antimony from smelting slag increased with a decrease in the solid-liquid ratio, and the maximum accumulated release was found to be 42.13 mg Sb/kg waste and 34.26 mg Sb/kg waste with a solid/liquid ratio of 1 : 20; the maximum amount of antimony was released from the 149–420 μm size fraction, with a cumulative leaching concentration of 7.09 mg/L. Also, the antimony release was the greatest and most rapid at pH 7.0, with the minimum release found at pH 4.0. With an increase in rainfall duration, the antimony release increased. The influence of variation in rainfall intensity on the release of antimony from smelting slag was small. PMID:28804669

  16. Sex ratios

    OpenAIRE

    West, Stuart A; Reece, S E; Sheldon, Ben C

    2002-01-01

    Sex ratio theory attempts to explain variation at all levels (species, population, individual, brood) in the proportion of offspring that are male (the sex ratio). In many cases this work has been extremely successful, providing qualitative and even quantitative explanations of sex ratio variation. However, this is not always the situation, and one of the greatest remaining problems is explaining broad taxonomic patterns. Specifically, why do different organisms show so ...

  17. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered remote renewable energy system utilizing the principle of Maximum Power Point Tracking can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher power ratings. Other advantages include optimal sizing and system monitoring and control.
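The hill-climbing idea described in the abstract can be sketched as a perturb-and-observe loop. This is an illustration only (toy power curve and all parameters invented); a real regulator perturbs the converter duty cycle and maximizes charging current rather than stepping a voltage variable directly:

```python
def mppt_hill_climb(power_at, v0=10.0, step=0.1, iters=200):
    """Perturb-and-observe: nudge the operating point, keep the direction
    while power rises, reverse it when power falls. The operating point
    ends up oscillating within a few steps of the maximum power point."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:
            direction = -direction  # overshot the peak: reverse
        p_prev = p
    return v

# toy power-voltage curve peaking at 17 V (illustration only)
def pv_curve(v):
    return max(0.0, 60.0 - 0.2 * (v - 17.0) ** 2)
```

The residual oscillation around the peak is the classic trade-off of perturb-and-observe: a smaller step reduces the steady-state ripple but slows convergence under changing irradiance.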

  18. The proportional odds cumulative incidence model for competing risks

    DEFF Research Database (Denmark)

    Eriksson, Frank; Li, Jianing; Scheike, Thomas

    2015-01-01

    We suggest an estimator for the proportional odds cumulative incidence model for competing risks data. The key advantage of this model is that the regression parameters have the simple and useful odds ratio interpretation. The model has been considered by many authors, but it is rarely used in practice due to the lack of reliable estimation procedures. We suggest such procedures and show that their performance improves considerably on existing methods. We also suggest a goodness-of-fit test for the proportional odds assumption. We derive the large sample properties and provide estimators

  19. Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2005-01-01

    cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling

  20. Predicting Cumulative Incidence Probability by Direct Binomial Regression

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard

  1. Managing cumulative impacts: A key to sustainability?

    Energy Technology Data Exchange (ETDEWEB)

    Hunsaker, C.T.

    1994-12-31

    This paper addresses how science can be more effectively used in creating policy to manage cumulative effects on ecosystems. The paper focuses on the scientific techniques that we have to identify and to assess cumulative impacts on ecosystems. The term "sustainable development" was brought into common use by the World Commission on Environment and Development (the Brundtland Commission) in 1987. The Brundtland Commission report highlighted the need to address developmental and environmental imperatives simultaneously by calling for development that "meets the needs of the present generation without compromising the needs of future generations." We cannot claim to be working toward sustainable development until we can quantitatively assess cumulative impacts on the environment: the two concepts are inextricably linked in that the elusiveness of cumulative effects likely has the greatest potential of keeping us from achieving sustainability. In this paper, assessment and management frameworks relevant to cumulative impacts are discussed along with recent literature on how to improve such assessments. When possible, examples are given for marine ecosystems.

  2. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  3. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  4. Perspectives on cumulative risks and impacts.

    Science.gov (United States)

    Faust, John B

    2010-01-01

    Cumulative risks and impacts have taken on different meanings in different regulatory and programmatic contexts at federal and state government levels. Traditional risk assessment methodologies, with considerable limitations, can provide a framework for the evaluation of cumulative risks from chemicals. Under an environmental justice program in California, cumulative impacts are defined to include exposures, public health effects, or environmental effects in a geographic area from the emission or discharge of environmental pollution from all sources, through all media. Furthermore, the evaluation of these effects should take into account sensitive populations and socioeconomic factors where possible and to the extent data are available. Key aspects to this potential approach include the consideration of exposures (versus risk), socioeconomic factors, the geographic or community-level assessment scale, and the inclusion of not only health effects but also environmental effects as contributors to impact. Assessments of this type extend the boundaries of the types of information that toxicologists generally provide for risk management decisions.

  5. Cumulative processes and quark distribution in nuclei

    International Nuclear Information System (INIS)

    Kondratyuk, L.; Shmatikov, M.

    1984-01-01

    Assuming the existence of multiquark (mainly 12q) bags in nuclei, the spectra of cumulative nucleons and mesons produced in high-energy particle-nucleus collisions are discussed. The exponential form of the quark momentum distribution in the 12q bag (agreeing well with the experimental data on lepton-nucleus interactions at large q²) is shown to result in a quasi-exponential distribution of cumulative particles over the light-cone variable αsub(B). The dependence of f(αsub(B); psub(perpendicular)) (where psub(perpendicular) is the transverse momentum of the bag) upon psub(perpendicular) is considered. The yields of cumulative resonances, as well as effects related to the different u- and d-quark distributions in N > Z nuclei, are discussed

  6. Cumulative Culture and Future Thinking: Is Mental Time Travel a Prerequisite to Cumulative Cultural Evolution?

    Science.gov (United States)

    Vale, G. L.; Flynn, E. G.; Kendal, R. L.

    2012-01-01

    Cumulative culture denotes the, arguably, human capacity to build on the cultural behaviors of one's predecessors, allowing increases in cultural complexity to occur such that many of our cultural artifacts, products and technologies have progressed beyond what a single individual could invent alone. This process of cumulative cultural evolution…

  7. Cumulative effects of wind turbines. A guide to assessing the cumulative effects of wind energy development

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This guidance provides advice on how to assess the cumulative effects of wind energy developments in an area and is aimed at developers, planners, and stakeholders interested in the development of wind energy in the UK. The principles of cumulative assessment, wind energy development in the UK, cumulative assessment of wind energy development, and best practice conclusions are discussed. The identification and assessment of the cumulative effects is examined in terms of global environmental sustainability, local environmental quality and socio-economic activity. Supplementary guidance for assessing the principle cumulative effects on the landscape, on birds, and on the visual effect is provided. The consensus building approach behind the preparation of this guidance is outlined in the annexes of the report.

  8. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss.
These results have tempted us to speculate over
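For the Mean Energy Model mentioned in the abstract, the entropy-maximizing distribution over a finite alphabet under a mean "energy" constraint has the Gibbs form p_i ∝ exp(−βE_i), with β the Lagrange multiplier chosen to hit the prescribed mean. A minimal sketch (hypothetical names) that finds β by bisection:

```python
import math

def maxent_gibbs(energies, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Entropy-maximizing distribution subject to sum_i p_i * E_i = target_mean.
    The solution is Gibbs: p_i ~ exp(-beta * E_i). The mean energy is strictly
    decreasing in beta, so bisection on beta locates the multiplier."""
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > target_mean:
            lo = mid   # mean too high: raise beta
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]
```

When the target mean equals the unconstrained average (e.g. energies 0..3 with target 1.5), β = 0 and the result is the uniform distribution, the maximum-entropy distribution with no active constraint.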

  9. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  10. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  11. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  12. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  13. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.

  14. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.

  15. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
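The robustness argument in the abstract rests on correntropy: the average of a Gaussian kernel applied to each residual, so a grossly mislabeled sample saturates the kernel instead of dominating the objective the way it would under squared loss. A minimal sketch (the kernel width sigma and the toy labels are assumptions, not from the paper):

```python
import math

def correntropy(y_pred, y_true, sigma=1.0):
    """Empirical correntropy between predictions and labels: the mean of a
    Gaussian kernel of each residual.  Each term lies in (0, 1], and a very
    large residual contributes almost zero, giving robustness to outliers."""
    gauss = lambda e: math.exp(-e * e / (2.0 * sigma * sigma))
    return sum(gauss(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

clean = correntropy([1.0, -1.0, 1.0], [1.0, -1.0, 1.0])   # perfect fit -> 1.0
noisy = correntropy([1.0, -1.0, 1.0], [1.0, -1.0, -1.0])  # one flipped label
```

With one flipped ±1 label the score drops only from 1.0 to about 0.71; under squared loss the same outlier would contribute a full quadratic penalty.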

  16. Multiparty correlation measure based on the cumulant

    International Nuclear Information System (INIS)

    Zhou, D. L.; Zeng, B.; Xu, Z.; You, L.

    2006-01-01

    We propose a genuine multiparty correlation measure for a multiparty quantum system as the trace norm of the cumulant of the state. The legitimacy of our multiparty correlation measure is explicitly demonstrated by proving it satisfies the five basic conditions required for a correlation measure. As an application we construct an efficient algorithm for the calculation of our measures for all stabilizer states

  17. Decision analysis with cumulative prospect theory.

    Science.gov (United States)

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
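The probability transformations examined above are typically the inverse-S-shaped weighting functions of cumulative prospect theory. A sketch using the Tversky-Kahneman form with their gains parameter γ = 0.61 (the particular functional form and parameter value are assumptions, not details of this study):

```python
def tk_weight(p, gamma=0.61):
    """Inverse-S probability weighting function of Tversky & Kahneman (1992);
    gamma = 0.61 is their estimate for gains (an assumption here)."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

def decision_weights(probs, gamma=0.61):
    """Decision weights for outcomes ranked best-first: differences of the
    transformed cumulative distribution,
    pi_i = w(p_1 + ... + p_i) - w(p_1 + ... + p_{i-1})."""
    weights, cum = [], 0.0
    for p in probs:
        prev = tk_weight(cum, gamma)
        cum += p
        weights.append(tk_weight(cum, gamma) - prev)
    return weights

# A 10% chance of the better outcome is overweighted (~0.19 instead of 0.10):
w = decision_weights([0.1, 0.9])
```

Because the weights telescope, they still sum to one; the transformation only redistributes weight toward small-probability extreme outcomes, which is exactly the mechanism that shifts the apparent advantage of prophylaxis in the analysis above.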

  18. Cumulative watershed effects: a research perspective

    Science.gov (United States)

    Leslie M. Reid; Robert R. Ziemer

    1989-01-01

    A cumulative watershed effect (CWE) is any response to multiple land-use activities that is caused by, or results in, altered watershed function. The CWE issue is politically defined, as is the significance of particular impacts. But the processes generating CWEs are the traditional focus of geomorphology and ecology, and have thus been studied for decades. The CWE...

  19. An evaluation paradigm for cumulative impact analysis

    Science.gov (United States)

    Stakhiv, Eugene Z.

    1988-09-01

    Cumulative impact analysis is examined from a conceptual decision-making perspective, focusing on its implicit and explicit purposes as suggested within the policy and procedures for environmental impact analysis of the National Environmental Policy Act of 1969 (NEPA) and its implementing regulations. In this article it is also linked to different evaluation and decision-making conventions, contrasting a regulatory context with a comprehensive planning framework. The specific problems that make the application of cumulative impact analysis a virtually intractable evaluation requirement are discussed in connection with the federal regulation of wetlands uses. The relatively familiar US Army Corps of Engineers' (the Corps) permit program, in conjunction with the Environmental Protection Agency's (EPA) responsibilities in managing its share of the Section 404 regulatory program requirements, is used throughout as the realistic context for highlighting certain pragmatic evaluation aspects of cumulative impact assessment. To understand the purposes of cumulative impact analysis (CIA), a key distinction must be made between the implied comprehensive and multiobjective evaluation purposes of CIA, promoted through the principles and policies contained in NEPA, and the more commonly conducted and limited assessment of cumulative effects (ACE), which focuses largely on the ecological effects of human actions. Based on current evaluation practices within the Corps' and EPA's permit programs, it is shown that the commonly used screening approach to regulating wetlands uses is not compatible with the purposes of CIA, nor is the environmental impact statement (EIS) an appropriate vehicle for evaluating the variety of objectives and trade-offs needed as part of CIA. 
A heuristic model that incorporates the basic elements of CIA is developed, including the idea of trade-offs among social, economic, and environmental protection goals carried out within the context of environmental

  20. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  1. Evaluating the maximum patient radiation dose in cardiac interventional procedures

    International Nuclear Information System (INIS)

    Kato, M.; Chida, K.; Sato, T.; Oosaka, H.; Tosa, T.; Kadowaki, K.

    2011-01-01

    Many of the X-ray systems that are used for cardiac interventional radiology provide no way to evaluate the patient maximum skin dose (MSD). The authors report a new method for evaluating the MSD by using the cumulative patient entrance skin dose (ESD), which includes a back-scatter factor and the number of cine-angiography frames during percutaneous coronary intervention (PCI). Four hundred consecutive PCI patients (315 men and 85 women) were studied. The correlation between the cumulative ESD and number of cine-angiography frames was investigated. The irradiation and overlapping fields were verified using dose-mapping software. A good correlation was found between the cumulative ESD and the number of cine-angiography frames. The MSD could be estimated using the proportion of cine-angiography frames used for the main angle of view relative to the total number of cine-angiography frames and multiplying this by the cumulative ESD. The average MSD (3.0±1.9 Gy) was lower than the average cumulative ESD (4.6±2.6 Gy). This method is an easy way to estimate the MSD during PCI. (authors)
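The estimation rule described can be written in one line: scale the cumulative ESD by the proportion of cine-angiography frames acquired at the main angle of view. The numbers below are hypothetical, chosen only to mirror the averages reported in the record (MSD about 3.0 Gy versus cumulative ESD about 4.6 Gy):

```python
def estimate_msd(cumulative_esd_gy, frames_main_angle, frames_total):
    """Estimate the maximum skin dose (MSD) from the cumulative entrance skin
    dose (ESD): scale the cumulative ESD by the fraction of cine-angiography
    frames acquired at the main angle of view."""
    if frames_total <= 0 or not 0 <= frames_main_angle <= frames_total:
        raise ValueError("frame counts are inconsistent")
    return cumulative_esd_gy * frames_main_angle / frames_total

# Hypothetical PCI case: 4.6 Gy cumulative ESD, 1200 of 1800 cine frames at
# the main angle of view -> MSD estimate of about 3.07 Gy.
msd = estimate_msd(4.6, 1200, 1800)
```

Since the main-angle fraction is at most one, the estimate is always bounded above by the cumulative ESD, consistent with the averages quoted.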

  2. The cumulative burden of double-stranded DNA virus detection after allogeneic HCT is associated with increased mortality.

    Science.gov (United States)

    Hill, Joshua A; Mayer, Bryan T; Xie, Hu; Leisenring, Wendy M; Huang, Meei-Li; Stevens-Ayers, Terry; Milano, Filippo; Delaney, Colleen; Sorror, Mohamed L; Sandmaier, Brenda M; Nichols, Garrett; Zerr, Danielle M; Jerome, Keith R; Schiffer, Joshua T; Boeckh, Michael

    2017-04-20

    Strategies to prevent active infection with certain double-stranded DNA (dsDNA) viruses after allogeneic hematopoietic cell transplantation (HCT) are limited by incomplete understanding of their epidemiology and clinical impact. We retrospectively tested weekly plasma samples from allogeneic HCT recipients at our center from 2007 to 2014. We used quantitative PCR to test for cytomegalovirus, BK polyomavirus, human herpesvirus 6B, HHV-6A, adenovirus, and Epstein-Barr virus between days 0 and 100 post-HCT. We evaluated risk factors for detection of multiple viruses and association of viruses with mortality through day 365 post-HCT with Cox models. Among 404 allogeneic HCT recipients, including 125 cord blood, 125 HLA-mismatched, and 154 HLA-matched HCTs, detection of multiple viruses was common through day 100: 90% had ≥1, 62% had ≥2, 28% had ≥3, and 5% had 4 or 5 viruses. Risk factors for detection of multiple viruses included cord blood or HLA-mismatched HCT, myeloablative conditioning, and acute graft-versus-host disease (P values < .01). Absolute lymphocyte count of <200 cells/mm³ was associated with greater virus exposure on the basis of the maximum cumulative viral load area under the curve (AUC) (P = .054). The maximum cumulative viral load AUC was the best predictor of early (days 0-100) and late (days 101-365) overall mortality (adjusted hazard ratio [aHR] = 1.36, 95% confidence interval [CI] [1.25, 1.49], and aHR = 1.04, 95% CI [1.0, 1.08], respectively) after accounting for immune reconstitution and graft-versus-host disease. In conclusion, detection of multiple dsDNA viruses was frequent after allogeneic HCT and had a dose-dependent association with increased mortality. These data suggest opportunities to improve outcomes with better antiviral strategies. © 2017 by The American Society of Hematology.
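The cumulative viral load AUC used as a predictor above can be illustrated as the trapezoidal area under longitudinal log10 viral-load measurements. The exact AUC definition in the study may differ, and the sample values below are invented:

```python
def viral_load_auc(days, log10_vl):
    """Trapezoidal area under a longitudinal log10 viral-load curve, one way
    to summarize cumulative virus exposure from weekly PCR samples."""
    if len(days) != len(log10_vl) or len(days) < 2:
        raise ValueError("need paired samples at two or more time points")
    auc = 0.0
    for i in range(1, len(days)):
        auc += 0.5 * (log10_vl[i] + log10_vl[i - 1]) * (days[i] - days[i - 1])
    return auc

# Invented weekly samples over days 0-28 post-HCT:
auc = viral_load_auc([0, 7, 14, 21, 28], [0.0, 2.0, 3.0, 2.0, 0.0])  # -> 49.0
```

Summing such per-virus AUCs gives a single dose-like exposure measure, which is what lets the Cox models above express mortality risk per unit of cumulative viral burden.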

  3. Sharing a quota on cumulative carbon emissions

    International Nuclear Information System (INIS)

    Raupach, Michael R.; Davis, Steven J.; Peters, Glen P.; Andrew, Robbie M.; Canadell, Josep G.; Ciais, Philippe

    2014-01-01

    Any limit on future global warming is associated with a quota on cumulative global CO₂ emissions. We translate this global carbon quota to regional and national scales, on a spectrum of sharing principles that extends from continuation of the present distribution of emissions to an equal per-capita distribution of cumulative emissions. A blend of these endpoints emerges as the most viable option. For a carbon quota consistent with a 2 °C warming limit (relative to pre-industrial levels), the necessary long-term mitigation rates are very challenging (typically over 5% per year), both because of strong limits on future emissions from the global carbon quota and also the likely short-term persistence in emissions growth in many regions. (authors)
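The sharing spectrum described, from continuation of present emission shares to an equal per-capita split, is a convex blend of two allocations. A sketch with two invented regions (the blend weight and numbers are illustrative, not the paper's):

```python
def blended_shares(emission_shares, population_shares, w):
    """Region shares of a global cumulative carbon quota, blending
    continuation of current emission shares (w = 0, 'inertia') with an
    equal per-capita split (w = 1, shares proportional to population)."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("blend weight w must lie in [0, 1]")
    return [(1.0 - w) * e + w * p
            for e, p in zip(emission_shares, population_shares)]

# Two invented regions: one emits 40% of CO2 with 10% of world population.
shares = blended_shares([0.4, 0.6], [0.1, 0.9], w=0.5)  # ≈ [0.25, 0.75]
```

Because both input vectors sum to one, any blend does too, so the global quota is always fully (and exactly) allocated regardless of the weight chosen.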

  4. Complexity and demographic explanations of cumulative culture.

    Science.gov (United States)

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  5. Complexity and demographic explanations of cumulative culture.

    Directory of Open Access Journals (Sweden)

    Adrien Querbes

    Full Text Available Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  6. Conceptual models for cumulative risk assessment.

    Science.gov (United States)

    Linder, Stephen H; Sexton, Ken

    2011-12-01

    In the absence of scientific consensus on an appropriate theoretical framework, cumulative risk assessment and related research have relied on speculative conceptual models. We argue for the importance of theoretical backing for such models and discuss 3 relevant theoretical frameworks, each supporting a distinctive "family" of models. Social determinant models postulate that unequal health outcomes are caused by structural inequalities; health disparity models envision social and contextual factors acting through individual behaviors and biological mechanisms; and multiple stressor models incorporate environmental agents, emphasizing the intermediary role of these and other stressors. The conclusion is that more careful reliance on established frameworks will lead directly to improvements in characterizing cumulative risk burdens and accounting for disproportionate adverse health effects.

  7. Childhood Cumulative Risk and Later Allostatic Load

    DEFF Research Database (Denmark)

    Doan, Stacey N; Dich, Nadya; Evans, Gary W

    2014-01-01

    Objective: The present study investigated the long-term impact of exposure to poverty-related stressors during childhood on allostatic load, an index of physiological dysregulation, and the potential mediating role of substance use. Method: Participants (n = 162) were rural children from New York State, followed for 8 years (between the ages 9 and 17). Poverty-related stress was computed using the cumulative risk approach, assessing stressors across 9 domains, including environmental, psychosocial, and demographic factors. Allostatic load captured a range of physiological responses, including cardiovascular, hypothalamic pituitary adrenal axis, sympathetic adrenal medullary system, and metabolic activity. Smoking and alcohol/drug use were tested as mediators of the hypothesized childhood risk-adolescent allostatic load relationship. Results: Cumulative risk exposure at age 9 predicted increases...

  8. Fuzzy set theory for cumulative trauma prediction

    OpenAIRE

    Fonseca, Daniel J.; Merritt, Thomas W.; Moynihan, Gary P.

    2001-01-01

    A widely used fuzzy reasoning algorithm was modified and implemented via an expert system to assess the potential risk of employee repetitive strain injury in the workplace. This fuzzy relational model, known as the Priority First Cover Algorithm (PFC), was adapted to describe the relationship between 12 cumulative trauma disorders (CTDs) of the upper extremity, and 29 identified risk factors. The algorithm, which finds a suboptimal subset from a group of variables based on the criterion of...

  9. Sikap Kerja Duduk Terhadap Cumulative Trauma Disorder

    OpenAIRE

    Rahmawati, Yulita; Sugiharto, -

    2011-01-01

    The research question was whether there is a relationship between seated working posture and the incidence of Cumulative Trauma Disorder (CTD) among workers in the sanding department at PT. Geromar Jepara. The objective was to determine the relationship between seated working posture and the incidence of CTD among sanding-department workers. This was an explanatory study using a cross-sectional approach. The study population comprised the 30 workers of the sanding department. The technique ...

  10. Power Reactor Docket Information. Annual cumulation (citations)

    International Nuclear Information System (INIS)

    1977-12-01

    An annual cumulation of the citations to the documentation associated with civilian nuclear power plants is presented. This material is that which is submitted to the U.S. Nuclear Regulatory Commission in support of applications for construction and operating licenses. Citations are listed by Docket number in accession number sequence. The Table of Contents is arranged both by Docket number and by nuclear power plant name

  11. Cumulative Effect of Depression on Dementia Risk

    OpenAIRE

    Olazarán, J.; Trincado, R.; Bermejo-Pareja, F.

    2013-01-01

    Objective. To analyze a potential cumulative effect of life-time depression on dementia and Alzheimer's disease (AD), with control of vascular factors (VFs). Methods. This study was a subanalysis of the Neurological Disorders in Central Spain (NEDICES) study. Past and present depression, VFs, dementia status, and dementia due to AD were documented at study inception. Dementia status was also documented after three years. Four groups were created according to baseline data: never depression (n...

  12. Cumulative release to the accessible environment

    International Nuclear Information System (INIS)

    Kanehiro, B.

    1985-01-01

    The Containment and Isolation Working Group considered issues related to the postclosure behavior of repositories in crystalline rock. This working group was further divided into subgroups to consider the progress since the 1978 GAIN Symposium and identify research needs in the individual areas of regional ground-water flow, ground-water travel time, fractional release, and cumulative release. The analysis and findings of the Fractional Release Subgroup are presented

  13. EPA Workshop on Epigenetics and Cumulative Risk ...

    Science.gov (United States)

    Agenda Download the Workshop Agenda (PDF) The workshop included presentations and discussions by scientific experts pertaining to three topics (i.e., epigenetic changes associated with diverse stressors, key science considerations in understanding epigenetic changes, and practical application of epigenetic tools to address cumulative risks from environmental stressors), to address several questions under each topic, and included an opportunity for attendees to participate in break-out groups, provide comments and ask questions. Workshop Goals The workshop seeks to examine the opportunity for use of aggregate epigenetic change as an indicator in cumulative risk assessment for populations exposed to multiple stressors that affect epigenetic status. Epigenetic changes are specific molecular changes around DNA that alter expression of genes. Epigenetic changes include DNA methylation, formation of histone adducts, and changes in micro RNAs. Research today indicates that epigenetic changes are involved in many chronic diseases (cancer, cardiovascular disease, obesity, diabetes, mental health disorders, and asthma). Research has also linked a wide range of stressors including pollution and social factors with occurrence of epigenetic alterations. Epigenetic changes have the potential to reflect impacts of risk factors across multiple stages of life. Only recently receiving attention is the nexus between the factors of cumulative exposure to environmental

  14. Cumulative irritation potential of topical retinoid formulations.

    Science.gov (United States)

    Leyden, James J; Grossman, Rachel; Nighland, Marge

    2008-08-01

    Localized irritation can limit treatment success with topical retinoids such as tretinoin and adapalene. The factors that influence irritant reactions have been shown to include individual skin sensitivity, the particular retinoid and concentration used, and the vehicle formulation. To compare the cutaneous tolerability of tretinoin 0.04% microsphere gel (TMG) with that of adapalene 0.3% gel and a standard tretinoin 0.025% cream. The results of 2 randomized, investigator-blinded studies of 2 to 3 weeks' duration, which utilized a split-face method to compare cumulative irritation scores induced by topical retinoids in subjects with healthy skin, were combined. Study 1 compared TMG 0.04% with adapalene 0.3% gel over 2 weeks, while study 2 compared TMG 0.04% with tretinoin 0.025% cream over 3 weeks. In study 1, TMG 0.04% was associated with significantly lower cumulative scores for erythema, dryness, and burning/stinging than adapalene 0.3% gel. However, in study 2, there were no significant differences in cumulative irritation scores between TMG 0.04% and tretinoin 0.025% cream. Measurements of erythema by a chromameter showed no significant differences between the test formulations in either study. Cutaneous tolerance of TMG 0.04% on the face was superior to that of adapalene 0.3% gel and similar to that of a standard tretinoin cream containing a lower concentration of the drug (0.025%).

  15. Cumulative phase delay imaging for contrast-enhanced ultrasound tomography

    International Nuclear Information System (INIS)

    Demi, Libertario; Van Sloun, Ruud J G; Wijkstra, Hessel; Mischi, Massimo

    2015-01-01

    Standard dynamic-contrast enhanced ultrasound (DCE-US) imaging detects and estimates ultrasound-contrast-agent (UCA) concentration based on the amplitude of the nonlinear (harmonic) components generated during ultrasound (US) propagation through UCAs. However, harmonic components generation is not specific to UCAs, as it also occurs for US propagating through tissue. Moreover, nonlinear artifacts affect standard DCE-US imaging, causing contrast to tissue ratio reduction, and resulting in possible misclassification of tissue and misinterpretation of UCA concentration. Furthermore, no contrast-specific modality exists for DCE-US tomography; in particular speed-of-sound changes due to UCAs are well within those caused by different tissue types. Recently, a new marker for UCAs has been introduced. A cumulative phase delay (CPD) between the second harmonic and fundamental component is in fact observable for US propagating through UCAs, and is absent in tissue. In this paper, tomographic US images based on CPD are for the first time presented and compared to speed-of-sound US tomography. Results show the applicability of this marker for contrast specific US imaging, with cumulative phase delay imaging (CPDI) showing superior capabilities in detecting and localizing UCA, as compared to speed-of-sound US tomography. Cavities (filled with UCA) which were down to 1 mm in diameter were clearly detectable. Moreover, CPDI is free of the above mentioned nonlinear artifacts. These results open important possibilities to DCE-US tomography, with potential applications to breast imaging for cancer localization. (fast track communication)
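The CPD marker compares the phase of the second harmonic against twice the fundamental phase. A toy single-bin DFT sketch on a synthetic signal (this illustrates the idea of the marker only; it is not the paper's estimator, and the signal parameters are invented):

```python
import cmath
import math

def bin_phase(signal, freq, fs):
    """Phase of one frequency component, via a single-bin DFT."""
    acc = sum(s * cmath.exp(-2j * math.pi * freq * k / fs)
              for k, s in enumerate(signal))
    return cmath.phase(acc)

def cumulative_phase_delay(signal, f0, fs):
    """Phase of the second harmonic minus twice the fundamental phase,
    wrapped to (-pi, pi].  Zero for a tissue-like signal; nonzero when the
    harmonic is delayed, as reported for ultrasound contrast agents."""
    d = bin_phase(signal, 2 * f0, fs) - 2 * bin_phase(signal, f0, fs)
    return (d + math.pi) % (2 * math.pi) - math.pi

# Synthetic test signal: fundamental plus a second harmonic carrying an
# extra 0.4 rad of phase (an integer number of cycles, so no leakage).
fs, f0, delay = 8000.0, 100.0, 0.4
sig = [math.cos(2 * math.pi * f0 * k / fs) +
       0.3 * math.cos(2 * math.pi * 2 * f0 * k / fs + delay)
       for k in range(800)]
cpd = cumulative_phase_delay(sig, f0, fs)  # recovers ~0.4
```

Referencing the harmonic phase to twice the fundamental phase makes the marker insensitive to overall propagation delay, which is why it can separate UCA from tissue where speed-of-sound contrast cannot.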

  16. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

    Keywords: shock model; cumulative damage model; cumulative repair cost limit; preventive maintenance model.

  17. On interference of cumulative proton production mechanisms

    International Nuclear Information System (INIS)

    Braun, M.A.; Vechernin, V.V.

    1993-01-01

    The dynamical picture of the cumulative proton production in hA-collisions by means of diagram analysis with NN interaction described by a non-relativistic NN potential is considered. The contributions of the various mechanisms (spectator, direct and rescattering) for backward hemisphere proton production within the framework of this common approach is calculated. The emphasis is on the comparison of the relative contributions of these mechanisms for various angles, taking into account the interference of these contributions. Comparison with experimental data is also presented. (author)

  18. Preserved cumulative semantic interference despite amnesia

    Directory of Open Access Journals (Sweden)

    Gary Michael Oppenheim

    2015-05-01

    As predicted by Oppenheim et al.'s (2010) implicit incremental learning account, WRP's BCN RTs demonstrated strong (and significant) repetition priming and semantic blocking effects (Figure 1). Similar to typical results from neurally intact undergraduates, WRP took longer to name pictures presented in semantically homogeneous blocks than in heterogeneous blocks, an effect that increased with each cycle. This result challenges accounts that ascribe cumulative semantic interference in this task to explicit memory mechanisms, instead suggesting that the effect has the sort of implicit learning bases that are typically spared in hippocampal amnesia.

  19. Is cumulated pyrethroid exposure associated with prediabetes?

    DEFF Research Database (Denmark)

    Hansen, Martin Rune; Jørs, Erik; Lander, Flemming

    2014-01-01

    The aim was to investigate an association between exposure to pyrethroids and abnormal glucose regulation (prediabetes or diabetes). A cross-sectional study was performed among 116 pesticide sprayers from public vector control programs in Bolivia and 92 nonexposed controls. Pesticide exposure (duration, intensity...... pyrethroids, a significant positive trend was observed between cumulative pesticide exposure (total number of hours sprayed) and adjusted OR of abnormal glucose regulation, with OR 14.7 [0.9-235] in the third exposure quintile. The study found a severely increased prevalence of prediabetes among Bolivian...

  20. Chapter 19. Cumulative watershed effects and watershed analysis

    Science.gov (United States)

    Leslie M. Reid

    1998-01-01

Cumulative watershed effects are environmental changes that are affected by more than one land-use activity and that are influenced by processes involving the generation or transport of water. Almost all environmental changes are cumulative effects, and almost all land-use activities contribute to cumulative effects.

  1. Original and cumulative prospect theory: a discussion of empirical differences

    NARCIS (Netherlands)

    Wakker, P.P.; Fennema, H.

    1997-01-01

This note discusses differences between prospect theory and cumulative prospect theory. It shows that cumulative prospect theory is not merely a formal correction of some theoretical problems in prospect theory; it also gives different predictions. Experiments are described that favor cumulative prospect theory.

  2. Cumulative Environmental Management Association : Wood Buffalo Region

    International Nuclear Information System (INIS)

    Friesen, B.

    2001-01-01

The recently announced oil sands development of the Wood Buffalo Region in Alberta was the focus of this PowerPoint presentation. Mining and in situ development combined are expected to total $26 billion and 2.6 million barrels per day of bitumen production. This paper described the economic, social and environmental challenges facing the resource development of this region. In addition to the proposed oil sands projects, this region will accommodate the needs of conventional oil and gas production, forestry, building of pipelines and power lines, municipal development, recreation, tourism, mining exploration and open cast mining. The Cumulative Environmental Management Association (CEMA) was inaugurated as a non-profit association in April 2000, and includes 41 members from all sectors. Its major role is to ensure a sustainable ecosystem and to avoid any cumulative impacts on wildlife. Other work underway includes the study of soil and plant species diversity, and the effects of air emissions on human health, wildlife and vegetation. The bioaccumulation of heavy metals and their impacts on surface water and fish is also under consideration to ensure the quality and quantity of surface water and ground water. 3 figs

  3. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  4. Psychometric properties of the Cumulated Ambulation Score

    DEFF Research Database (Denmark)

    Ferriero, Giorgio; Kristensen, Morten T; Invernizzi, Marco

    2018-01-01

INTRODUCTION: In the geriatric population, independent mobility is a key factor in determining readiness for discharge following acute hospitalization. The Cumulated Ambulation Score (CAS) is a potentially valuable score that allows day-to-day measurements of basic mobility. The CAS was developed...... and validated in older patients with hip fracture as an early postoperative predictor of short-term outcome, but it is also used to assess geriatric in-patients with acute medical illness. Despite the fast-accumulating literature on the CAS, to date no systematic review synthesizing its psychometric properties....... Of 49 studies identified, 17 examined the psychometric properties of the CAS. EVIDENCE SYNTHESIS: Most papers dealt with patients after hip fracture surgery, and only 4 studies assessed the CAS psychometric characteristics also in geriatric in-patients with acute medical illness. Two versions of CAS...

  5. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2. Apparent partial molar densities in seawater were
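As a rough illustration of the density-to-salinity inversion described above (not the full equation of state plus ion-by-ion composition corrections used in the study), a linearized equation of state near standard seawater can be inverted directly. The reference density and haline slope below are assumed round numbers for the sketch:

```python
# Rough illustration only: invert a LINEARIZED equation of state for salinity
# near standard seawater. The reference density and haline slope below are
# assumed round numbers; the study itself uses a full equation of state plus
# corrections for each ion's apparent partial molar density.

RHO_REF = 1.0248   # g/mL at S = 35 g/kg (assumed)
BETA = 7.8e-4      # d(rho)/dS in (g/mL) per (g/kg) (assumed)

def density_from_salinity(s):
    """Linearized density (g/mL) as a function of salinity (g/kg)."""
    return RHO_REF + BETA * (s - 35.0)

def salinity_from_density(rho):
    """Invert the linear relation for salinity (g/kg)."""
    return 35.0 + (rho - RHO_REF) / BETA
```

Under this assumed slope, a density precision of 2.3x10-6 g/mL propagates to roughly 0.003 g/kg in salinity, the same order as the uncertainty quoted above.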

  6. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    Energy Technology Data Exchange (ETDEWEB)

    Zanca, F., E-mail: Federica.Zanca@med.kuleuven.be [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium and Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven (Belgium); Jacobs, A. [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); Crijns, W. [Department of Radiotherapy, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); De Wever, W. [Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven, Belgium and Department of Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium)

    2014-07-15

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure.

  7. Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies

    International Nuclear Information System (INIS)

    Zanca, F.; Jacobs, A.; Crijns, W.; De Wever, W.

    2014-01-01

    Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using EBT2 gafchromic films positioned on patient skin. Lesion size, position, total fluoroscopy time, and patient-effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed at every 30°, for a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure

  8. MAXIMUM LIKELIHOOD CLASSIFICATION OF HIGH-RESOLUTION SAR IMAGES IN URBAN AREA

    Directory of Open Access Journals (Sweden)

    M. Soheili Majd

    2012-09-01

In this work, we propose a state-of-the-art statistical analysis of polarimetric synthetic aperture radar (SAR) data through the modeling of several indices. We concentrate on eight ground classes characterized by amplitudes, the co-polarisation ratio, depolarization ratios, and other polarimetric descriptors. To study their different statistical behaviours, we consider Gauss, log-normal, Beta I, Weibull, Gamma, and Fisher statistical models and estimate their parameters using three methods: the method of moments (MoM), maximum-likelihood (ML) methodology, and the method of log-cumulants (MoML). Then, we study the opportunity of introducing this information in an adapted supervised classification scheme based on maximum-likelihood and the Fisher pdf. Our work relies on an image of a suburban area acquired by the airborne RAMSES SAR sensor of ONERA. The results prove the potential of such data to discriminate urban surfaces and show the usefulness of adapting any classical classification algorithm; however, classification maps present a persistent class confusion between flat gravelled or concrete roofs and trees.
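As an illustrative sketch only (not the paper's implementation), the fragment below fits one of the candidate models named above, a Gamma law, to each class's training amplitudes by the method of moments (MoM), then assigns a pixel amplitude to the class with the highest log-likelihood. Class names and training data are invented:

```python
import math
import numpy as np

# Hedged sketch of per-class Gamma modeling (MoM fit) followed by
# maximum-likelihood classification of a single amplitude sample.

def fit_gamma_mom(samples):
    """Method-of-moments estimates of Gamma shape k and scale theta."""
    m, v = np.mean(samples), np.var(samples)
    return m * m / v, v / m

def gamma_loglik(x, k, theta):
    """Log-density of Gamma(k, theta) at x > 0."""
    return (k - 1) * math.log(x) - x / theta - k * math.log(theta) - math.lgamma(k)

def classify(x, class_params):
    """Maximum-likelihood class label for amplitude x."""
    return max(class_params, key=lambda c: gamma_loglik(x, *class_params[c]))
```

ML or log-cumulant estimators could be substituted for `fit_gamma_mom` without changing the classification step.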

  9. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced for planets with higher Fe/Si ratios and when irradiation effects on the structure of the gas envelope are taken into account.

  10. 7 CFR 42.132 - Determining cumulative sum values.

    Science.gov (United States)

    2010-01-01

Title 7 (Agriculture), § 42.132 — Determining cumulative sum values. (a) The parameters for the on-line cumulative sum sampling plans for AQLs... [parameter table not recoverable] (b) At the beginning of the basic inspection period, the CuSum value is set equal to...
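The regulation's own parameter table is elided above; as a generic illustration of the cumulative-sum idea behind such on-line sampling plans (not the actual 7 CFR 42.132 parameters or starting values), a one-sided CUSUM can be sketched as:

```python
# Generic one-sided upper CUSUM: accumulate deviations of each sample above
# a target, minus an allowance k, clamped at a reset floor. The target,
# allowance, and floor here are illustrative parameters, not the AQL-specific
# values of the regulation.

def cusum_upper(samples, target, k, reset_floor=0.0):
    """Return the CuSum trajectory over the samples."""
    s = reset_floor
    history = []
    for x in samples:
        s = max(reset_floor, s + (x - target) - k)
        history.append(s)
    return history
```

In a sampling plan, the running value would be compared against an action limit after each sample.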

  11. Improving cumulative effects assessment in Alberta: Regional strategic assessment

    International Nuclear Information System (INIS)

    Johnson, Dallas; Lalonde, Kim; McEachern, Menzie; Kenney, John; Mendoza, Gustavo; Buffin, Andrew; Rich, Kate

    2011-01-01

    The Government of Alberta, Canada is developing a regulatory framework to better manage cumulative environmental effects from development in the province. A key component of this effort is regional planning, which will lay the primary foundation for cumulative effects management into the future. Alberta Environment has considered the information needs of regional planning and has concluded that Regional Strategic Assessment may offer significant advantages if integrated into the planning process, including the overall improvement of cumulative environmental effects assessment in the province.

  12. Output factors and scatter ratios

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, P N; Summers, R E; Samulski, T V; Baird, L C [Allegheny General Hospital, Pittsburgh, PA (USA); Ahuja, A S; Dubuque, G L; Hendee, W R; Chhabra, A S

    1979-07-01

    Reference is made to a previous publication on output factors and scatter ratios for radiotherapy units in which it was suggested that the output factor should be included in the definitions of scatter-air ratio and tissue-maximum ratio. In the present correspondence from other authors and from the authors of the previous publication, the original definitions and the proposed changes are discussed. Radiation scatter from source and collimator degradation of beam energy and calculation of dose in tissue are considered in relation to the objective of accurate dosimetry.

  13. Children neglected: Where cumulative risk theory fails.

    Science.gov (United States)

    O'Hara, Mandy; Legano, Lori; Homel, Peter; Walker-Descartes, Ingrid; Rojas, Mary; Laraque, Danielle

    2015-07-01

    Neglected children, by far the majority of children maltreated, experience an environment most deficient in cognitive stimulation and language exchange. When physical abuse co-occurs with neglect, there is more stimulation through negative parent-child interaction, which may lead to better cognitive outcomes, contrary to Cumulative Risk Theory. The purpose of the current study was to assess whether children only neglected perform worse on cognitive tasks than children neglected and physically abused. Utilizing LONGSCAN archived data, 271 children only neglected and 101 children neglected and physically abused in the first four years of life were compared. The two groups were assessed at age 6 on the WPPSI-R vocabulary and block design subtests, correlates of cognitive intelligence. Regression analyses were performed, controlling for additional predictors of poor cognitive outcome, including socioeconomic variables and caregiver depression. Children only neglected scored significantly worse than children neglected and abused on the WPPSI-R vocabulary subtest (p=0.03). The groups did not differ on the block design subtest (p=0.4). This study shows that for neglected children, additional abuse may not additively accumulate risk when considering intelligence outcomes. Children experiencing only neglect may need to be referred for services that address cognitive development, with emphasis on the linguistic environment, in order to best support the developmental challenges of neglected children. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Standardization of the cumulative absolute velocity

    International Nuclear Information System (INIS)

    O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

EPRI NP-5930, ''A Criterion for Determining Exceedance of the Operating Basis Earthquake,'' was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window its calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec.
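The windowed calculation described above can be sketched as follows, assuming a uniformly sampled acceleration record in units of g; the 0.025 g screening threshold and one-second windowing follow the text, while the function name and rectangle-rule integration are illustrative choices:

```python
import numpy as np

# Sketch of a standardized CAV calculation: integrate |a(t)| over each
# one-second window, counting only windows whose peak absolute acceleration
# exceeds 0.025 g. Not the EPRI reference implementation.

def standardized_cav(accel_g, dt):
    """Standardized Cumulative Absolute Velocity (g-sec)."""
    accel_g = np.asarray(accel_g, dtype=float)
    n_per_win = int(round(1.0 / dt))
    cav = 0.0
    for start in range(0, len(accel_g), n_per_win):
        window = np.abs(accel_g[start:start + n_per_win])
        if window.max() > 0.025:
            cav += window.sum() * dt  # rectangle-rule integral of |a|
    return cav
```

A long record of sub-threshold shaking then contributes nothing, which is the point of the standardization.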

  15. Analysis of Memory Codes and Cumulative Rehearsal in Observational Learning

    Science.gov (United States)

    Bandura, Albert; And Others

    1974-01-01

    The present study examined the influence of memory codes varying in meaningfulness and retrievability and cumulative rehearsal on retention of observationally learned responses over increasing temporal intervals. (Editor)

  16. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  17. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  18. Cumulative Effect of Depression on Dementia Risk

    Directory of Open Access Journals (Sweden)

    J. Olazarán

    2013-01-01

Objective. To analyze a potential cumulative effect of life-time depression on dementia and Alzheimer's disease (AD), with control of vascular factors (VFs). Methods. This study was a subanalysis of the Neurological Disorders in Central Spain (NEDICES) study. Past and present depression, VFs, dementia status, and dementia due to AD were documented at study inception. Dementia status was also documented after three years. Four groups were created according to baseline data: never depression (nD), past depression (pD), present depression (prD), and present and past depression (prpD). Logistic regression was used. Results. Data of 1,807 subjects were investigated at baseline (mean age 74.3, 59.3% women), and 1,376 (81.6%) subjects were evaluated after three years. The prevalence of dementia at baseline was 6.7%, and dementia incidence was 6.3%. An effect of depression was observed on dementia prevalence (OR [95% CI] 1.84 [1.01–3.35] for prD and 2.73 [1.08–6.87] for prpD) and on dementia due to AD (OR 1.98 [0.98–3.99] for prD and OR 3.98 [1.48–10.71] for prpD; fully adjusted models, nD as reference). Depression did not influence dementia incidence. Conclusions. Present depression and, particularly, present and past depression are associated with dementia at old age. Multiple mechanisms, including a toxic effect of depression on hippocampal neurons, plausibly explain these associations.

  19. Quantitative cumulative biodistribution of antibodies in mice

    Science.gov (United States)

    Yip, Victor; Palma, Enzo; Tesar, Devin B; Mundo, Eduardo E; Bumbaca, Daniela; Torres, Elizabeth K; Reyes, Noe A; Shen, Ben Q; Fielder, Paul J; Prabhu, Saileta; Khawli, Leslie A; Boswell, C Andrew

    2014-01-01

The neonatal Fc receptor (FcRn) plays an important and well-known role in antibody recycling in endothelial and hematopoietic cells and thus it influences the systemic pharmacokinetics (PK) of immunoglobulin G (IgG). However, considerably less is known about FcRn's role in the metabolism of IgG within individual tissues after intravenous administration. To elucidate the organ distribution and gain insight into the metabolism of humanized IgG1 antibodies with different binding affinities for FcRn, comparative biodistribution studies in normal CD-1 mice were conducted. Here, we generated variants of a herpes simplex virus glycoprotein D-specific antibody (humanized anti-gD) with increased and decreased FcRn binding affinity by genetic engineering, without affecting antigen specificity. These antibodies were expressed in Chinese hamster ovary cell lines, purified, and radiolabeled pairwise with iodine-125 and indium-111. Equal amounts of I-125-labeled and In-111-labeled antibodies were mixed and intravenously administered into mice at 5 mg/kg. This approach allowed us to measure both the real-time IgG uptake (I-125) and the cumulative uptake of IgG and catabolites (In-111) in individual tissues up to 1 week post-injection. The PK and distribution of the wild-type IgG and the variant with enhanced binding for FcRn were largely similar to each other, but vastly different for the rapidly cleared low-FcRn-binding variant. Uptake in individual tissues varied across time, FcRn binding affinity, and radiolabeling method. The liver and spleen emerged as the most concentrated sites of IgG catabolism in the absence of FcRn protection. These data provide an increased understanding of FcRn's role in antibody PK and catabolism at the tissue level. PMID:24572100

  20. A Framework for Treating Cumulative Trauma with Art Therapy

    Science.gov (United States)

    Naff, Kristina

    2014-01-01

    Cumulative trauma is relatively undocumented in art therapy practice, although there is growing evidence that art therapy provides distinct benefits for resolving various traumas. This qualitative study proposes an art therapy treatment framework for cumulative trauma derived from semi-structured interviews with three art therapists and artistic…

  1. Cumulative effects of forest management activities: how might they occur?

    Science.gov (United States)

    R. M. Rice; R. B. Thomas

    1985-01-01

    Concerns are often voiced about possible environmental damage as the result of the cumulative sedimentation effects of logging and forest road construction. In response to these concerns, National Forests are developing procedures to reduce the possibility that their activities may lead to unacceptable cumulative effects

  2. Cumulative effect in multiple production processes on nuclei

    International Nuclear Information System (INIS)

    Golubyatnikova, E.S.; Shmonin, V.L.; Kalinkin, B.N.

    1989-01-01

    It is shown that the cumulative effect is a natural result of the process of hadron multiple production in nuclear reactions. Interpretation is made of the universality of slopes of inclusive spectra and other characteristics of cumulative hadrons. The character of information from such reactions is discussed, which could be helpful in studying the mechanism of multiparticle production. 27 refs.; 4 figs

  3. Cumulative particle production in the quark recombination model

    International Nuclear Information System (INIS)

    Gavrilov, V.B.; Leksin, G.A.

    1987-01-01

Production of cumulative particles in hadron-nuclear interactions at high energies is considered within the framework of the recombination quark model. Predictions for inclusive cross sections of production of cumulative particles and different resonances containing quarks in s state are made

  4. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  5. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  6. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  7. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]

  8. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system that appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
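The abstract does not spell out the authors' algorithm. As a hedged illustration of an unfolding iteration that shares the properties highlighted above, embodying Poisson counting statistics and keeping the solution positive everywhere, here is the standard maximum-likelihood EM (Richardson-Lucy type) update for counts y observed through a response matrix R:

```python
import numpy as np

# Generic ML-EM unfolding for Poisson counts y ~ Poisson(R @ x).
# Each multiplicative update preserves positivity and increases the
# Poisson likelihood; this is an illustration, not the paper's method.

def mlem_unfold(R, y, n_iter=500):
    """Iterate x <- x * (R^T (y / (R x))) / (R^T 1), from a positive guess."""
    R = np.asarray(R, dtype=float)
    y = np.asarray(y, dtype=float)
    x = np.full(R.shape[1], y.sum() / R.shape[1])  # positive initial guess
    sens = R.sum(axis=0)  # per-bin sensitivity, assumed nonzero
    for _ in range(n_iter):
        x *= (R.T @ (y / (R @ x))) / sens
    return x
```

For consistent, noise-free data the iteration converges to the exact spectrum whenever that spectrum is nonnegative.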

  9. High cumulants of conserved charges and their statistical uncertainties

    Science.gov (United States)

    Li-Zhu, Chen; Ye-Yin, Zhao; Xue, Pan; Zhi-Ming, Li; Yuan-Fang, Wu

    2017-10-01

    We study the influence of measured high cumulants of conserved charges on their associated statistical uncertainties in relativistic heavy-ion collisions. With a given number of events, the measured cumulants randomly fluctuate with an approximately normal distribution, while the estimated statistical uncertainties are found to be correlated with corresponding values of the obtained cumulants. Generally, with a given number of events, the larger the cumulants we measure, the larger the statistical uncertainties that are estimated. The error-weighted averaged cumulants are dependent on statistics. Despite this effect, however, it is found that the three sigma rule of thumb is still applicable when the statistics are above one million. Supported by NSFC (11405088, 11521064, 11647093), Major State Basic Research Development Program of China (2014CB845402) and Ministry of Science and Technology (MoST) (2016YFE0104800)
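The measured quantities discussed above can be sketched as follows: sample cumulants up to fourth order computed from event-by-event conserved-charge numbers, with statistical uncertainties estimated by a simple bootstrap. The synthetic data, event count, and bootstrap settings are assumptions, not the paper's procedure:

```python
import numpy as np

# Hedged sketch: C1..C4 from central moments, plus bootstrap errors.
# A Poisson-distributed charge number, whose cumulants all equal the mean,
# makes a convenient sanity check.

def cumulants(x):
    """Sample cumulants C1..C4 (C3 = m3, C4 = m4 - 3*m2**2)."""
    x = np.asarray(x, dtype=float)
    c = x - x.mean()
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    return np.array([x.mean(), m2, m3, m4 - 3.0 * m2**2])

def bootstrap_errors(x, n_boot=200, seed=0):
    """Std. dev. of each cumulant over bootstrap resamples of the events."""
    rng = np.random.default_rng(seed)
    reps = [cumulants(rng.choice(x, size=len(x), replace=True))
            for _ in range(n_boot)]
    return np.std(reps, axis=0)
```

Repeating this for subsamples of different sizes reproduces the qualitative effect above: higher-order cumulants fluctuate more, and their estimated uncertainties correlate with the values obtained.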

  10. Towards Greenland Glaciation: cumulative or abrupt transition?

    Science.gov (United States)

    Ramstein, Gilles; Tan, Ning; Ladant, Jean-baptiste; Dumas, Christophe; Contoux, Camille

    2017-04-01

During the mid-Pliocene warm period (3-3.3 Ma BP), global annual mean temperatures inferred from data and model studies were 2-3 °C warmer than pre-industrial values. Accordingly, the Greenland ice sheet volume is thought to have reached at most only half of its present-day value [Haywood et al. 2010]. Around 2.7-2.6 Ma BP, just ~500 kyr after the mid-Pliocene warming peak, the Greenland ice sheet had reached its full size [Lunt et al. 2008]. A crucial question concerns the evolution of the Greenland ice sheet from half to full size during the 3-2.5 Ma period. Data show a decreasing trend in atmospheric CO2 concentration from 3 Ma to 2.5 Ma [Seki et al. 2010; Bartoli et al. 2011; Martinez et al. 2015]. However, a recent study [Contoux et al. 2015] suggests that a lowering of CO2 is not sufficient to initiate a perennial glaciation on Greenland and must be combined with low summer insolation to preserve the ice sheet during insolation maxima. This suggests a cumulative process rather than an abrupt event. In order to diagnose the evolution of the ice-sheet build-up, we carry out, for the first time, a transient simulation of climate and ice-sheet evolution from 3 Ma to 2.5 Ma. This strategy enables us to investigate the waxing and waning of the ice sheet during several orbital cycles. We use a three-dimensional interpolation method designed by Ladant et al. (2014), which allows the evolution of CO2 concentration, of orbital parameters, and of the Greenland ice sheet size to be taken into account. By interpolating climatic snapshot simulations run with various possible combinations of CO2, orbits, and ice-sheet sizes, we can build a continuous climatic forcing that is then used to drive 500 kyr-long ice-sheet simulations. With such a tool, we can offer a physically based answer to different CO2 reconstruction scenarios and analyse which one is most consistent with Greenland ice sheet buildup.

  11. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases, and discuss its application to a real dataset.
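The gap between a maximum-entropy model and the data can be made concrete with a toy calculation. The sketch below is illustrative and not the paper's method: it fits the maximum-entropy model constrained only by single-spin means (the independent model) to synthetic binary data and measures the multi-information, i.e. how much entropy the model overestimates because it ignores an interaction. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary (spin) data: three variables; the third is a noisy copy of the
# first, so a pairwise interaction is genuinely present.
n = 5000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
x3 = np.where(rng.random(n) < 0.9, x1, 1 - x1)
data = np.column_stack([x1, x2, x3])

# Maximum-entropy model constrained only by single-variable means:
# the independent model p(x) = prod_i p_i(x_i).
means = data.mean(axis=0)

def indep_loglik(sample):
    p = np.where(sample == 1, means, 1 - means)
    return np.log(p).sum(axis=1).mean()

# Plug-in estimate of the empirical joint entropy (in nats).
patterns, counts = np.unique(data, axis=0, return_counts=True)
freqs = counts / n
joint_entropy = -(freqs * np.log(freqs)).sum()

# Multi-information: cross-entropy under the independent model minus the
# joint entropy. A clearly positive value signals that interactions
# (here the x1-x3 coupling) are relevant variables the model misses.
multi_info = -indep_loglik(data) - joint_entropy
print(round(multi_info, 3))
```

For this coupling strength the multi-information is close to the mutual information of x1 and x3, roughly 0.37 nats, while a fully independent dataset would give a value near zero.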

  12. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or when pumps fail suddenly. Determining the maximum water hammer is one of the most important technical and economic aspects that engineers and designers of pumping stations and conveyance pipelines must address. The Hammer software package is a recent application used to simulate water hammer. The present study focuses on determining significance of ...

  13. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants for which fossil-supported true phylogenetic trees are available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species compared.
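The core "most frequent gene tree" step can be sketched in a few lines. This is a simplification under a stated assumption: topologies are compared as identical Newick strings, whereas a real analysis would need topology-aware comparison (e.g. Robinson-Foulds distance). The trees below are hypothetical.

```python
from collections import Counter

# Hypothetical single-gene trees, one Newick-style topology string per gene.
# In the study each orthologous gene yields one tree per algorithm (MP, ME,
# ML, NJ); the maximum gene-support (MGS) tree is the topology produced
# most often across genes.
gene_trees = [
    "((A,B),(C,D));",
    "((A,B),(C,D));",
    "((A,C),(B,D));",
    "((A,B),(C,D));",
    "((A,D),(B,C));",
]

counts = Counter(gene_trees)
mgs_tree, support = counts.most_common(1)[0]
print(mgs_tree, support)  # ((A,B),(C,D)); 3
```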

  14. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  15. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5 the beam through the matching section and injected into Linac-1 is discussed.

  16. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. The analysis rests purely on classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it captures the effect of engine size on investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or constant inlet temperature ratio of the heat reservoirs, will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is therefore better than one based on maximum power conditions. However, due to the higher compression ratio and maximum cycle temperature, a design based on maximum power density requires tougher materials for engine construction.
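The over-expansion trade-off can be illustrated with the simpler air-standard Atkinson cycle (isentropic compression by r_c, isentropic expansion by r_e > r_c, constant-pressure heat rejection), which is a textbook idealization and not the paper's finite-time, variable-reservoir analysis. Its thermal efficiency is η = 1 − γ(r_e − r_c)/(r_e^γ − r_c^γ), which exceeds the Otto efficiency at the same compression ratio; the numbers below are illustrative.

```python
def atkinson_efficiency(rc, re, gamma=1.4):
    """Air-standard Atkinson cycle efficiency: isentropic compression by rc,
    isentropic expansion by re > rc, constant-pressure heat rejection."""
    return 1.0 - gamma * (re - rc) / (re**gamma - rc**gamma)

def otto_efficiency(rc, gamma=1.4):
    """Air-standard Otto cycle efficiency at compression ratio rc."""
    return 1.0 - rc ** (1.0 - gamma)

# The larger expansion ratio buys efficiency at the cost of a physically
# bigger cylinder (larger maximum specific volume), which is exactly why
# power density is a sensible design objective.
eta_atk = atkinson_efficiency(rc=8.0, re=12.0)
eta_otto = otto_efficiency(rc=8.0)
print(round(eta_atk, 3), round(eta_otto, 3))
```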

  17. Cumulative stress and autonomic dysregulation in a community sample.

    Science.gov (United States)

    Lampert, Rachel; Tuit, Keri; Hong, Kwang-Ik; Donovan, Theresa; Lee, Forrester; Sinha, Rajita

    2016-05-01

Whether cumulative stress, including both chronic stress and adverse life events, is associated with decreased heart rate variability (HRV), a non-invasive measure of autonomic status which predicts poor cardiovascular outcomes, is unknown. Healthy community-dwelling volunteers (N = 157, mean age 29 years) participated in the Cumulative Stress/Adversity Interview (CAI), a 140-item event interview measuring cumulative adversity including major life events, life trauma, recent life events and chronic stressors, and underwent 24-h ambulatory ECG monitoring. HRV was analyzed in the frequency domain and the standard deviation of NN intervals (SDNN) calculated. Initial simple regression analyses revealed that total cumulative stress score, chronic stressors and cumulative adverse life events (CALE) were all inversely associated with ultra low-frequency (ULF), very low-frequency (VLF) and low-frequency (LF) power and SDNN (all p < 0.05). For ULF and SDNN, the associations with cumulative stress remained significant after adjusting for race and health behaviors, with stress accounting for additional appreciable variance. For VLF and LF, both total cumulative stress and chronic stress significantly contributed to the variance alone but were no longer significant after adjusting for race and health behaviors. In summary, total cumulative stress and its components of adverse life events and chronic stress were associated with decreased cardiac autonomic function as measured by HRV. Findings suggest one potential mechanism by which stress may exert adverse effects on mortality in healthy individuals. Primary preventive strategies including stress management may prove beneficial.
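The HRV metrics named above can be computed from an NN-interval series. The sketch below is a minimal illustration on synthetic data (not the study's recordings or exact pipeline): SDNN in the time domain, and band powers from a periodogram of the series resampled onto an even grid, using the standard HRV band edges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic NN (normal-to-normal) intervals in ms: a slow oscillation with a
# ~10 s period (i.e. in the LF band) plus beat-to-beat noise.
n_beats = 2000
t_beat = 0.8 * np.arange(n_beats)  # approximate beat times, s
nn = 800 + 40 * np.sin(2 * np.pi * 0.1 * t_beat) + 15 * rng.standard_normal(n_beats)

# Time domain: SDNN, the standard deviation of NN intervals.
sdnn = nn.std(ddof=1)

# Frequency domain: resample the irregularly spaced series onto a 4 Hz grid,
# then integrate a periodogram over the conventional HRV bands.
beat_times = np.cumsum(nn) / 1000.0  # actual beat times, s
fs = 4.0
grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
nn_even = np.interp(grid, beat_times, nn)
x = nn_even - nn_even.mean()
spec = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
f = np.fft.rfftfreq(len(x), 1.0 / fs)

def band_power(lo, hi):
    mask = (f >= lo) & (f < hi)
    return spec[mask].sum() * (f[1] - f[0])

vlf = band_power(0.003, 0.04)  # very low frequency
lf = band_power(0.04, 0.15)    # low frequency, contains the 0.1 Hz signal
hf = band_power(0.15, 0.40)    # high frequency
print(round(sdnn, 1), lf > hf)
```

Because the injected oscillation sits at 0.1 Hz, LF power dominates HF power here; a 24-h recording would additionally support the ULF band (< 0.003 Hz).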

  18. Cumulants in perturbation expansions for non-equilibrium field theory

    International Nuclear Information System (INIS)

    Fauser, R.

    1995-11-01

The formulation of perturbation expansions for a quantum field theory of strongly interacting systems in a general non-equilibrium state is discussed. Non-vanishing initial correlations are included in the formulation of the perturbation expansion in terms of cumulants. The cumulants are shown to be suitable candidates for summing up the perturbation expansion. A linked-cluster theorem for the perturbation series with cumulants is also presented. Finally, a generating functional of the perturbation series with initial correlations is studied. We apply the methods to a simple model of a fermion-boson system. (orig.)

  19. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk. METHODS: We argue that, whenever the disease or disorder of interest is influenced...
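The Kaplan-Meier construction mentioned above can be sketched for a single event type with right censoring (the Aalen-Johansen estimator generalizes this to competing risks). The data below are made up for illustration.

```python
def cumulative_incidence(times, events, t):
    """1 - Kaplan-Meier survival at time t.

    times: observed follow-up times; events: 1 if the event occurred at that
    time, 0 if the individual was censored. Single event type only.
    """
    s = 1.0
    for u in sorted(set(times)):
        if u > t:
            break
        at_risk = sum(1 for x in times if x >= u)
        d = sum(1 for x, e in zip(times, events) if x == u and e)
        if at_risk:
            s *= 1 - d / at_risk  # survival drops at each event time
    return 1 - s

# Five individuals: events at ages 2, 3 and 5; censoring at 3 and 7.
times = [2, 3, 3, 5, 7]
events = [1, 1, 0, 1, 0]
print(cumulative_incidence(times, events, 6))  # 0.7
```

The paper's point is precisely that naively applying this to a sample recruited over calendar time can be misleading when there are calendar-time trends.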

  20. Higher order net-proton number cumulants dependence on the centrality definition and other spurious effects

    Science.gov (United States)

    Sombun, S.; Steinheimer, J.; Herold, C.; Limphirat, A.; Yan, Y.; Bleicher, M.

    2018-02-01

We study the dependence of the normalized moments of the net-proton multiplicity distributions on the definition of centrality in relativistic nuclear collisions at a beam energy of √s_NN = 7.7 GeV. Using the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model as event generator, we find that the centrality definition has a large effect on the extracted cumulant ratios. Furthermore, we find that the finite efficiency of the centrality determination introduces an additional systematic uncertainty. Finally, we quantitatively investigate the effects of event pile-up and other possible spurious effects which may change the measured proton number. We find that pile-up alone is not sufficient to describe the data, and show that a random double counting of events, adding significantly to the measured proton number, affects mainly the higher-order cumulants in most central collisions.
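The cumulant ratios in question can be computed directly from an event-by-event multiplicity sample. A minimal sketch, using the Skellam distribution (independent Poisson protons and antiprotons) as the standard baseline rather than a transport-model output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Net-proton number per event from a Skellam baseline.
n_events = 1_000_000
net = rng.poisson(12.0, n_events) - rng.poisson(3.0, n_events)

# Sample cumulants up to fourth order.
d = net - net.mean()
c1 = net.mean()
c2 = (d**2).mean()
c3 = (d**3).mean()
c4 = (d**4).mean() - 3.0 * c2**2

# Skellam expectations: C1 = C3 = 12 - 3 = 9 and C2 = C4 = 12 + 3 = 15,
# so C3/C2 = 0.6 and C4/C2 = 1. Deviations from these baselines are what
# centrality definition and pile-up can mimic.
print(round(c3 / c2, 2), round(c4 / c2, 2))
```

Note how noisy the fourth-order cumulant is even with 10^6 events; this statistical fragility is one reason spurious effects matter most for the higher orders.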

  1. Cumulative risk assessment of phthalate exposure of Danish children and adolescents using the hazard index approach

    DEFF Research Database (Denmark)

    Søeborg, T; Frederiksen, H; Andersson, Anna-Maria

    2012-01-01

Human risk assessment of chemicals is traditionally presented as the ratio between the actual level of exposure and an acceptable level of exposure, with the acceptable level of exposure most often being estimated by appropriate authorities. This approach is generally sound when assessing the risk of individual chemicals. However, several chemicals may concurrently target the same receptor, work through the same mechanism or in other ways induce the same effect(s) in the body. In these cases, cumulative risk assessment should be applied. The present study uses biomonitoring data from 129 Danish children and adolescents and resulting estimated daily intakes of four different phthalates. These daily intake estimates are used for a cumulative risk assessment with anti-androgenic effects as the endpoint, using Tolerable Daily Intake (TDI) values determined by the European Food Safety Authorities (EFSA) or Reference...

  2. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  3. Cumulative Environmental Impacts: Science and Policy to Protect Communities.

    Science.gov (United States)

    Solomon, Gina M; Morello-Frosch, Rachel; Zeise, Lauren; Faust, John B

    2016-01-01

    Many communities are located near multiple sources of pollution, including current and former industrial sites, major roadways, and agricultural operations. Populations in such locations are predominantly low-income, with a large percentage of minorities and non-English speakers. These communities face challenges that can affect the health of their residents, including limited access to health care, a shortage of grocery stores, poor housing quality, and a lack of parks and open spaces. Environmental exposures may interact with social stressors, thereby worsening health outcomes. Age, genetic characteristics, and preexisting health conditions increase the risk of adverse health effects from exposure to pollutants. There are existing approaches for characterizing cumulative exposures, cumulative risks, and cumulative health impacts. Although such approaches have merit, they also have significant constraints. New developments in exposure monitoring, mapping, toxicology, and epidemiology, especially when informed by community participation, have the potential to advance the science on cumulative impacts and to improve decision making.

  4. Pesticide Cumulative Risk Assessment: Framework for Screening Analysis

    Science.gov (United States)

    This document provides guidance on how to screen groups of pesticides for cumulative evaluation using a two-step approach: begin with evaluation of available toxicological information and, if necessary, follow up with a risk-based screening approach.

  5. Online Scheduling in Manufacturing A Cumulative Delay Approach

    CERN Document Server

    Suwa, Haruhiko

    2013-01-01

Online scheduling is recognized as the crucial decision-making process of production control at the phase of “being in production”, according to the released shop floor schedule. Online scheduling can also be considered one of the key enablers of prompt capable-to-promise as well as available-to-promise to customers, along with reduced production lead times, under today's globalized competitive markets. Online Scheduling in Manufacturing introduces new approaches to online scheduling based on a concept of cumulative delay. The cumulative delay is regarded as consolidated information about uncertainties in a dynamic manufacturing environment and can be collected constantly, without much effort, at any point in time during schedule execution. In this approach, the cumulative delay of the schedule plays the important role of a criterion for deciding whether or not a schedule revision is carried out. The cumulative delay approach to triggering schedule revisions has the following capabilities for the ...

  6. Considering Environmental and Occupational Stressors in Cumulative Risk Assessments

    Science.gov (United States)

    While definitions vary across the global scientific community, cumulative risk assessments (CRAs) typically are described as exhibiting a population focus and analyzing the combined risks posed by multiple stressors. CRAs also may consider risk management alternatives as an anal...

  7. Peer tutors as learning and teaching partners: a cumulative ...

    African Journals Online (AJOL)

    ... paper explores the kinds of development in tutors' thinking and action that are possible when training and development is theoretically informed, coherent, and oriented towards improving practice. Keywords: academic development, academic literacies, cumulative learning, higher education, peer tutoring, writing centres.

  8. CTD Information Guide. Preventing Cumulative Trauma Disorders in the Workplace

    National Research Council Canada - National Science Library

    1992-01-01

    The purpose of this report is to provide Army occupational safety and health (OSH) professionals with a primer that explains the basic principles of ergonomic-hazard recognition for common cumulative trauma disorders...

  9. Cumulative radiation exposure in children with cystic fibrosis.

    LENUS (Irish Health Repository)

    O'Reilly, R

    2010-02-01

This retrospective study calculated the cumulative radiation dose for children with cystic fibrosis (CF) attending a tertiary CF centre. Information on 77 children with a mean age of 9.5 years, a follow-up time of 658 person-years and 1757 studies, including 1485 chest radiographs, 215 abdominal radiographs and 57 computed tomography (CT) scans (of which 51 were thoracic CT scans), was analysed. The average cumulative radiation dose was 6.2 (0.04-25) mSv per CF patient. Cumulative radiation dose increased with increasing age and number of CT scans and was greater in children who presented with meconium ileus. No correlation was identified between cumulative radiation dose and either lung function or patient microbiology cultures. Radiation carries a risk of malignancy, and children are particularly susceptible. Every effort must be made to avoid unnecessary radiation exposure in these patients, whose life expectancy is increasing.

  10. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
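A simplified surface energy balance of the kind described can be solved numerically. The sketch below is illustrative, not the paper's model: it balances absorbed shortwave flux against longwave emission and sensible heat exchange, neglecting ground heat flux and evaporation (dry soil), with an assumed near-calm transfer coefficient.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(absorbed=1000.0, t_air=328.0, emissivity=0.95, h=2.0):
    """Solve absorbed = eps*sigma*Ts**4 + h*(Ts - t_air) for Ts by bisection.

    Ground heat flux and evaporation are neglected (dry soil of low thermal
    conductivity); h is an illustrative sensible-heat transfer coefficient
    for near-calm conditions, not a value from the paper.
    """
    def residual(ts):
        return emissivity * SIGMA * ts**4 + h * (ts - t_air) - absorbed

    lo, hi = t_air, 500.0  # residual is negative at lo, positive at hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ts_c = surface_temperature() - 273.15
print(round(ts_c, 1))  # ~90 C for these inputs, consistent with the abstract
```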

  11. On the duration and intensity of cumulative advantage competitions

    Science.gov (United States)

    Jiang, Bo; Sun, Liyuan; Figueiredo, Daniel R.; Ribeiro, Bruno; Towsley, Don

    2015-11-01

    Network growth can be framed as a competition for edges among nodes in the network. As with various other social and physical systems, skill (fitness) and luck (random chance) act as fundamental forces driving competition dynamics. In the context of networks, cumulative advantage (CA)—the rich-get-richer effect—is seen as a driving principle governing the edge accumulation process. However, competitions coupled with CA exhibit non-trivial behavior and little is formally known about duration and intensity of CA competitions. By isolating two nodes in an ideal CA competition, we provide a mathematical understanding of how CA exacerbates the role of luck in detriment of skill. We show, for instance, that when nodes start with few edges, an early stroke of luck can place the less skilled in the lead for an extremely long period of time, a phenomenon we call ‘struggle of the fittest’. We prove that duration of a simple skill and luck competition model exhibit power-law tails when CA is present, regardless of skill difference, which is in sharp contrast to the exponential tails when fitness is distinct but CA is absent. We also prove that competition intensity is always upper bounded by an exponential tail, irrespective of CA and skills. Thus, CA competitions can be extremely long (infinite mean, depending on fitness ratio) but almost never very intense. The theoretical results are corroborated by extensive numerical simulations. Our findings have important implications to competitions not only among nodes in networks but also in contexts that leverage socio-physical models embodying CA competitions.

  12. On the duration and intensity of cumulative advantage competitions

    International Nuclear Information System (INIS)

    Jiang, Bo; Towsley, Don; Sun, Liyuan; Figueiredo, Daniel R; Ribeiro, Bruno

    2015-01-01

    Network growth can be framed as a competition for edges among nodes in the network. As with various other social and physical systems, skill (fitness) and luck (random chance) act as fundamental forces driving competition dynamics. In the context of networks, cumulative advantage (CA)—the rich-get-richer effect—is seen as a driving principle governing the edge accumulation process. However, competitions coupled with CA exhibit non-trivial behavior and little is formally known about duration and intensity of CA competitions. By isolating two nodes in an ideal CA competition, we provide a mathematical understanding of how CA exacerbates the role of luck in detriment of skill. We show, for instance, that when nodes start with few edges, an early stroke of luck can place the less skilled in the lead for an extremely long period of time, a phenomenon we call ‘struggle of the fittest’. We prove that duration of a simple skill and luck competition model exhibit power-law tails when CA is present, regardless of skill difference, which is in sharp contrast to the exponential tails when fitness is distinct but CA is absent. We also prove that competition intensity is always upper bounded by an exponential tail, irrespective of CA and skills. Thus, CA competitions can be extremely long (infinite mean, depending on fitness ratio) but almost never very intense. The theoretical results are corroborated by extensive numerical simulations. Our findings have important implications to competitions not only among nodes in networks but also in contexts that leverage socio-physical models embodying CA competitions. (paper)
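The "struggle of the fittest" effect can be reproduced with a toy two-node simulation. This is an illustrative urn-style sketch of a cumulative-advantage competition, not the paper's analytical model: each new edge goes to a node with probability proportional to fitness times current degree, and we count how often the less fit node is still ahead after many steps.

```python
import random

random.seed(3)

def ca_race(f_strong=1.2, f_weak=1.0, steps=2000):
    """One two-node cumulative-advantage competition.

    Each new edge attaches to a node with probability proportional to
    fitness * current_degree (rich-get-richer with skill). Returns True
    if the LESS fit node still leads at the end.
    """
    d_strong, d_weak = 1, 1
    for _ in range(steps):
        w = f_strong * d_strong
        if random.random() * (w + f_weak * d_weak) < w:
            d_strong += 1
        else:
            d_weak += 1
    return d_weak > d_strong

trials = 2000
upset_rate = sum(ca_race() for _ in range(trials)) / trials
print(upset_rate)
```

An early lucky streak gets locked in by the rich-get-richer feedback, so the less skilled node leads in a non-trivial fraction of runs even after 2000 steps, though the fitter node wins the majority.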

  13. Cumulative query method for influenza surveillance using search engine data.

    Science.gov (United States)

    Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il

    2014-12-16

Internet search queries have become an important data source in syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine, Daum (approximately 25% market share), and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development set 2 and 2011/12 for validation set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development set. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, with n representing the number of cumulative combined queries in descending order of correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, but only 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, but only 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than single combined queries in both the development and validation sets.
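The selection-and-accumulation step can be sketched on synthetic data. This is an illustration of the general idea, not the study's pipeline: query names, series and the simple mean used for combining are all hypothetical; only the r ≥ .7 threshold and the descending-correlation ordering follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical weekly series: an ILI rate and search volumes for candidate
# queries, three of which track ILI with varying noise and one unrelated.
weeks = 52
ili = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, weeks)) + 0.3 * rng.standard_normal(weeks)
queries = {
    "flu": ili + 0.5 * rng.standard_normal(weeks),
    "fever": ili + 1.0 * rng.standard_normal(weeks),
    "cough": ili + 4.0 * rng.standard_normal(weeks),
    "weather": rng.standard_normal(weeks),  # unrelated to ILI
}

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Rank queries by correlation with ILI on the development data, keep r >= .7,
# then form cumulative query method n: the mean of the top-n query series.
ranked = sorted(queries, key=lambda q: pearson(queries[q], ili), reverse=True)
kept = [q for q in ranked if pearson(queries[q], ili) >= 0.7]

for n in range(1, len(kept) + 1):
    combo = np.mean([queries[q] for q in kept[:n]], axis=0)
    print(n, kept[:n], round(pearson(combo, ili), 3))
```

Averaging several well-correlated queries cancels part of their independent noise, which is the intuition behind the cumulative method outperforming single queries.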

  14. Steps and pips in the history of the cumulative recorder.

    OpenAIRE

    Lattal, Kennon A

    2004-01-01

    From its inception in the 1930s until very recent times, the cumulative recorder was the most widely used measurement instrument in the experimental analysis of behavior. It was an essential instrument in the discovery and analysis of schedules of reinforcement, providing the first real-time analysis of operant response rates and patterns. This review traces the evolution of the cumulative recorder from Skinner's early modified kymographs through various models developed by Skinner and his co...

  15. Mapping Cumulative Impacts of Human Activities on Marine Ecosystems

    OpenAIRE

Seaplan

    2018-01-01

    Given the diversity of human uses and natural resources that converge in coastal waters, the potential independent and cumulative impacts of those uses on marine ecosystems are important to consider during ocean planning. This study was designed to support the development and implementation of the 2009 Massachusetts Ocean Management Plan. Its goal was to estimate and visualize the cumulative impacts of human activities on coastal and marine ecosystems in the state and federal waters off of Ma...

  16. Cumulative occupational shoulder exposures and surgery for subacromial impingement syndrome: a nationwide Danish cohort study.

    Science.gov (United States)

    Dalbøge, Annett; Frost, Poul; Andersen, Johan Hviid; Svendsen, Susanne Wulff

    2014-11-01

The primary aim was to examine exposure-response relationships between cumulative occupational shoulder exposures and surgery for subacromial impingement syndrome (SIS), and to compare sex-specific exposure-response relationships. The secondary aim was to examine the time window of relevant exposures. We conducted a nationwide register study of all persons born in Denmark (1933-1977), who had at least 5 years of full-time employment. In the follow-up period (2003-2008), we identified first-time events of surgery for SIS. Cumulative exposure estimates for a 10-year exposure time window with a 1-year lag time were obtained by linking occupational codes with a job exposure matrix. The exposure estimates were expressed as, for example, arm-elevation-years in accordance with the pack-year concept of tobacco consumption. We used a multivariable logistic regression technique equivalent to discrete survival analysis. The adjusted OR (ORadj) increased to a maximum of 2.1 for arm-elevation-years, repetition-years and force-years, and to 1.5 for hand-arm-vibration-years. Sex-specific exposure-response relationships were similar for men and women, when assessed using a relative risk scale. The ORadj increased gradually with the number of years contributing to the cumulative exposure estimates. The excess fraction was 24%. Cumulative occupational shoulder exposures carried an increase in risk of surgery for SIS with similar exposure-response curves for men and women. The risk of surgery for SIS increased gradually, when the period of exposure assessment was extended. In the general working population, a substantial fraction of all first-time operations for SIS could be related to occupational exposures.
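The cumulative-exposure construction (pack-year style, with a lagged time window) can be sketched as follows. The job history, job-exposure-matrix values and helper names are hypothetical, used only to show how "arm-elevation-years" accumulate over a 10-year window with a 1-year lag.

```python
# Yearly job history for one person: job code per calendar year (hypothetical).
history = {year: ("construction" if year < 1998 else "office")
           for year in range(1990, 2008)}

# Job exposure matrix: daily hours with the arm elevated > 90 degrees,
# per job code (illustrative values, not the study's matrix).
jem = {"construction": 1.0, "office": 0.05}

def arm_elevation_years(event_year, window=10, lag=1):
    """Sum annual exposure intensities over a `window`-year span ending
    `lag` years before the (potential) event year."""
    years = range(event_year - lag - window + 1, event_year - lag + 1)
    return sum(jem[history[y]] for y in years if y in history)

# For a potential surgery in 2005, the window covers 1995-2004:
# 3 construction years (3 x 1.0) + 7 office years (7 x 0.05) = 3.35.
print(arm_elevation_years(2005))
```

This cumulative estimate, rather than current-job exposure, is what enters the regression as the dose variable.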

  17. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
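A software analogue of this peak-memory behavior is easy to sketch. The class below is hypothetical and only mirrors the scheme's logic: quantize the input into discrete levels and latch the highest level ever seen (like the blown microfuses, the record is monotone and never resets).

```python
class PeakHold:
    """Software analogue of the microfuse peak memory: quantize the input
    into discrete levels and latch the largest level observed."""

    def __init__(self, levels=8, full_scale=1.0):
        self.levels = levels
        self.full_scale = full_scale
        self.latched = 0  # monotone record, never decreases

    def sample(self, value):
        # Rectify (abs) and quantize, clamping to the top level.
        level = min(self.levels, int(abs(value) / self.full_scale * self.levels))
        self.latched = max(self.latched, level)
        return self.latched

ph = PeakHold()
for v in [0.1, 0.55, 0.3, 0.9, 0.2]:
    ph.sample(v)
print(ph.latched)  # 7: the 0.9 peak, on an 8-level scale
```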

  18. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  19. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  20. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  1. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
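
The quantity being maximized in the record above can be illustrated with a short sketch: an empirical plug-in estimate of mutual information over discretized classifier responses. This only illustrates the measure itself; the paper models MI via entropy estimation and optimizes it by gradient descent, which this sketch does not reproduce.

```python
import numpy as np

def mutual_information(responses, labels):
    """Empirical mutual information I(R; Y), in nats, between discretized
    classification responses R and true class labels Y, computed from the
    joint and marginal empirical frequencies."""
    responses = np.asarray(responses)
    labels = np.asarray(labels)
    mi = 0.0
    for r in np.unique(responses):
        for y in np.unique(labels):
            p_ry = np.mean((responses == r) & (labels == y))  # joint
            if p_ry > 0:
                p_r = np.mean(responses == r)  # marginal of response
                p_y = np.mean(labels == y)     # marginal of label
                mi += p_ry * np.log(p_ry / (p_r * p_y))
    return mi
```

When responses determine labels perfectly the MI equals the label entropy (log 2 for two balanced classes); for independent responses it is zero, which is exactly the uncertainty-reduction argument made in the abstract.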

  2. Cumulative damage fatigue tests on nuclear reactor Zircaloy-2 fuel tubes at room temperature and 300 °C

    International Nuclear Information System (INIS)

    Pandarinathan, P.R.; Vasudevan, P.

    1980-01-01

    Cumulative damage fatigue tests were conducted on the Zircaloy-2 fuel tubes at room temperature and 300 °C on the modified Moore type, four-point-loaded, deflection-controlled, rotating bending fatigue testing machine. The cumulative cycle ratio at fracture for the Zircaloy-2 fuel tubes was found to depend on the sequence of loading, stress history, number of cycles of application of the pre-stress and the test temperature. A Hi-Lo type fatigue loading was found to be highly damaging at room temperature, a feature not observed in the tests at 300 °C. Results indicate significant differences in damage interaction and damage propagation under cumulative damage tests at room temperature and at 300 °C. Block-loading fatigue tests are suggested as the best method to determine the life-time of Zircaloy-2 fuel tubes under random fatigue loading during their service in the reactor. (orig.)

  3. An analytical model for cumulative infiltration into a dual-permeability media

    Science.gov (United States)

    Peyrard, Xavier; Lassabatere, Laurent; Angulo-Jaramillo, Rafael; Simunek, Jiri

    2010-05-01

    Modeling of water infiltration into the vadose zone is important for better understanding of the movement of water-transported contaminants. There is a great need to take into account soil heterogeneity and, in particular, the presence of macropores or cracks that can generate preferential flow. Several mathematical models have been proposed to describe unsaturated flow through heterogeneous soils. The dual-permeability model assumes that flow is governed by the Richards equation in both porous regions (matrix and fractures). Water can be exchanged between the two regions following a first-order rate law. A previous study showed that the hydraulic conductivity of the matrix/macropore interface had little influence on cumulative infiltration at the soil surface. As a result, one can consider surface infiltration for the specific case of no water exchange between the fracture and matrix regions (a case of zero interfacial hydraulic conductivity). In such a case, water infiltration can be considered to be the sum of the cumulative infiltrations into the matrix and the fractures. On the basis of analytical models for each subdomain (matrix and fractures), an analytical model is proposed for the entire dual-permeability system. A sensitivity analysis is performed to characterize the influence of several factors, such as the saturated hydraulic conductivity ratio, the water pressure scale parameter ratio, and the saturated volumetric water content scale ratio, on the total cumulative infiltration. Such an analysis greatly helps in quantifying the impact of macroporosity and fractures on water infiltration, which can be of great interest for hydrological models.
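
The zero-exchange superposition described in the record above can be sketched as follows, assuming for illustration that each region obeys Philip's two-term law I(t) = S√t + At and that the regions contribute in proportion to their volumetric fractions. The parameter names and the weighting scheme are illustrative assumptions, not the paper's exact analytical model.

```python
import math

def philip_infiltration(t, sorptivity, a_coeff):
    """Philip two-term cumulative infiltration I(t) = S*sqrt(t) + A*t."""
    return sorptivity * math.sqrt(t) + a_coeff * t

def dual_permeability_infiltration(t, w_matrix, s_m, a_m, s_f, a_f):
    """Total cumulative infiltration as the volume-weighted sum of
    independent matrix and fracture contributions, valid in the limit of
    zero interfacial hydraulic conductivity (no inter-region exchange)."""
    w_fracture = 1.0 - w_matrix
    return (w_matrix * philip_infiltration(t, s_m, a_m)
            + w_fracture * philip_infiltration(t, s_f, a_f))
```

With the fracture fraction set to zero the expression reduces to the matrix-only model, which is the consistency check one would expect of any such superposition.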

  4. Race, Space, and Cumulative Disadvantage: A Case Study of the Subprime Lending Collapse.

    Science.gov (United States)

    Rugh, Jacob S; Albright, Len; Massey, Douglas S

    2015-05-01

    In this article, we describe how residential segregation and individual racial disparities generate racialized patterns of subprime lending and lead to financial loss among black borrowers in segregated cities. We conceptualize race as a cumulative disadvantage because of its direct and indirect effects on socioeconomic status at the individual and neighborhood levels, with consequences that reverberate across a borrower's life and between generations. Using Baltimore, Maryland, as a case study setting, we combine data from reports filed under the Home Mortgage Disclosure Act with additional loan-level data from mortgage-backed securities. We find that race and neighborhood racial segregation are critical factors explaining black disadvantage across successive stages in the process of lending and foreclosure, controlling for differences in borrower credit scores, income, occupancy status, and loan-to-value ratios. We analyze the cumulative cost of predatory lending to black borrowers in terms of reduced disposable income and lost wealth. We find the cost to be substantial. Black borrowers paid an estimated additional 5 to 11 percent in monthly payments, and those who completed foreclosure in the sample lost in excess of $2 million in home equity. These costs were magnified in mostly black neighborhoods and in turn heavily concentrated in communities of color. By elucidating the mechanisms that link black segregation to discrimination, we demonstrate how processes of cumulative disadvantage continue to undermine black socioeconomic status in the United States today.

  5. Fragmentation of 7-9 GeV/c deuterons into cumulative kaons

    International Nuclear Information System (INIS)

    Afanas'ev, S.V.; Zolin, L.S.; Isupov, A.Yu.; Ladygin, V.P.; Litvinenko, A.G.; Reznikov, S.G.; Khrenov, A.N.

    2013-01-01

    Data on kaon production in the reaction d + Be → K∓(0°) + X in the cumulative variable region x_c from 0.88 to 1.37 are presented. The values x_c ≥ 1 correspond to internucleon distances (the deuteron core region) where the nucleon wave functions begin to overlap, forming a hadron cluster ('flucton') with density above the average density of nuclear matter. The behaviour of the K+/K− yield ratio in this x_c region can be interpreted within the framework of the hypothesis of a hard quark sea enhanced in nuclei due to the flucton component of nuclear matter.

  6. Low Birth Weight, Cumulative Obesity Dose, and the Risk of Incident Type 2 Diabetes

    OpenAIRE

    Feng, Cindy; Osgood, Nathaniel D.; Dyck, Roland F.

    2018-01-01

    Background. Obesity history may provide a better understanding of the contribution of obesity to T2DM risk. Methods. 17,634 participants from the 1958 National Child Development Study were followed from birth to 50 years. Cumulative obesity dose, a measure of obesity history, was calculated by subtracting the upper cut-off of the normal BMI from the actual BMI at each follow-up and summing the areas under the obesity dose curve. Hazard ratios (HRs) for diabetes were calculated using Cox regre...
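
The cumulative obesity dose construction described in the record above (excess BMI above the normal cutoff, summed as area under the curve across follow-ups) can be sketched as below. The cutoff of 25 kg/m² and the trapezoidal integration between visits are assumptions for illustration; the study's exact construction may differ.

```python
def cumulative_obesity_dose(ages, bmis, bmi_cutoff=25.0):
    """Cumulative obesity dose in BMI-unit-years: the area between the BMI
    trajectory and the upper normal cutoff, counting only the excess above
    the cutoff, integrated by the trapezoidal rule over follow-up ages."""
    excess = [max(b - bmi_cutoff, 0.0) for b in bmis]  # clip below cutoff
    dose = 0.0
    for i in range(1, len(ages)):
        dt = ages[i] - ages[i - 1]
        dose += 0.5 * (excess[i] + excess[i - 1]) * dt  # trapezoid area
    return dose
```

A participant measured at ages 20 and 30 with BMIs 27 and 29 would accumulate 0.5 × (2 + 4) × 10 = 30 BMI-unit-years; a participant who never exceeds the cutoff accumulates zero dose regardless of follow-up length.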

  7. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.

  8. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  9. Excess Mortality in Treated and Untreated Hyperthyroidism Is Related to Cumulative Periods of Low Serum TSH.

    Science.gov (United States)

    Lillevang-Johansen, Mads; Abrahamsen, Bo; Jørgensen, Henrik Løvendahl; Brix, Thomas Heiberg; Hegedüs, Laszlo

    2017-07-01

    Cumulative time-dependent excess mortality in hyperthyroid patients has been suggested. However, the effect of antithyroid treatment on mortality, especially in subclinical hyperthyroidism, remains unclarified. We investigated the association between hyperthyroidism and mortality in both treated and untreated hyperthyroid individuals. Register-based cohort study of 235,547 individuals who had at least one serum thyroid-stimulating hormone (TSH) measurement in the period 1995 to 2011 (7.3 years median follow-up). Hyperthyroidism was defined as at least two measurements of low serum TSH. Mortality rates for treated and untreated hyperthyroid subjects compared with euthyroid controls were calculated using multivariate Cox regression analyses, controlling for age, sex, and comorbidities. Cumulative periods of decreased serum TSH were analyzed as a time-dependent covariate. The hazard ratio (HR) for mortality was increased [1.23; 95% confidence interval (CI), 1.12 to 1.37] in untreated hyperthyroid patients. When including cumulative periods of TSH in the Cox regression analyses, the HR for mortality per every 6 months of decreased TSH was 1.11 (95% CI, 1.09 to 1.13) in treated hyperthyroid patients (n = 1137) and 1.13 (95% CI, 1.11 to 1.15) in untreated hyperthyroidism, respectively. Mortality is increased in hyperthyroidism. Cumulative periods of decreased TSH increased mortality in both treated and untreated hyperthyroidism, implying that excess mortality may not be driven by lack of therapy, but rather by an inability to keep patients euthyroid. Meticulous follow-up during treatment to maintain biochemical euthyroidism may be warranted. Copyright © 2017 by the Endocrine Society

  10. Maintenance hemodialysis patients have high cumulative radiation exposure.

    LENUS (Irish Health Repository)

    Kinsella, Sinead M

    2010-10-01

    Hemodialysis is associated with an increased risk of neoplasms which may result, at least in part, from exposure to ionizing radiation associated with frequent radiographic procedures. In order to estimate the average radiation exposure of those on hemodialysis, we conducted a retrospective study of 100 patients in a university-based dialysis unit followed for a median of 3.4 years. The number and type of radiological procedures were obtained from a central radiology database, and the cumulative effective radiation dose was calculated using standardized, procedure-specific radiation levels. The median annual radiation dose was 6.9 millisieverts (mSv) per patient-year. However, 14 patients had an annual cumulative effective radiation dose over 20 mSv, the upper averaged annual limit for occupational exposure. The median total cumulative effective radiation dose per patient over the study period was 21.7 mSv, and 13 patients had a total cumulative effective radiation dose over 75 mSv, a value reported to be associated with a 7% increased risk of cancer-related mortality. Two-thirds of the total cumulative effective radiation dose was due to CT scanning. The average radiation exposure was significantly associated with the cause of end-stage renal disease, history of ischemic heart disease, transplant waitlist status, number of in-patient hospital days over follow-up, and death during the study period. These results highlight the substantial exposure to ionizing radiation in hemodialysis patients.
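
The dose-accumulation procedure described in the record above amounts to summing standardized per-procedure effective doses over each patient's imaging history. A minimal sketch follows; the dose table holds illustrative textbook values, not the study's actual reference levels.

```python
# Procedure-specific effective doses in mSv -- illustrative typical values
# only, not the standardized table used in the study.
TYPICAL_DOSE_MSV = {
    "chest_xray": 0.02,
    "abdominal_ct": 8.0,
    "chest_ct": 7.0,
}

def cumulative_effective_dose(procedures, dose_table=TYPICAL_DOSE_MSV):
    """Total cumulative effective dose (mSv): the sum of standardized
    per-procedure doses over a patient's recorded imaging history."""
    return sum(dose_table[p] for p in procedures)

def annualized_dose(procedures, followup_years, dose_table=TYPICAL_DOSE_MSV):
    """Average annual effective dose (mSv per patient-year)."""
    return cumulative_effective_dose(procedures, dose_table) / followup_years
```

Comparing the annualized figure against the 20 mSv occupational limit cited in the abstract is then a one-line check per patient.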

  11. Cumulative nucleon production in 3He p and 3H p interactions at momenta of the colliding nuclei of 5 GeV/c

    International Nuclear Information System (INIS)

    Abdullin, S.K.; Blinov, A.V.; Vanyushin, I.A.

    1989-01-01

    Inclusive cross sections of cumulative protons produced in 3He p and 3H p interactions at momenta of the colliding nuclei of 5 GeV/c are investigated. The experimental material was obtained using the 80-cm liquid-hydrogen ITEP chamber. For cumulative proton kinetic energies exceeding 50 MeV, the inclusive cross section ratio is σ(3He p → pX)/σ(3H p → pX) = 1.6 ± 0.1. Within the same energy interval, an evaluation of the yield ratio of cumulative protons and neutrons produced in 3He p interactions leads to a value of ∼1.6. An asymmetry in the mean multiplicity of protons emitted forward and backward is observed both in 3He p and in 3H p interactions. Averaged invariant distribution functions are constructed for cumulative protons and neutrons in 3He p interactions and for protons in 3H p interactions (the averaging is performed within the 90-180° and 120-180° intervals). The temperatures of these distributions are found. 43 refs.; 1 fig

  12. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
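
For contrast with the dependence formulation described in the record above, the standard constrained maximum-entropy trip-distribution model (the formulation shown to be equivalent in the abstract) can be solved by iterative proportional balancing. A minimal sketch under assumed parameter names follows.

```python
import numpy as np

def max_entropy_trip_distribution(origins, destinations, cost, beta, iters=200):
    """Doubly constrained maximum-entropy trip distribution: T_ij is
    proportional to exp(-beta * c_ij), iteratively balanced so that row
    sums match origin totals and column sums match destination totals."""
    origins = np.asarray(origins, dtype=float)
    destinations = np.asarray(destinations, dtype=float)
    t = np.exp(-beta * np.asarray(cost, dtype=float))  # utility seed
    for _ in range(iters):
        t *= (origins / t.sum(axis=1))[:, None]        # enforce row totals
        t *= (destinations / t.sum(axis=0))[None, :]   # enforce column totals
    return t
```

The origin and destination totals must be consistent (equal grand totals) for the balancing to converge; the beta parameter plays the role of the travel-cost sensitivity.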

  13. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of thermoelectricity and heat transfer are solved using the finite element method (FEM) for temperature-dependent properties of TE materials. The results, which are in good agreement with previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap

  14. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the conundrum of searching for the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network in hastening Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated, including the Elliot symmetric activation function, Gaussian activation function, Wavelet activation function and Hyperbolic tangent activation function. The performances of these post optimization techniques in accelerating MAX-3SAT logic programming are discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used in doing MAX-3SAT logic programming.
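
The ratio of maximum satisfied clauses used as a performance measure in the record above can be sketched directly. The DIMACS-style signed-literal encoding is an assumed convention, and the brute-force search below only illustrates the objective; the paper's Hopfield-network search is not reproduced.

```python
def satisfied_ratio(clauses, assignment):
    """Fraction of 3-SAT clauses satisfied by a truth assignment.
    Clauses use DIMACS-style signed literals: +v means variable v must be
    true, -v means it must be false; assignment maps variable -> bool."""
    satisfied = sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )
    return satisfied / len(clauses)

# Brute-force search over all 2^3 assignments of a tiny 3-CNF formula.
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
best = max(
    ({v: bool((mask >> i) & 1) for i, v in enumerate((1, 2, 3))}
     for mask in range(8)),
    key=lambda a: satisfied_ratio(clauses, a),
)
```

For satisfiable formulas the best ratio reaches 1.0; for unsatisfiable ones the MAX-3SAT optimum is the highest ratio any assignment can achieve.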

  15. Cumulative Trauma Among Mayas Living in Southeast Florida.

    Science.gov (United States)

    Millender, Eugenia I; Lowe, John

    2017-06-01

    Mayas, having experienced genocide, exile, and severe poverty, are at high risk for the consequences of cumulative trauma that continually resurfaces through current fear of an uncertain future. Little is known about the mental health and alcohol use status of this population. This correlational study explored the relationship of cumulative trauma as it relates to social determinants of health (years in the United States, education, health insurance status, marital status, and employment), psychological health (depression symptoms), and health behaviors (alcohol use) of 102 Guatemalan Mayas living in Southeast Florida. The results of this study indicated that, as specific social determinants of health and cumulative trauma increased, depression symptoms (particularly among women) and the risk for harmful alcohol use (particularly among men) increased. Identifying risk factors at an early stage, before serious disease or problems are manifest, allows early screening leading to early identification, early treatment, and better outcomes.

  16. Session: What do we know about cumulative or population impacts

    Energy Technology Data Exchange (ETDEWEB)

    Kerlinger, Paul; Manville, Al; Kendall, Bill

    2004-09-01

    This session at the Wind Energy and Birds/Bats workshop consisted of a panel discussion followed by a discussion/question and answer period. The panelists were Paul Kerlinger, Curry and Kerlinger, LLC; Al Manville, U.S. Fish and Wildlife Service; and Bill Kendall, U.S. Geological Survey. The panel addressed the potential cumulative impacts of wind turbines on bird and bat populations over time. Panel members gave brief presentations that touched on what is currently known, what laws apply, and the usefulness of population modeling. Topics addressed included which sources of mortality should be included in cumulative impacts, comparison of impacts from different modes of energy generation, as well as what research is still needed regarding cumulative impacts of wind energy development on bird and bat populations.

  17. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

    BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. When the disease risk is affected by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. RESULTS: We show how a proportional hazards model may be used to extrapolate disease risk estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry or perhaps just the end of follow-up estimates across all strata...
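
The total-sample estimator discussed in the record above builds on the Kaplan-Meier survival curve. A minimal sketch of the cumulative incidence 1 − S(t) for a single cause with right censoring follows; the Aalen-Johansen generalization for competing risks is not shown.

```python
def cumulative_incidence(times, events):
    """Cumulative incidence 1 - S(t) from right-censored data via the
    Kaplan-Meier estimator (single cause, no competing risks).
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (event time, cumulative incidence) pairs."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        while i < len(data) and data[i][0] == t:  # group tied times
            deaths += data[i][1]
            leaving += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk    # KM product-limit step
            curve.append((t, 1.0 - survival))
        at_risk -= leaving
    return curve
```

Censored observations leave the risk set without contributing a product-limit factor, which is what distinguishes this from a naive event-count fraction.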

  18. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction based on a multiple-delayed-input single-output architecture is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamic application data sets. Numerical results show that both the goodness-of-fit and the next-step-predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches.

  19. Baltic Sea biodiversity status vs. cumulative human pressures

    DEFF Research Database (Denmark)

    Andersen, Jesper H.; Halpern, Benjamin S.; Korpinen, Samuli

    2015-01-01

    Many studies have tried to explain spatial and temporal variations in biodiversity status of marine areas from a single-issue perspective, such as fishing pressure or coastal pollution, yet most continental seas experience a wide range of human pressures. Cumulative impact assessments have been developed to capture the consequences of multiple stressors for biodiversity, but the ability of these assessments to accurately predict biodiversity status has never been tested or ground-truthed. This relationship has similarly been assumed for the Baltic Sea, especially in areas with impaired status, but has also never been documented. Here we provide a first tentative indication that cumulative human impacts relate to ecosystem condition, i.e. biodiversity status, in the Baltic Sea. Thus, cumulative impact assessments offer a promising tool for informed marine spatial planning, designation...

  20. Cumulative carbon as a policy framework for achieving climate stabilization

    Science.gov (United States)

    Matthews, H. Damon; Solomon, Susan; Pierrehumbert, Raymond

    2012-01-01

    The primary objective of the United Nations Framework Convention on Climate Change is to stabilize greenhouse gas concentrations at a level that will avoid dangerous climate impacts. However, greenhouse gas concentration stabilization is an awkward framework within which to assess dangerous climate change on account of the significant lag between a given concentration level and the eventual equilibrium temperature change. By contrast, recent research has shown that global temperature change can be well described by a given cumulative carbon emissions budget. Here, we propose that cumulative carbon emissions represent an alternative framework that is applicable both as a tool for climate mitigation as well as for the assessment of potential climate impacts. We show first that both atmospheric CO2 concentration at a given year and the associated temperature change are generally associated with a unique cumulative carbon emissions budget that is largely independent of the emissions scenario. The rate of global temperature change can therefore be related to first order to the rate of increase of cumulative carbon emissions. However, transient warming over the next century will also be strongly affected by emissions of shorter lived forcing agents such as aerosols and methane. Non-CO2 emissions therefore contribute to uncertainty in the cumulative carbon budget associated with near-term temperature targets, and may suggest the need for a mitigation approach that considers separately short- and long-lived gas emissions. By contrast, long-term temperature change remains primarily associated with total cumulative carbon emissions owing to the much longer atmospheric residence time of CO2 relative to other major climate forcing agents. PMID:22869803
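
The near-linear relation between cumulative carbon emissions and global temperature change described in the record above is commonly summarized by a transient climate response to cumulative emissions (TCRE) coefficient. A minimal sketch follows; the ~1.65 °C per 1000 PgC value is an illustrative mid-range figure, not one taken from this paper.

```python
def warming_from_cumulative_emissions(cumulative_pgc, tcre=1.65e-3):
    """Approximate global temperature change (deg C) as a linear function
    of cumulative carbon emissions (PgC), using an assumed TCRE of
    ~1.65 deg C per 1000 PgC."""
    return tcre * cumulative_pgc

def allowable_budget(target_deg_c, tcre=1.65e-3):
    """Cumulative carbon emissions budget (PgC) consistent with a given
    temperature target, by inverting the linear TCRE relation."""
    return target_deg_c / tcre
```

This linearity is what makes a cumulative carbon budget usable as a policy framework: a temperature target maps directly to an emissions budget, independent of the emissions pathway (to first order, and neglecting the non-CO2 forcings the abstract flags as the main source of near-term uncertainty).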

  1. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  2. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
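    The Fitch pass the record refers to can be sketched for rooted binary trees; the paper's network extension additionally resolves conflicting assignments at reticulate vertices, which this tree-only sketch omits.

```python
def fitch(tree, leaf_states, root="root"):
    """Fitch's small-parsimony algorithm on a rooted binary tree.
    `tree` maps each internal node to its (left, right) children; leaves
    appear only in `leaf_states`. Returns (state set at root, score)."""
    score = 0

    def post(node):
        nonlocal score
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = post(left), post(right)
        if a & b:               # children agree: keep the intersection
            return a & b
        score += 1              # disagreement forces one substitution
        return a | b

    root_set = post(root)
    return root_set, score

# Classic example: four leaves carrying characters A, A, G, G
tree = {"root": ("n1", "n2"), "n1": ("L1", "L2"), "n2": ("L3", "L4")}
states = {"L1": "A", "L2": "A", "L3": "G", "L4": "G"}
print(fitch(tree, states))  # one substitution suffices
```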

  3. The role of factorial cumulants in reactor neutron noise theory

    International Nuclear Information System (INIS)

    Colombino, A.; Pacilio, N.; Sena, G.

    1979-01-01

    The physical meaning and the combinatorial implications of the factorial cumulant of a state variable, such as the number of neutrons or the number of neutron counts, are specified. Features of the presentation are: (a) the fission process is treated in its entirety without the customary binary emission restriction, (b) the introduction of the factorial cumulants helps in reducing the complexity of the mathematical problems, (c) all the solutions can be obtained analytically. Only the ergodic hypothesis for the neutron population evolution is dealt with. (author)

  4. Super-Resolution Algorithm in Cumulative Virtual Blanking

    Science.gov (United States)

    Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.

    2008-11-01

    The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time-delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base-Station hearability is solved using Cumulative Virtual Blanking. A simple simulator using DS-SS signals is presented. The results show that the MUSIC algorithm improves time-delay estimation in both cases, i.e., whether or not Cumulative Virtual Blanking is carried out.

  5. Is the maximum permissible radiation burden for the population indeed permissible

    International Nuclear Information System (INIS)

    Renesse, R.L. van.

    1975-01-01

    It is argued that legislation based on the ICRP doses will, under economic influences, lead to a situation where the population is exposed to radiation doses near the maximum permissible dose. Due to cumulative radiation effects, this will introduce unacceptable health risks. Therefore, it will be necessary to lower the legal dose limit of 170 millirem per year per person by a factor of 10 to 20.

  6. Aspect Ratio Dependence of Impact Fragmentation

    International Nuclear Information System (INIS)

    Inaoka, H.; Toyosawa, E.; Takayasu, H.; Inaoka, H.

    1997-01-01

    A numerical model of three-dimensional impact fragmentation produces a power-law cumulative fragment mass distribution followed by a flat tail. The result is consistent with an experimental result in a recent paper by Meibom and Balslev [Phys. Rev. Lett. 76, 2492 (1996)]. Our numerical simulation also implies that the fragment mass distribution changes from a power law with a flat tail to a power law with a sudden cutoff, depending on the aspect ratio of the fractured object. copyright 1997 The American Physical Society

  7. Association of cumulative dose of haloperidol with next-day delirium in older medical ICU patients.

    Science.gov (United States)

    Pisani, Margaret A; Araujo, Katy L B; Murphy, Terrence E

    2015-05-01

    To evaluate the association between cumulative dose of haloperidol and next-day diagnosis of delirium in a cohort of older medical ICU patients, with adjustment for its time-dependent confounding with fentanyl and intubation. Prospective, observational study. Medical ICU at an urban, academic medical center. Age 60 years and older admitted to the medical ICU who received at least one dose of haloperidol (n = 93). Of these, 72 patients were intubated at some point in their medical ICU stay, whereas 21 were never intubated. None. Detailed data were collected concerning time, dosage, route of administration of all medications, as well as for important clinical covariates, and daily status of intubation and delirium using the confusion assessment method for the ICU and a chart-based algorithm. Among nonintubated patients, and after adjustment for time-dependent confounding and important covariates, each additional cumulative milligram of haloperidol was associated with 5% higher odds of next-day delirium with odds ratio of 1.05 (credible interval [CI], 1.02-1.09). After adjustment for time-dependent confounding and covariates, intubation was associated with a five-fold increase in odds of next-day delirium with odds ratio of 5.66 (CI, 2.70-12.02). Cumulative dose of haloperidol among intubated patients did not change their already high likelihood of next-day delirium. After adjustment for time-dependent confounding, the positive associations between indicators of intubation and of cognitive impairment and next-day delirium became stronger. These results emphasize the need for more studies regarding the efficacy of haloperidol for treatment of delirium among older medical ICU patients and demonstrate the value of assessing nonintubated patients.
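    The reported per-milligram odds ratio compounds multiplicatively with cumulative dose, assuming (as the model implies) that the log-odds is linear in dose. A minimal illustration:

```python
def cumulative_odds_ratio(per_unit_or, dose_mg):
    """Odds multiplier implied by a per-milligram odds ratio, assuming
    the log-odds of next-day delirium is linear in cumulative dose."""
    return per_unit_or ** dose_mg

# With the reported OR of 1.05 per mg, a cumulative 10 mg implies
# roughly a 1.6-fold increase in the odds of next-day delirium.
print(round(cumulative_odds_ratio(1.05, 10), 2))
```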

  8. Order effect of strain applications in low-cycle cumulative fatigue at high temperatures

    International Nuclear Information System (INIS)

    Bui-Quoc, T.; Biron, A.

    1977-01-01

    Recent test results on cumulative damage with two strain levels on a stainless steel (AISI 304) at room temperature, 537 °C and 650 °C show that the sum of cycle-ratios can be significantly smaller than unity for decreasing levels; the opposite has been noted for increasing levels. As a consequence, the use of the linear damage rule (Miner's law) for life predictions is not conservative in many cases. Since the double linear damage rule (DLDR), originally developed by Manson et al. for room-temperature applications, takes the order effect of cyclic loading into consideration, an extension of this rule to high-temperature cases may be a potentially useful tool. The present paper is concerned with such an extension. For cumulative damage tests with several levels, according to the DLDR, the summation is applied separately for the crack initiation and crack propagation stages, and failure is then assumed to occur when the sum is equal to unity for both stages. Application of the DLDR consists in determining the crack propagation stage N_p associated with a particular number of cycles at failure N, i.e. N_p = P·N^a, where the exponent a and the coefficient P had been assumed to be equal to 0.6 and 14, respectively, for several materials at room temperature. When the DLDR is applied (with a = 0.6 and P = 14) to predict the remaining life at the second strain level (for two-level cumulative damage) for 304 stainless steel at room temperature, 537 °C and 650 °C, the results show that the damage due to the first strain level is over-emphasized for decreasing levels when the damaging cycle-ratio is small. For increasing levels, the damage is underestimated and in some testing conditions this damage is simply ignored
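    The two-phase bookkeeping of the DLDR can be sketched numerically with the coefficients quoted in the record (a = 0.6, P = 14); the phase-accounting details below are a simplified illustration, not the paper's exact procedure.

```python
def remaining_life_dldr(n1, N1, N2, a=0.6, P=14.0):
    """Double linear damage rule: life N at a given level is split into
    a crack initiation phase N0 = N - Np and a propagation phase
    Np = P * N**a, and cycle ratios are summed to unity within each
    phase separately. Returns the predicted remaining cycles at the
    second level after n1 cycles at the first level."""
    Np1, Np2 = P * N1**a, P * N2**a
    N01, N02 = N1 - Np1, N2 - Np2
    if n1 <= N01:                    # still in initiation at level 1
        init_left = 1.0 - n1 / N01
        return init_left * N02 + Np2
    prop_used = (n1 - N01) / Np1     # initiation exhausted at level 1
    return (1.0 - prop_used) * Np2

# Example: 2000 cycles at a level with life N1 = 10,000 cycles, then
# cycling at a harsher level with life N2 = 3,000 cycles to failure.
print(round(remaining_life_dldr(2_000, 10_000, 3_000)))
```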

  9. Residual, direct and cumulative effect of zinc application on wheat and rice yield under rice-wheat syst

    Directory of Open Access Journals (Sweden)

    R. Khan

    2009-05-01

    Full Text Available Zinc (Zn) deficiency is prevalent particularly on calcareous soils of arid and semiarid regions. A field experiment was conducted to investigate the direct, residual and cumulative effect of zinc on the yield of wheat and rice in a permanent layout for two consecutive years, 2004-05 and 2005-06, at the Arid Zone Research Institute, D.I. Khan. The soil under study was deficient in Zn (0.8 mg kg-1). The effect of Zn on yield and on Zn concentrations in leaf and soil was assessed using wheat variety Naseer-2000 and rice variety IRRI-6. Three rates of Zn, ranging from 0 to 10 kg ha-1, were applied to the soil as zinc sulphate (ZnSO4·7H2O) along with basal dose fertilization of nitrogen, phosphorus and potassium. Mature leaf and soil samples were collected at the panicle initiation stage. The results showed that grain yield of wheat and rice was significantly increased by the direct application of 5 and 10 kg Zn ha-1. The highest grain yield of wheat (5467 kg ha-1) was recorded with the direct application of 10 kg Zn ha-1, while 4994 kg ha-1 was recorded with the cumulative application of 10 kg Zn ha-1; the yield increase due to the residual effect of Zn was statistically lower than the cumulative effect of Zn. Maximum paddy yield was recorded with the cumulative application of Zn followed by the residual and directly applied 10 and 5 kg Zn ha-1, respectively. Zn concentration in soils ranged from 0.3 to 1.5 mg kg-1 in wheat and 0.24 to 2.40 mg kg-1 in rice, while in leaves it ranged from 18-48 mg kg-1 in wheat and 15-52 mg kg-1 in rice. The concentration of Zn in soil and leaves increased due to the treatments in the order: cumulative > residual > direct effect > control (without Zn). The yield attributes like 1000-grain weight, number of spikes, spike length and plant height were increased by the residual, direct and cumulative effect of Zn levels; however, the magnitude of increase was higher for the cumulative effect than for the residual and direct effect of Zn, respectively.
Under Zn-deficient soil

  10. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize the tabulated kernels instead of analytical parametrizations and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible difference (0.1%) for the dose calculation in the fine resolution (0.5 mm voxels). But differences become significant when the voxel size increases. As for the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. As for the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 1% of the maximum dose.
Among all three methods, the CCK algorithm
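    The voxel-size effect that motivates the cumulative kernels can be illustrated in 1D with a toy exponential kernel (an assumption for illustration; the paper's kernels are tabulated collapsed-cone kernels): sampling a differential kernel at the voxel centre degrades as voxels grow, while differencing a cumulative kernel at voxel boundaries integrates each voxel exactly.

```python
import numpy as np

def dose_per_voxel_dk(kernel, edges):
    """Differential-kernel (DK) style: sample the point kernel at each
    voxel centre and multiply by the voxel width (midpoint rule)."""
    centres = 0.5 * (edges[:-1] + edges[1:])
    return kernel(centres) * np.diff(edges)

def dose_per_voxel_ck(cum_kernel, edges):
    """Cumulative-kernel (CK) style: difference the tabulated cumulative
    kernel at the voxel boundaries -- exact for any voxel size."""
    return np.diff(cum_kernel(edges))

# Toy point kernel k(r) = exp(-r) with cumulative K(r) = 1 - exp(-r).
k = lambda r: np.exp(-r)
K = lambda r: 1.0 - np.exp(-r)

coarse = np.linspace(0.0, 8.0, 9)   # nine edges -> eight 1-unit "voxels"
dk = dose_per_voxel_dk(k, coarse)
ck = dose_per_voxel_ck(K, coarse)
print(dk.sum(), ck.sum())           # DK shows midpoint-rule error; CK is exact
```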

  11. Cumulative effects of wind turbines. Volume 3: Report on results of consultations on cumulative effects of wind turbines on birds

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This report gives details of the consultations held in developing the consensus approach taken in assessing the cumulative effects of wind turbines. Contributions on bird issues, and views of stakeholders, the Countryside Council for Wales, electric utilities, Scottish Natural Heritage, and the National Wind Power Association are reported. The scoping of key species groups, where cumulative effects might be expected, consideration of other developments, the significance of any adverse effects, mitigation, regional capacity assessments, and predictive models are discussed. Topics considered at two stakeholder workshops are outlined in the appendices.

  12. Cumulative impacts: current research and current opinions at PSW

    Science.gov (United States)

    R. M. Rice

    1987-01-01

    Consideration of cumulative watershed effects (CWEs) has both political and physical aspects. Regardless of the practical usefulness of present methods of dealing with CWEs, the legal requirement to address them remains. Management of federal land is regulated by the National Environmental Policy Act (NEPA) and the Federal Water Pollution Control Act of 1972. The...

  13. Cumulative Risks of Foster Care Placement for Danish Children

    DEFF Research Database (Denmark)

    Fallesen, Peter; Emanuel, Natalia; Wildeman, Christopher

    2014-01-01

    children. Our results also show some variations by parental ethnicity and sex, but these differences are small. Indeed, they appear quite muted relative to racial/ethnic differences in these risks in the United States. Last, though cumulative risks are similar between Danish and American children...

  14. Disintegration of a profiled shock wave at the cumulation point

    International Nuclear Information System (INIS)

    Kaliski, S.

    1978-01-01

    The disintegration at the cumulation point is analyzed of a shock wave generated with the aid of a profiled pressure. The quantitative relations are analyzed for the disintegration waves for typical compression parameters in systems of thermonuclear microfusion. The quantitative conclusions are drawn for the application of simplifying approximate calculations in problems of microfusion. (author)

  15. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  16. Steps and Pips in the History of the Cumulative Recorder

    Science.gov (United States)

    Lattal, Kennon A.

    2004-01-01

    From its inception in the 1930s until very recent times, the cumulative recorder was the most widely used measurement instrument in the experimental analysis of behavior. It was an essential instrument in the discovery and analysis of schedules of reinforcement, providing the first real-time analysis of operant response rates and patterns. This…

  17. The effects of cumulative practice on mathematics problem solving.

    Science.gov (United States)

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.

  18. Anti-irritants II: Efficacy against cumulative irritation

    DEFF Research Database (Denmark)

    Andersen, Flemming; Hedegaard, Kathryn; Petersen, Thomas Kongstad

    2006-01-01

    window of opportunity in which to demonstrate efficacy. Therefore, the effect of AI was studied in a cumulative irritation model by inducing irritant dermatitis with 10 min daily exposures for 5+4 days (no irritation on weekend) to 1% sodium lauryl sulfate on the right and 20% nonanoic acid on the left...

  19. Cumulative Beam Breakup with Time-Dependent Parameters

    CERN Document Server

    Delayen, J R

    2004-01-01

    A general analytical formalism developed recently for cumulative beam breakup (BBU) in linear accelerators with arbitrary beam current profile and misalignments [1] is extended to include time-dependent parameters such as energy chirp or rf focusing in order to reduce BBU-induced instabilities and emittance growth. Analytical results are presented and applied to practical accelerator configurations.

  20. On the mechanism of hadron cumulative production on nucleus

    International Nuclear Information System (INIS)

    Efremov, A.V.

    1976-01-01

    A mechanism of cumulative production of hadrons on nuclei is proposed which is similar to that of high perpendicular-momentum hadron production. The cross section obtained describes the main qualitative features of such processes, e.g., the initial energy dependence, the atomic number behaviour, and the dependence on the rest mass of the produced particle and on its production angle

  1. Hyperscaling breakdown and Ising spin glasses: The Binder cumulant

    Science.gov (United States)

    Lundow, P. H.; Campbell, I. A.

    2018-02-01

    Among the Renormalization Group Theory scaling rules relating critical exponents, there are hyperscaling rules involving the dimension of the system. It is well known that in Ising models hyperscaling breaks down above the upper critical dimension. It was shown by Schwartz (1991) that the standard Josephson hyperscaling rule can also break down in Ising systems with quenched random interactions. A related Renormalization Group Theory hyperscaling rule links the critical exponents for the normalized Binder cumulant and the correlation length in the thermodynamic limit. An appropriate scaling approach for analyzing measurements from criticality to infinite temperature is first outlined. Numerical data on the scaling of the normalized correlation length and the normalized Binder cumulant are shown for the canonical Ising ferromagnet model in dimension three where hyperscaling holds, for the Ising ferromagnet in dimension five (so above the upper critical dimension) where hyperscaling breaks down, and then for Ising spin glass models in dimension three where the quenched interactions are random. For the Ising spin glasses there is a breakdown of the normalized Binder cumulant hyperscaling relation in the thermodynamic limit regime, with a return to size independent Binder cumulant values in the finite-size scaling regime around the critical region.
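    The normalized Binder cumulant discussed above has a standard definition that is easy to compute from sampled magnetization (or overlap) values; the limiting values below for the disordered and ordered phases follow directly from Gaussian and two-point distributions.

```python
import numpy as np

def binder_cumulant(m):
    """Binder cumulant U = 1 - <m^4> / (3 <m^2>^2) of a sample of
    magnetization (or spin-glass overlap) values m."""
    m = np.asarray(m, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

rng = np.random.default_rng(0)
# Disordered phase: Gaussian m gives <m^4> = 3 <m^2>^2, so U -> 0.
u_para = binder_cumulant(rng.normal(size=200_000))
# Ordered phase: m = +/- m0 gives <m^4> = <m^2>^2, so U -> 2/3.
u_ordered = binder_cumulant(rng.choice([-1.0, 1.0], size=200_000))
print(u_para, u_ordered)
```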

  2. How to manage the cumulative flood safety of catchment dams ...

    African Journals Online (AJOL)

    Dam safety is a significant issue being taken seriously worldwide. However, in Australia, although much attention is being devoted to the medium- to large-scale dams, minimal attention is being paid to the serious potential problems associated with smaller dams, particularly the potential cumulative safety threats they pose ...

  3. Cumulative Beam Breakup due to Resistive-Wall Wake

    International Nuclear Information System (INIS)

    Wang, J.-M.

    2004-01-01

    The cumulative beam breakup problem excited by the resistive-wall wake is formulated. An approximate analytic method of finding the asymptotic behavior for the transverse bunch displacement is developed and solved. Comparison between the asymptotic analytical expression and the direct numerical solution is presented. Good agreement is found. The criterion of using the asymptotic analytical expression is discussed

  4. Analysis of sensory ratings data with cumulative link models

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Brockhoff, Per B.

    2013-01-01

    Examples of categorical rating scales include discrete preference, liking and hedonic rating scales. Data obtained on these scales are often analyzed with normal linear regression methods or with omnibus Pearson chi2 tests. In this paper we propose to use cumulative link models that allow for reg...
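    The cumulative link models proposed above relate category probabilities to threshold parameters through a link function. A minimal sketch of the cumulative logit (proportional odds) case, with illustrative threshold values chosen here as assumptions:

```python
import numpy as np

def cumlogit_probs(thresholds, eta):
    """Category probabilities under a cumulative logit model:
    P(Y <= j) = logistic(theta_j - eta), differenced to per-category
    probabilities. `eta` is the linear predictor for a condition."""
    logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
    cum = np.concatenate(([0.0], logistic(np.asarray(thresholds) - eta), [1.0]))
    return np.diff(cum)

# A 5-point hedonic scale: 4 thresholds define 5 rating categories.
theta = [-2.0, -0.5, 0.5, 2.0]
p_control = cumlogit_probs(theta, 0.0)  # baseline condition
p_treated = cumlogit_probs(theta, 1.0)  # positive effect shifts ratings up
print(p_control.round(3), p_treated.round(3))
```

Unlike an omnibus chi-squared test, the model captures the ordering of the categories through the shared linear predictor.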

  5. Tests of Cumulative Prospect Theory with graphical displays of probability

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2008-10-01

    Full Text Available Recent research reported evidence that contradicts cumulative prospect theory and the priority heuristic. The same body of research also violates two editing principles of original prospect theory: cancellation (the principle that people delete any attribute that is the same in both alternatives before deciding between them) and combination (the principle that people combine branches leading to the same consequence by adding their probabilities). This study was designed to replicate previous results and to test whether the violations of cumulative prospect theory might be eliminated or reduced by using formats for presentation of risky gambles in which cancellation and combination could be facilitated visually. Contrary to the idea that decision behavior contradicting cumulative prospect theory and the priority heuristic would be altered by use of these formats, however, data with two new graphical formats as well as fresh replication data continued to show the patterns of evidence that violate cumulative prospect theory, the priority heuristic, and the editing principles of combination and cancellation. Systematic violations of restricted branch independence also contradicted predictions of "stripped" prospect theory (subjectively weighted additive utility without the editing rules).

  6. Implications of applying cumulative risk assessment to the workplace.

    Science.gov (United States)

    Fox, Mary A; Spicer, Kristen; Chosewood, L Casey; Susi, Pam; Johns, Douglas O; Dotson, G Scott

    2018-06-01

    Multiple changes are influencing work, workplaces and workers in the US including shifts in the main types of work and the rise of the 'gig' economy. Work and workplace changes have coincided with a decline in unions and associated advocacy for improved safety and health conditions. Risk assessment has been the primary method to inform occupational and environmental health policy and management for many types of hazards. Although often focused on one hazard at a time, risk assessment frameworks and methods have advanced toward cumulative risk assessment recognizing that exposure to a single chemical or non-chemical stressor rarely occurs in isolation. We explore how applying cumulative risk approaches may change the roles of workers and employers as they pursue improved health and safety and elucidate some of the challenges and opportunities that might arise. Application of cumulative risk assessment should result in better understanding of complex exposures and health risks with the potential to inform more effective controls and improved safety and health risk management overall. Roles and responsibilities of both employers and workers are anticipated to change with potential for a greater burden of responsibility on workers to address risk factors both inside and outside the workplace that affect health at work. A range of policies, guidance and training have helped develop cumulative risk assessment for the environmental health field and similar approaches are available to foster the practice in occupational safety and health. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Hierarchical Bayesian parameter estimation for cumulative prospect theory

    NARCIS (Netherlands)

    Nilsson, H.; Rieskamp, J.; Wagenmakers, E.-J.

    2011-01-01

    Cumulative prospect theory (CPT Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and

  8. An Axiomatization of Cumulative Prospect Theory for Decision under Risk

    NARCIS (Netherlands)

    Wakker, P.P.; Chateauneuf, A.

    1999-01-01

    Cumulative prospect theory was introduced by Tversky and Kahneman so as to combine the empirical realism of their original prospect theory with the theoretical advantages of Quiggin's rank-dependent utility. Preference axiomatizations were provided in several papers. All those axiomatizations,

  9. Cumulative assessment: does it improve students’ knowledge acquisition and retention?

    NARCIS (Netherlands)

    Cecilio Fernandes, Dario; Nagtegaal, Manouk; Noordzij, Gera; Tio, Rene

    2017-01-01

    Introduction Assessment for learning means changing students’ behaviour regarding their learning. Cumulative assessment has been shown to increase students’ self-study time and spread their study time throughout a course. However, there was no difference regarding students’ knowledge at the end of

  10. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  11. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  12. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

    This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles occurred during the irreversible-adiabatic processes is considered by using isentropic efficiencies of compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may be guidelines to the engine designers
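    The paper's irreversible TCE model is far more general, but the maximum-power (MP) criterion can be illustrated with the classical reversible air-standard Otto cycle as a minimal special case; the temperature ratio tau = 6 and gamma = 1.4 below are illustrative assumptions.

```python
import numpy as np

GAMMA = 1.4  # air-standard specific heat ratio (assumption)

def otto_dimensionless_work(r, tau):
    """Dimensionless net work w = W/(m*cv*T1) of a reversible
    air-standard Otto cycle with compression ratio r and cycle
    temperature ratio tau = Tmax/Tmin:
    w = (tau - r**(g-1)) * (1 - r**(1-g))."""
    return (tau - r**(GAMMA - 1.0)) * (1.0 - r**(1.0 - GAMMA))

r = np.linspace(1.5, 30.0, 2000)
w = otto_dimensionless_work(r, tau=6.0)
r_mp = r[np.argmax(w)]  # compression ratio at maximum power output
# Setting dw/dr = 0 gives the analytic optimum r_MP = tau**(1/(2*(g-1))),
# while thermal efficiency 1 - r**(1-g) keeps rising monotonically with r.
print(r_mp, 6.0 ** (1.0 / (2.0 * (GAMMA - 1.0))))
```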

  13. Financial Key Ratios

    OpenAIRE

    Tănase Alin-Eliodor

    2014-01-01

    This article focuses on computing techniques starting from trial balance data regarding financial key ratios. There are presented activity, liquidity, solvency and profitability financial key ratios. It is presented a computing methodology in three steps based on a trial balance.
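    The activity, liquidity, solvency and profitability ratios mentioned above can be computed directly from trial-balance aggregates; the field names and figures below are illustrative assumptions, not taken from the article.

```python
def key_ratios(tb):
    """Liquidity, solvency, profitability and activity ratios from
    trial-balance aggregates (field names here are illustrative)."""
    return {
        "current_ratio": tb["current_assets"] / tb["current_liabilities"],
        "debt_to_equity": tb["total_liabilities"] / tb["equity"],
        "net_margin": tb["net_income"] / tb["revenue"],
        "asset_turnover": tb["revenue"] / tb["total_assets"],
    }

# Hypothetical aggregates compiled from a trial balance:
tb = {"current_assets": 500.0, "current_liabilities": 250.0,
      "total_liabilities": 600.0, "equity": 400.0,
      "net_income": 90.0, "revenue": 1_200.0, "total_assets": 1_000.0}
print(key_ratios(tb))
```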

  14. The challenges and opportunities in cumulative effects assessment

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Melissa M., E-mail: mfoley@usgs.gov [U.S. Geological Survey, Pacific Coastal and Marine Science Center, 400 Natural Bridges, Dr., Santa Cruz, CA 95060 (United States); Center for Ocean Solutions, Stanford University, 99 Pacific St., Monterey, CA 93940 (United States); Mease, Lindley A., E-mail: lamease@stanford.edu [Center for Ocean Solutions, Stanford University, 473 Via Ortega, Stanford, CA 94305 (United States); Martone, Rebecca G., E-mail: rmartone@stanford.edu [Center for Ocean Solutions, Stanford University, 99 Pacific St., Monterey, CA 93940 (United States); Prahler, Erin E. [Center for Ocean Solutions, Stanford University, 473 Via Ortega, Stanford, CA 94305 (United States); Morrison, Tiffany H., E-mail: tiffany.morrison@jcu.edu.au [ARC Centre of Excellence for Coral Reef Studies, James Cook University, Townsville, QLD, 4811 (Australia); Murray, Cathryn Clarke, E-mail: cmurray@pices.int [WWF-Canada, 409 Granville Street, Suite 1588, Vancouver, BC V6C 1T2 (Canada); Wojcik, Deborah, E-mail: deb.wojcik@duke.edu [Nicholas School for the Environment, Duke University, 9 Circuit Dr., Durham, NC 27708 (United States)

    2017-01-15

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  15. The challenges and opportunities in cumulative effects assessment

    International Nuclear Information System (INIS)

    Foley, Melissa M.; Mease, Lindley A.; Martone, Rebecca G.; Prahler, Erin E.; Morrison, Tiffany H.; Murray, Cathryn Clarke; Wojcik, Deborah

    2017-01-01

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  16. The challenges and opportunities in cumulative effects assessment

    Science.gov (United States)

    Foley, Melissa M.; Mease, Lindley A; Martone, Rebecca G; Prahler, Erin E; Morrison, Tiffany H; Clarke Murray, Cathryn; Wojcik, Deborah

    2016-01-01

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  17. Spatial dispersion modeling of 90Sr by point cumulative semivariogram at Keban Dam Lake, Turkey

    International Nuclear Information System (INIS)

    Kuelahci, Fatih; Sen, Zekai

    2007-01-01

    Spatial analysis of the artificial radionuclide 90Sr, present as a consequence of global fallout and the Chernobyl nuclear accident, has been carried out using the point cumulative semivariogram (PCSV) technique, based on measurements at 40 surface water stations in Keban Dam Lake during March, April, and May 2006. This technique is a convenient tool for obtaining the regional variability features around each sampling point, which also yields the structural effects in the vicinity of that point. It presents the regional effect of all the other sites within the study area on the site concerned. In order to observe the change of 90Sr, five models are constructed. Additionally, the technique provides a measure of cumulative similarity of the regional variable, 90Sr, around any measurement site, and hence it is possible to draw regional similarity maps at any desired distance around each station. In this paper, such similarity maps are drawn for a set of distances. 90Sr activities at distances of approximately 4.5 km from the stations show the maximum similarity
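    The PCSV idea described above (accumulating half squared differences with surrounding stations, ordered by increasing distance from a reference site) can be illustrated with a toy calculation. This is a minimal sketch of the concept only; the function name, station coordinates, and activity values are hypothetical, not taken from the study:

```python
import numpy as np

def point_cumulative_semivariogram(coords, values, ref):
    """Point cumulative semivariogram (PCSV) at reference site `ref`:
    half squared differences to all other sites, accumulated in order
    of increasing distance from the reference site (sketch only)."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(coords - coords[ref], axis=1)
    order = np.argsort(d)
    order = order[order != ref]           # drop the reference site itself
    gamma = 0.5 * (values[order] - values[ref]) ** 2
    return d[order], np.cumsum(gamma)     # distances, cumulative semivariogram

# toy example: 5 stations on a line with slowly varying activity
coords = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]]
vals = [1.0, 1.1, 1.3, 1.6, 2.0]
dist, pcsv = point_cumulative_semivariogram(coords, vals, ref=0)
```

By construction the PCSV is non-decreasing with distance; its shape around each station is what the similarity maps summarize.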

  18. From Fibonacci Sequence to the Golden Ratio

    Directory of Open Access Journals (Sweden)

    Alberto Fiorenza

    2013-01-01

    We consider the well-known characterization of the Golden ratio as the limit of the ratio of consecutive terms of the Fibonacci sequence, and we give an explanation of this property in the framework of the theory of difference equations. We show that the Golden ratio coincides with this limit not because it is the root with maximum modulus and multiplicity of the characteristic polynomial, but, from a more general point of view, because it is the root with maximum modulus and multiplicity of a restricted set of roots, which in this special case coincides with the two roots of the characteristic polynomial. This new perspective is the heart of the characterization of the limit of the ratio of consecutive terms of all linear homogeneous recurrences with constant coefficients, without any assumption on the roots of the characteristic polynomial, which may, in particular, also be complex rather than real.
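    The characterization above is easy to verify numerically: the ratio of consecutive Fibonacci terms converges rapidly to the Golden ratio φ = (1 + √5)/2, the larger root of x² = x + 1. A minimal check:

```python
def fib_ratio(n):
    """Ratio of consecutive Fibonacci terms after n recurrence steps,
    starting from F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

# golden ratio: positive root of the characteristic polynomial x^2 - x - 1
phi = (1 + 5 ** 0.5) / 2
delta = abs(fib_ratio(40) - phi)   # tiny: convergence is geometric
```

The error shrinks roughly like φ⁻²ⁿ, so a few dozen terms already reach machine precision.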

  19. Measurement of four-particle cumulants and symmetric cumulants with subevent methods in small collision systems with the ATLAS detector

    CERN Document Server

    Derendarz, Dominik; The ATLAS collaboration

    2018-01-01

    Measurements of symmetric cumulants SC(n,m) = ⟨v_n²v_m²⟩ − ⟨v_n²⟩⟨v_m²⟩ for (n,m) = (2,3) and (2,4), and of asymmetric cumulants AC(n), are presented in pp, p+Pb and peripheral Pb+Pb collisions at various collision energies, aiming to probe the long-range collective nature of multi-particle production in small systems. Results are obtained using the standard cumulant method, as well as the two-subevent and three-subevent cumulant methods. Results from the standard method are found to be strongly biased by non-flow correlations, as indicated by their strong sensitivity to the chosen event-class definition. A systematic reduction of non-flow effects is observed when the two-subevent method is used, and the results become independent of the event-class definition when the three-subevent method is used. The measured SC(n,m) shows an anti-correlation between v2 and v3, and a positive correlation between v2 and v4. The magnitude of SC(n,m) is constant with Nch in pp collisions, but increases with Nch in p+Pb and Pb+Pb collisions. ...

  20. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented as a means to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-prediction filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
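    The Levinson recursion mentioned above solves the Toeplitz normal equations for the prediction-error filter order by order, producing the reflection coefficients whose magnitude below 1 underlies the stability claim. A sketch of that solver, assuming a known autocorrelation sequence (the AR(1) test data are illustrative, not from the paper):

```python
import numpy as np

def levinson(r, order):
    """Levinson recursion: solve the Toeplitz normal equations for the
    prediction-error filter from an autocorrelation sequence r[0..order].
    Returns filter coefficients, final prediction error, and the
    reflection coefficients (sketch of the solver named in the abstract)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    refl = []
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / err    # reflection coefficient
        prev = a.copy()                        # avoid in-place aliasing
        a[1:m + 1] = prev[1:m + 1] + k * prev[m - 1::-1]
        err *= (1.0 - k * k)                   # |k| < 1 keeps err positive
        refl.append(k)
    return a, err, refl

# autocorrelation of an AR(1) process with coefficient 0.5
r = np.array([1.0, 0.5, 0.25])
a, err, refl = levinson(r, 2)
```

For AR(1) data the order-2 filter recovers [1, −0.5, 0] with the second reflection coefficient vanishing, as expected.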

  1. Morphology of the cumulative logistic distribution when used as a model of radiologic film characteristic curves

    International Nuclear Information System (INIS)

    Prince, J.R.

    1988-01-01

    The cumulative logistic distribution (CLD) is an empiric model for film characteristic curves. Characterizing the shape parameters of the CLD in terms of contrast, latitude, and speed is required. The CLD is written as Y − F = D/[1 + exp(−(K + k1·X))], where Y is the optical density (OD) at log exposure X, F is the fog level, D is a constant equal to Dm − F, K and k1 are shape parameters, and Dm is the maximum attainable OD. Further analysis demonstrates that when K is held constant, k1 characterizes contrast (the larger k1, the greater the contrast) and hence latitude; when k1 is held constant, K characterizes film speed (the larger K, the faster the film). These equations and concepts are further illustrated with examples from radioscintigraphy, diagnostic radiology, and light sensitometry
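    The logistic form above is straightforward to evaluate: at the mid-exposure point X = −K/k1 the curve sits halfway between fog and Dm, and far to the right it saturates at Dm. A minimal sketch with hypothetical parameter values (not from the paper; K is written k0 to keep plain-ASCII names):

```python
import math

def optical_density(x, fog, dmax, k0, k1):
    """Cumulative logistic model of a film characteristic curve:
    Y = F + D / (1 + exp(-(K + k1*x))), with D = Dmax - F."""
    d = dmax - fog
    return fog + d / (1.0 + math.exp(-(k0 + k1 * x)))

# mid-exposure point (k0 = 0, so x = 0): halfway between fog and Dmax
mid = optical_density(0.0, fog=0.2, dmax=3.2, k0=0.0, k1=2.0)      # 1.7
# far right: the curve saturates near Dmax
shoulder = optical_density(10.0, fog=0.2, dmax=3.2, k0=0.0, k1=2.0)
```

Increasing k1 steepens the mid-curve slope (D·k1/4 at the midpoint), which is the contrast behaviour the abstract describes.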

  2. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are found by differentiating the equation for power and locating its maximum. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
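    The maximization step can be sketched numerically for a standard single-diode panel model, P(V) = V·I(V). The parameter values below are hypothetical placeholders (not from the article), and a fine scan stands in for the article's analytic step of setting dP/dV = 0:

```python
import math

# hypothetical single-diode panel parameters (illustrative only):
# photocurrent [A], diode saturation current [A], ideality * thermal voltage * cells [V]
IPH, I0, A = 3.0, 1e-7, 1.2

def current(v):
    """Single-diode model I(V) = Iph - I0*(exp(V/A) - 1)."""
    return IPH - I0 * math.expm1(v / A)

def power(v):
    return v * current(v)

# locate the maximum-power voltage by a fine scan over 0..22 V
v_mp = max((i * 0.001 for i in range(0, 22000)), key=power)
p_mp, i_mp = power(v_mp), current(v_mp)
```

The scan lands just below the open-circuit voltage, where the product V·I peaks; with these parameters that is roughly 17.4 V.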

  3. A tropospheric ozone maximum over the equatorial Southern Indian Ocean

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and in tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxide (NOx) emissions, which account for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic, and biogenic sources to the O3 maximum are rather small. O3 production in the lightning outflow from Central Africa and South America peaks in May and is directly responsible for the O3 maximum over the western ESIO, while lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.

  4. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although an explanation for this phenomenon has been found recently (low concentrations of odd hydrogen cause a subsequent decrease in odd-oxygen losses), models showed significant deviations from existing observations until recently. The good coverage of polar-night regions by GOMOS data has allowed, for the first time, spatial and temporal observational distributions of the night-time ozone mixing ratio in the mesosphere to be obtained.

    The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar-night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar-night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end of winter but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  5. High cumulative incidence of urinary tract transitional cell carcinoma after kidney transplantation in Taiwan.

    Science.gov (United States)

    Wu, Ming-Ju; Lian, Jong-Da; Yang, Chi-Rei; Cheng, Chi-Hung; Chen, Cheng-Hsu; Lee, Wen-Chin; Shu, Kuo-Hsiung; Tang, Ming-Jer

    2004-06-01

    Cancer is a well-documented complication after kidney transplantation. An increased incidence of bladder cancer had been reported in long-term hemodialysis patients in Taiwan. Herein, the authors report a very high cumulative incidence of transitional cell carcinoma (TCC) of the urinary tract after kidney transplantation in Taiwan. The authors retrospectively reviewed the clinical data, medical records, and outcomes of 730 kidney transplant (KT) recipients. The cumulative incidence of TCC was computed, and the Cox regression method was used to analyze the role of potential risk factors. After a mean follow-up duration of 72.2 +/- 54.4 months, 69 cancers were diagnosed in 63 (8.6%) KT recipients. Of these, 30 cases (4.1%) were TCC. The cumulative incidence of TCC was 3.0% after 3 years of graft survival, increasing to 7.2% at 6 years and 17.5% at 10 years. Compared with the general population in Taiwan, the standardized mortality ratio was 398.4 (male, 192.6; female, 875.6). Painless gross hematuria was the cardinal initial symptom in 22 (73.3%) of the 30 KT recipients with TCC. Another 4 (13.3%) KT recipients with TCC presented with chronic urinary tract infection (UTI). Bilateral nephroureterectomy with removal of bladder cuffs was performed in 18 (60%) patients. Synchronous TCC in the bilateral upper urinary tracts was confirmed in 11 (36.7%) of the KT recipients with TCC. Age at the time of KT, female sex, compound analgesic usage, Chinese herb usage, and underground water intake were statistically significant risk factors. TCC was the most common malignancy after KT in Taiwan, with a cumulative incidence of 4.1%. This study indicates that hematuria and chronic UTI are the initial presentations of TCC in KT recipients. Careful urologic screening is indicated for patients at high risk for TCC, including those of older age and those with compound analgesic usage, Chinese herb usage, and underground water intake, as well as women.

  6. Predictive Value of Cumulative Blood Pressure for All-Cause Mortality and Cardiovascular Events

    Science.gov (United States)

    Wang, Yan Xiu; Song, Lu; Xing, Ai Jun; Gao, Ming; Zhao, Hai Yan; Li, Chun Hui; Zhao, Hua Ling; Chen, Shuo Hua; Lu, Cheng Zhi; Wu, Shou Ling

    2017-02-01

    The predictive value of cumulative blood pressure (BP) for all-cause mortality and cardiovascular and cerebrovascular events (CCE) has hardly been studied. In this prospective cohort study, including 52,385 participants from the Kailuan Group who attended three medical examinations and were free of CCE, the impact of cumulative systolic BP (cumSBP) and cumulative diastolic BP (cumDBP) on all-cause mortality and CCEs was investigated. For the study population, the mean (standard deviation) age was 48.82 (11.77) years, and 40,141 (76.6%) were male. The follow-up for all-cause mortality and CCEs was 3.96 (0.48) and 2.98 (0.41) years, respectively. Multivariate Cox proportional hazards regression analysis showed that, for every 10 mm Hg·year increase in cumSBP and 5 mm Hg·year increase in cumDBP, the hazard ratios for all-cause mortality were 1.013 (1.006, 1.021) and 1.012 (1.006, 1.018); for CCEs, 1.018 (1.010, 1.027) and 1.017 (1.010, 1.024); for stroke, 1.021 (1.011, 1.031) and 1.018 (1.010, 1.026); and for myocardial infarction, 1.013 (0.996, 1.030) and 1.015 (1.000, 1.029). Using natural spline function analysis, cumSBP and cumDBP showed a J-curve relationship with CCEs and a U-curve relationship with stroke (ischemic and hemorrhagic). Therefore, increases in cumSBP and cumDBP were predictive of all-cause mortality, CCEs, and stroke.
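    Because the Cox model makes the log-hazard linear in the covariate, a hazard ratio quoted per fixed increment rescales to any other increment by exponentiation. A small sketch using the cumSBP point estimate quoted above (1.013 per 10 mm Hg·year; the rescaling is a property of the model, not an additional result of the study):

```python
def scaled_hr(hr_per_unit, unit, delta):
    """Rescale a proportional-hazards HR quoted per `unit` of exposure
    to an increment `delta`: HR(delta) = HR(unit) ** (delta / unit)."""
    return hr_per_unit ** (delta / unit)

# HR of 1.013 per 10 mm Hg·year of cumSBP, rescaled to 100 mm Hg·year
hr_100 = scaled_hr(1.013, 10.0, 100.0)   # about 1.14
```

So a tenfold larger cumulative exposure difference implies roughly a 14% higher hazard under the fitted model.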

  7. Cumulants of heat transfer across nonlinear quantum systems

    Science.gov (United States)

    Li, Huanan; Agarwalla, Bijay Kumar; Li, Baowen; Wang, Jian-Sheng

    2013-12-01

    We consider thermal conduction across a general nonlinear phononic junction. Based on a two-time observation protocol and the nonequilibrium Green's function method, heat transfer in steady-state regimes is studied, and practical formulas for the calculation of the cumulant generating function are obtained. As an application, the general formalism is used to study anharmonic effects on the fluctuation of steady-state heat transfer across a single-site junction with a quartic nonlinear on-site pinning potential. An explicit nonlinear modification to the cumulant generating function, exact up to first order, is given, in which the Gallavotti-Cohen fluctuation symmetry is found to remain valid. Numerically, a self-consistent procedure is introduced, which works well for strong nonlinearity.

  8. A cumulant functional for static and dynamic correlation

    International Nuclear Information System (INIS)

    Hollett, Joshua W.; Hosseini, Hessam; Menzies, Cameron

    2016-01-01

    A functional for the cumulant energy is introduced. The functional is composed of a pair-correction and static and dynamic correlation energy components. The pair-correction and static correlation energies are functionals of the natural orbitals and the occupancy transferred between near-degenerate orbital pairs, rather than the orbital occupancies themselves. The dynamic correlation energy is a functional of the statically correlated on-top two-electron density. The on-top density functional used in this study is the well-known Colle-Salvetti functional. Using the cc-pVTZ basis set, the functional effectively models the bond dissociation of H2, LiH, and N2 with equilibrium bond lengths and dissociation energies comparable to those provided by multireference second-order perturbation theory. The performance of the cumulant functional is less impressive for HF and F2, mainly due to an underestimation of the dynamic correlation energy by the Colle-Salvetti functional.

  9. Fragmentation of tensor polarized deuterons into cumulative pions

    International Nuclear Information System (INIS)

    Afanas'ev, S.; Arkhipov, V.; Bondarev, V.

    1998-01-01

    The tensor analyzing power T20 of the reaction d↑ + A → π−(0°) + X has been measured in the fragmentation of 9 GeV tensor-polarized deuterons into pions with momenta from 3.5 to 5.3 GeV/c on hydrogen, beryllium, and carbon targets. This kinematic range corresponds to the region of cumulative hadron production with the cumulative variable xc from 1.08 to 1.76. The values of T20 have been found to be small and consistent with positive values. This contradicts the predictions based on a direct mechanism assuming an NN collision between a high-momentum nucleon in the deuteron and a target nucleon (NN → NNπ)

  10. Experience of cumulative effects assessment in the UK

    Directory of Open Access Journals (Sweden)

    Piper Jake

    2004-01-01

    Cumulative effects assessment (CEA) is a development of environmental impact assessment which attempts to take into account the wider picture of the impacts that may affect the environment as a result of multiple or linear projects, or development plans. CEA is seen as a further valuable tool in promoting sustainable development. The broader canvas upon which the assessment is made leads to a suite of issues such as complexity in methods and in the assessment of significance, the desirability of co-operation between developers and other parties, and new ways of addressing mitigation and monitoring. After outlining the legislative position and the process of CEA, this paper looks at three case studies in the UK where cumulative assessment has been carried out; the cases concern wind farms, major infrastructure, and off-shore developments.

  11. Ecosystem assessment methods for cumulative effects at the regional scale

    International Nuclear Information System (INIS)

    Hunsaker, C.T.

    1989-01-01

    Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem processes and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts on the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated databases, associated modeling efforts, and boundary definition at the regional scale

  12. Polarization in high-pT and cumulative hadron production

    International Nuclear Information System (INIS)

    Efremov, A.V.

    1978-01-01

    The final hadron polarization in high-pT processes is analyzed in the parton hard-scattering picture. A scaling assumption allows a correct qualitative description to be given for the pT behaviour of the polarization, or for its escape-angle behaviour in cumulative production. Energy scaling and a weak dependence on the beam and target type are predicted. A method is proposed for measuring the polarization of hadron jets

  13. Seasonal climate change patterns due to cumulative CO2 emissions

    Science.gov (United States)

    Partanen, Antti-Ilari; Leduc, Martin; Damon Matthews, H.

    2017-07-01

    Cumulative CO2 emissions are near linearly related to both global and regional changes in annual-mean surface temperature. These relationships are known as the transient climate response to cumulative CO2 emissions (TCRE) and the regional TCRE (RTCRE), and have been shown to remain approximately constant over a wide range of cumulative emissions. Here, we assessed how well this relationship holds for seasonal patterns of temperature change, as well as for annual-mean and seasonal precipitation patterns. We analyzed an idealized scenario with CO2 concentration growing at an annual rate of 1%, using data from 12 Earth system models from the Coupled Model Intercomparison Project Phase 5 (CMIP5). Seasonal RTCRE values for temperature varied considerably, with the highest seasonal variation evident in the Arctic, where the RTCRE was about 5.5 °C per Tt C for boreal winter and about 2.0 °C per Tt C for boreal summer. The precipitation response in the Arctic was also stronger during boreal winter than during other seasons. We found that emission-normalized seasonal patterns of temperature change were relatively robust with respect to time, though they were sub-linear with respect to emissions, particularly near the Arctic. Moreover, RTCRE patterns for precipitation could not be quantified robustly due to the large internal variability of precipitation. Our results suggest that cumulative CO2 emissions are a useful metric to predict regional and seasonal changes in precipitation and temperature. This extension of the TCRE framework to seasonal and regional climate change is helpful for communicating the link between emissions and climate change to policy-makers and the general public, and is well-suited for impact studies that could make use of estimated regional-scale climate changes that are consistent with the carbon budgets associated with global temperature targets.
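    The (R)TCRE framework reduces to a simple proportionality: seasonal-mean regional warming ≈ RTCRE × cumulative emissions. A minimal sketch using the Arctic point estimates quoted above (the emission amount is an arbitrary illustrative input):

```python
# RTCRE point estimates quoted in the abstract for the Arctic (°C per Tt C)
RTCRE_ARCTIC = {"boreal winter": 5.5, "boreal summer": 2.0}

def regional_warming(cum_emissions_ttc, rtcre):
    """Linear (R)TCRE scaling: warming proportional to cumulative CO2
    emissions (the near-linearity discussed in the abstract)."""
    return rtcre * cum_emissions_ttc

# warming implied by 0.5 Tt C of cumulative emissions
winter = regional_warming(0.5, RTCRE_ARCTIC["boreal winter"])  # 2.75 °C
summer = regional_warming(0.5, RTCRE_ARCTIC["boreal summer"])  # 1.0 °C
```

The same carbon budget thus implies nearly three times more Arctic warming in winter than in summer under these estimates.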

  14. Firm heterogeneity, Rules of Origin and Rules of Cumulation

    OpenAIRE

    Bombarda , Pamela; Gamberoni , Elisa

    2013-01-01

    We analyse the impact of relaxing rules of origin (ROOs) in a simple setting with heterogeneous firms that buy intermediate inputs from domestic and foreign sources. In particular, we consider the impact of switching from bilateral to diagonal cumulation when the use of preferences (instead of payment of the MFN tariff) requires respecting rules of origin. We find that relaxing the restrictiveness of the ROOs leads the least productive exporters to stop exporting. The empirical part confirms thes...

  15. Cumulant approach to dynamical correlation functions at finite temperatures

    International Nuclear Information System (INIS)

    Tran Minhtien.

    1993-11-01

    A new theoretical approach, based on the introduction of cumulants, for calculating thermodynamic averages and dynamical correlation functions at finite temperatures is developed. The method is formulated in Liouville space instead of Hilbert space and can be applied to operators which are not required to satisfy fermion or boson commutation relations. The application of partitioning and projection methods to the dynamical correlation functions is discussed. The present method can be applied to weakly as well as strongly correlated systems. (author). 9 refs

  16. Severe occupational hand eczema, job stress and cumulative sickness absence.

    Science.gov (United States)

    Böhm, D; Stock Gissendanner, S; Finkeldey, F; John, S M; Werfel, T; Diepgen, T L; Breuer, K

    2014-10-01

    Stress is known to activate or exacerbate dermatoses, but the relationships between chronic stress, job-related stress, and sickness absence among occupational hand eczema (OHE) patients are inadequately understood. We investigated whether chronic stress or burnout symptoms were associated with cumulative sickness absence in patients with OHE, and which factors predicted sickness absence in a model including measures of job-related and chronic stress. Correlations of these factors were examined in employed adult inpatients with a history of sickness absence due to OHE in a retrospective, cross-sectional, explorative study, which assessed chronic stress (Trier Inventory for the Assessment of Chronic Stress), burnout (Shirom Melamed Burnout Measure), clinical symptom severity (Osnabrück Hand Eczema Severity Index), perceived symptom severity, demographic characteristics, and cumulative days of sickness absence. The study group consisted of 122 patients. OHE symptoms were not more severe among patients experiencing greater stress and burnout. Women reported higher levels of chronic stress on some measures. Cumulative days of sickness absence correlated with individual dimensions of job-related stress and, in multiple regression analysis, with an overall measure of chronic stress. Chronic stress is thus an additional factor predicting cumulative sickness absence among severely affected OHE patients. Other relevant factors in this study sample included the 'cognitive weariness' subscale of the Shirom Melamed Burnout Measure and the physical component summary score of the SF-36, a measure of health-related quality of life. Prevention and rehabilitation should take job stress into consideration in multidisciplinary treatment strategies for severely affected OHE patients. © The Author 2014. Published by Oxford University Press on behalf of the Society of Occupational Medicine.

  17. Science and Societal Partnerships to Address Cumulative Impacts

    OpenAIRE

    Lundquist, Carolyn J.; Fisher, Karen T.; Le Heron, Richard; Lewis, Nick I.; Ellis, Joanne I.; Hewitt, Judi E.; Greenaway, Alison J.; Cartner, Katie J.; Burgess-Jones, Tracey C.; Schiel, David R.; Thrush, Simon F.

    2016-01-01

    Funding and priorities for ocean research are not separate from the underlying sociological, economic, and political landscapes that determine values attributed to ecological systems. Here we present a variation on science prioritization exercises, focussing on inter-disciplinary research questions with the objective of shifting broad scale management practices to better address cumulative impacts and multiple users. Marine scientists in New Zealand from a broad range of scientific and social...

  18. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier, and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds, the difference is considerable. Moreover, with derivatives like call options, the restriction to the mean-variance efficient frontier results in a siza...

  19. Signal anomaly detection using modified CUSUM [cumulative sum] method

    International Nuclear Information System (INIS)

    Morgenstern, V.; Upadhyaya, B.R.; Benedetti, M.

    1988-01-01

    An important aspect of the detection of anomalies in signals is the identification of changes in signal behavior caused by noise, jumps, changes in bandwidth, sudden pulses, and signal bias. A methodology is developed to identify, isolate, and characterize these anomalies using a modification of the cumulative sum (CUSUM) approach. The new algorithm performs anomaly detection at three levels and is implemented on a general-purpose computer. 7 refs., 4 figs.
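    The basic CUSUM scheme that the abstract modifies accumulates deviations from a nominal mean beyond a slack parameter and raises an alarm when either one-sided sum crosses a threshold. A minimal sketch of that standard form (the parameter values and test signal are illustrative, not the paper's):

```python
def cusum(signal, mean, k, h):
    """Two-sided CUSUM detector: accumulate deviations beyond slack `k`
    from the nominal `mean`; flag the first sample where either
    cumulative sum exceeds threshold `h`. Returns the alarm index,
    or None if no anomaly is detected."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(signal):
        s_hi = max(0.0, s_hi + (x - mean) - k)   # upward drift / bias
        s_lo = max(0.0, s_lo - (x - mean) - k)   # downward drift / bias
        if s_hi > h or s_lo > h:
            return i
    return None

# nominal zero-mean signal with a bias jump starting at sample 10
sig = [0.1, -0.2, 0.0, 0.1, -0.1, 0.2, -0.1, 0.0, 0.1, -0.1] + [1.0] * 10
alarm = cusum(sig, mean=0.0, k=0.25, h=2.0)
```

The slack k absorbs benign noise (no alarm on the first ten samples), while the sustained bias accumulates quickly and trips the threshold a few samples after the jump.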

  20. VVER-1000 dominance ratio

    International Nuclear Information System (INIS)

    Gorodkov, S.

    2009-01-01

    The dominance ratio, or more precisely its closeness to unity, is an important characteristic of a large reactor. It allows one to evaluate beforehand the number of source iterations required in deterministic calculations of the power spatial distribution, or the minimal number of histories to be modeled to achieve the desired statistical error level in large-core Monte Carlo calculations. In this work a relatively simple approach to dominance ratio evaluation is proposed; it essentially uses core symmetry. The dependence of the dominance ratio on the neutron flux spatial distribution is demonstrated. (author)
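
The link between the dominance ratio and the number of source iterations can be illustrated with a toy calculation: in power iteration the higher-mode error shrinks by a factor of roughly DR per iteration, so a DR close to unity forces many iterations. The diagonal matrix below is an arbitrary stand-in for the fission operator, not a reactor model:

```python
import numpy as np

def dominance_ratio(A):
    """|lambda_2 / lambda_1| for a matrix A (two largest eigenvalues)."""
    lams = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return lams[1] / lams[0]

def iterations_needed(dr, tol=1e-6):
    """Source iterations for the higher-mode error to decay to tol,
    assuming the error shrinks by a factor of ~dr per iteration."""
    return int(np.ceil(np.log(tol) / np.log(dr)))
```

A DR of 0.95 already requires hundreds of iterations to suppress the error by six orders of magnitude, versus about twenty iterations for DR = 0.5.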

  1. WWER-1000 dominance ratio

    International Nuclear Information System (INIS)

    Gorodkov, S.S.

    2009-01-01

    The dominance ratio, or more precisely its closeness to unity, is an important characteristic of a large reactor. It allows one to evaluate beforehand the number of source iterations required in deterministic calculations of the power spatial distribution, or the minimal number of histories to be modeled to achieve the desired statistical error level in large-core Monte Carlo calculations. In this work a relatively simple approach to dominance ratio evaluation is proposed; it essentially uses core symmetry. The dependence of the dominance ratio on the neutron flux spatial distribution is demonstrated. (Authors)

  2. Problems of describing the cumulative effect in relativistic nuclear physics

    International Nuclear Information System (INIS)

    Baldin, A.M.

    1979-01-01

    The problem of describing the cumulative effect, i.e., particle production on nuclei in the range kinematically forbidden for one-nucleon collisions, is studied. Discrimination of events containing cumulative particles fixes configurations in the wave function of a nucleus in which several nucleons are closely spaced and their quark-parton components are collectivized. For the cumulative processes under consideration, large distances between quarks are very important. The fundamental facts and theoretical interpretation of quantum field theory and of condensed media theory in relativistic nuclear physics are presented in brief. Collisions of relativistic nuclei with low momentum transfers are considered in a fast-moving coordinate system. The basic parameter determining this type of collision is the binding energy of nucleons in nuclei. It has been shown that the short-range correlation model provides a good description of many characteristics of multiple particle production and may be regarded as an approximate universal property of hadron interactions

  3. Dynamic prediction of cumulative incidence functions by direct binomial regression.

    Science.gov (United States)

    Grand, Mia K; de Witte, Theo J M; Putter, Hein

    2018-03-25

    In recent years there have been a series of advances in the field of dynamic prediction. Among those is the development of methods for dynamic prediction of the cumulative incidence function in a competing risk setting. These models enable the predictions to be updated as time progresses and more information becomes available; for example, when a patient comes back for a follow-up visit after completing a year of treatment, the risks of death and adverse events may have changed since treatment initiation. One approach to model the cumulative incidence function in competing risks is by direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend the approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data and the performance is investigated in a simulation study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
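
The quantity these regression models predict can also be estimated nonparametrically. A minimal sketch of the classical (Aalen-Johansen type) cumulative incidence estimator under competing risks, without the landmarking or covariate effects of the paper:

```python
import numpy as np

def cumulative_incidence(times, causes, t):
    """Cumulative incidence of cause 1 at time t under competing risks.

    times  : observed event/censoring times
    causes : 0 = censored, 1 = cause of interest, 2, 3, ... = competing
    Accumulates S(u-) * d1(u) / Y(u) over event times u <= t, where S
    is the overall (all-cause) Kaplan-Meier survival and Y the number
    still at risk just before u.
    """
    times = np.asarray(times, float)
    causes = np.asarray(causes, int)
    surv, cif = 1.0, 0.0
    for u in np.unique(times[causes > 0]):   # distinct event times
        if u > t:
            break
        at_risk = np.sum(times >= u)
        d1 = np.sum((times == u) & (causes == 1))
        d_all = np.sum((times == u) & (causes > 0))
        cif += surv * d1 / at_risk
        surv *= 1.0 - d_all / at_risk
    return cif
```

With no censoring, the estimate at the last event time reduces to the observed fraction of cause-1 events.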

  4. Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner

    Directory of Open Access Journals (Sweden)

    Margaret M. MacDonell

    2013-01-01

    Full Text Available The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities.

  5. Energy Current Cumulants in One-Dimensional Systems in Equilibrium

    Science.gov (United States)

    Dhar, Abhishek; Saito, Keiji; Roy, Anjan

    2018-06-01

    A recent theory based on fluctuating hydrodynamics predicts that one-dimensional interacting systems with particle, momentum, and energy conservation exhibit anomalous transport that falls into two main universality classes. The classification is based on behavior of equilibrium dynamical correlations of the conserved quantities. One class is characterized by sound modes with Kardar-Parisi-Zhang scaling, while the second class has diffusive sound modes. The heat mode follows Lévy statistics, with different exponents for the two classes. Here we consider heat current fluctuations in two specific systems, which are expected to be in the above two universality classes, namely, a hard particle gas with Hamiltonian dynamics and a harmonic chain with momentum conserving stochastic dynamics. Numerical simulations show completely different system-size dependence of current cumulants in these two systems. We explain this numerical observation using a phenomenological model of Lévy walkers with inputs from fluctuating hydrodynamics. This consistently explains the system-size dependence of heat current fluctuations. For the latter system, we derive the cumulant-generating function from a more microscopic theory, which also gives the same system-size dependence of cumulants.
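
Current cumulants of the kind compared across system sizes can be estimated directly from simulation samples via central moments. A minimal numpy sketch (population-style, biased estimators) of the first four cumulants:

```python
import numpy as np

def sample_cumulants(x):
    """First four sample cumulants from central moments:
    c1 = mean, c2 = m2, c3 = m3, c4 = m4 - 3*m2**2."""
    x = np.asarray(x, float)
    mu = x.mean()
    m2 = np.mean((x - mu) ** 2)
    m3 = np.mean((x - mu) ** 3)
    m4 = np.mean((x - mu) ** 4)
    return mu, m2, m3, m4 - 3.0 * m2**2
```

For a symmetric two-point sample {-1, +1} the odd cumulants vanish, c2 = 1, and c4 = 1 - 3 = -2, matching the Bernoulli-like kurtosis.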

  6. Preference, resistance to change, and the cumulative decision model.

    Science.gov (United States)

    Grace, Randolph C

    2018-01-01

    According to behavioral momentum theory (Nevin & Grace, 2000a), preference in concurrent chains and resistance to change in multiple schedules are independent measures of a common construct representing reinforcement history. Here I review the original studies on preference and resistance to change in which reinforcement variables were manipulated parametrically, conducted by Nevin, Grace and colleagues between 1997 and 2002, as well as more recent research. The cumulative decision model proposed by Grace and colleagues for concurrent chains is shown to provide a good account of both preference and resistance to change, and is able to predict the increased sensitivity to reinforcer rate and magnitude observed with constant-duration components. Residuals from fits of the cumulative decision model to preference and resistance to change data were positively correlated, supporting the prediction of behavioral momentum theory. Although some questions remain, the learning process assumed by the cumulative decision model, in which outcomes are compared against a criterion that represents the average outcome value in the current context, may provide a plausible model for the acquisition of differential resistance to change. © 2018 Society for the Experimental Analysis of Behavior.

  7. Stakeholder attitudes towards cumulative and aggregate exposure assessment of pesticides.

    Science.gov (United States)

    Verbeke, Wim; Van Loo, Ellen J; Vanhonacker, Filiep; Delcour, Ilse; Spanoghe, Pieter; van Klaveren, Jacob D

    2015-05-01

    This study evaluates the attitudes and perspectives of different stakeholder groups (agricultural producers, pesticide manufacturers, trading companies, retailers, regulators, food safety authorities, scientists and NGOs) towards the concepts of cumulative and aggregate exposure assessment of pesticides by means of qualitative in-depth interviews (n = 15) and a quantitative stakeholder survey (n = 65). The stakeholders involved generally agreed that the use of chemical pesticides is needed, primarily to feed the growing world population, while clearly acknowledging the problematic nature of human exposure to pesticide residues. Current monitoring was generally perceived to be adequate, but the timeliness and consistency of monitoring practices across countries were questioned. The concept of cumulative exposure assessment was better understood by stakeholders than the concept of aggregate exposure assessment. Identified pitfalls were data availability, data limitations, sources and ways of dealing with uncertainties, as well as information and training needs. Regulators and food safety authorities were perceived as the stakeholder groups for whom cumulative and aggregate pesticide exposure assessment methods and tools would be most useful and acceptable. Insights obtained from this exploratory study have been integrated in the development of targeted and stakeholder-tailored dissemination and training programmes that were implemented within the EU-FP7 project ACROPOLIS. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A discussion about maximum uranium concentration in digestion solution of U3O8 type uranium ore concentrate

    International Nuclear Information System (INIS)

    Xia Dechang; Liu Chao

    2012-01-01

    On the basis of discussing the influence of each single factor on the maximum uranium concentration in digestion solution, the degree of influence of factors such as U content, H2O content, and the mass ratio of P to U was compared and analyzed. The results indicate that the relationship between U content and maximum uranium concentration in digestion solution is one of direct proportion: when the U content increases by 1%, the maximum uranium concentration in digestion solution increases by 4.8%-5.7%. The relationship between H2O content and maximum uranium concentration in digestion solution is inverse: the maximum uranium concentration decreases by 46.1-55.2 g/L when the H2O content increases by 1%. The relationship between the mass ratio of P to U and maximum uranium concentration in digestion solution is likewise inverse: the maximum uranium concentration decreases by 116.0-181.0 g/L when the mass ratio of P to U increases by 0.1%. When the U content equals 62.5% and the influence of the mass ratio of P to U is not considered, the maximum uranium concentration in digestion solution equals 1 578 g/L; when the mass ratio of P to U equals 0.35%, the maximum uranium concentration decreases to 716 g/L, a decrease of 54.6%. The mass ratio of P to U in U3O8 type uranium ore concentrate is therefore the main controlling factor. (authors)

  9. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.

  10. Evolution of costly explicit memory and cumulative culture.

    Science.gov (United States)

    Nakamaru, Mayuko

    2016-06-21

    Humans can acquire new information and modify it (cumulative culture) based on their learning and memory abilities, especially explicit memory, through the processes of encoding, consolidation, storage, and retrieval. Explicit memory is categorized into semantic and episodic memories. Animals have semantic memory, while episodic memory is unique to humans and essential for innovation and the evolution of culture. As both episodic and semantic memory are needed for innovation, the evolution of explicit memory influences the evolution of culture. However, previous theoretical studies have shown that environmental fluctuations influence the evolution of imitation (social learning) and innovation (individual learning) and assume that memory is not an evolutionary trait. If individuals can store and retrieve acquired information properly, they can modify it and innovate new information. Therefore, being able to store and retrieve information is essential from the perspective of cultural evolution. However, if both storage and retrieval were too costly, forgetting and relearning would have an advantage over storing and retrieving acquired information. In this study, using mathematical analysis and individual-based simulations, we investigate whether cumulative culture can promote the coevolution of costly memory and social and individual learning, assuming that cumulative culture improves the fitness of each individual. The conclusions are: (1) without cumulative culture, a social learning cost is essential for the evolution of storage-retrieval. Costly storage-retrieval can evolve with individual learning but costly social learning does not evolve. When low-cost social learning evolves, the repetition of forgetting and learning is favored more than the evolution of costly storage-retrieval, even though a cultural trait improves the fitness. 
(2) When cumulative culture exists and improves fitness, storage-retrieval can evolve with social and/or individual learning, which

  11. Volunteering and health benefits in general adults: cumulative effects and forms.

    Science.gov (United States)

    Yeung, Jerf W K; Zhang, Zhuoni; Kim, Tae Yeun

    2017-07-11

    Although the health benefits of volunteering have been well documented, no research has examined its cumulative effects according to other-oriented and self-oriented volunteering on multiple health outcomes in the general adult public. This study examined other-oriented and self-oriented volunteering in cumulative contribution to health outcomes (mental and physical health, life satisfaction, social well-being and depression). Data were drawn from the Survey of Texas Adults 2004, which contains a statewide population-based sample of adults (n = 1504). Multivariate linear regression and Wald test of parameters equivalence constraint were used to test the relationships. Both forms of volunteering were significantly related to better health outcomes (odds ratios = 3.66% to 11.11%), except the effect of self-oriented volunteering on depression. Other-oriented volunteering was found to have better health benefits than self-oriented volunteering. Volunteering should be promoted by public health, education and policy practitioners as a kind of healthy lifestyle, especially for the social subgroups of elders, ethnic minorities, those with little education, single people, and unemployed people, who generally have poorer health and less participation in volunteering.

  12. A Graphite Isotope Ratio Method: A Primer on Estimating Plutonium Production in Graphite Moderated Reactors

    International Nuclear Information System (INIS)

    Gesh, Christopher J.

    2004-01-01

    The Graphite Isotope Ratio Method (GIRM) is a technique used to estimate the total plutonium production in a graphite-moderated reactor. The cumulative plutonium production in that reactor can be accurately determined by measuring neutron irradiation induced isotopic ratio changes in certain impurity elements within the graphite moderator. The method does not require detailed knowledge of a reactor's operating history, although that knowledge can decrease the uncertainty of the production estimate. The basic premise of the Graphite Isotope Ratio Method is that the fluence in non-fuel core components is directly related to the cumulative plutonium production in the nuclear fuel
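
The premise amounts to inverting two monotone relations: measured impurity isotope ratio → fluence → cumulative plutonium production. A schematic sketch with entirely made-up lookup tables (real GIRM calibrations come from detailed reactor physics calculations, not from this paper):

```python
import numpy as np

# Hypothetical monotone calibration tables (illustrative numbers only).
ratio_tab   = np.array([1.00, 0.80, 0.60, 0.45, 0.30])  # impurity isotope ratio
fluence_tab = np.array([0.0,  1.0,  2.0,  3.0,  4.0])   # fluence, arbitrary units
pu_tab      = np.array([0.0,  0.9,  1.7,  2.4,  3.0])   # Pu produced, arbitrary units

def pu_production(measured_ratio):
    """Map a measured isotope ratio to cumulative Pu production
    via fluence, by linear interpolation on both tables.
    np.interp requires increasing x, so the decreasing ratio
    axis is reversed."""
    fluence = np.interp(measured_ratio, ratio_tab[::-1], fluence_tab[::-1])
    return np.interp(fluence, fluence_tab, pu_tab)
```

An unirradiated sample (ratio 1.00) maps to zero production, and a strongly depleted ratio maps to the top of the table.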

  13. El Carreto o Cumulá - Aspidosperma Dugandii Standl

    Directory of Open Access Journals (Sweden)

    Dugand Armando

    1944-03-01

    Full Text Available Common names: Carreto (Atlántico, Bolívar, Magdalena); Cumulá (Cundinamarca, Tolima). According to Dr. Emilio Robledo (Lecciones de Bot. ed. 3, 2: 544. 1939), the name Carreto is also used in Puerto Berrío (Antioquia). The same author (loc. cit.) gives the name Comulá for an undetermined species of Viburnum in Mariquita (Tolima), and J. M. Duque, referring to the same plant and locality (in Bot. Gen. Colomb. 340, 356. 1943), attributes this common name to Aspidosperma ellipticum Rusby. However, the wood samples of Cumulá or Comulá that I have examined, from the Mariquita region (one of which was recently sent to me by the distinguished ichthyologist Mr. Cecil Miles), undoubtedly belong to A. Dugandii Standl. Moreover, Santiago Cortés (Fl. Colomb. 206. 1898; ed. 2: 239. 1912) cites the Cumulá "of Anapoima and other places along the Magdalena river", saying that it belongs to the Leguminosae, but this author's very brief description of the wood, "orange-colored and notable for its density, hardness, and resistance to moisture", leads me to believe that it is the same Cumulá recently collected in Tocaima, since that town is located a few kilometers from Anapoima.

  14. Detecting isotopic ratio outliers

    International Nuclear Information System (INIS)

    Bayne, C.K.; Smith, D.H.

    1985-01-01

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers. 6 refs., 3 figs., 3 tabs
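
Iteratively reweighted Poisson regression of the kind proposed can be sketched with a plain IRLS loop for a log-linear model; this is the generic GLM fitting algorithm, not the authors' implementation:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a log-linear Poisson model by iteratively reweighted
    least squares. X: (n, p) design matrix, y: nonnegative counts.
    Each step solves the weighted normal equations with weights
    W = mu and working response z = eta + (y - mu)/mu."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu            # linearized response
        W = mu                             # Poisson variance = mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta
```

When the observations lie exactly on a log-linear curve, the loop recovers the generating coefficients; on real count data it converges to the maximum likelihood estimate, whose residuals then flag candidate outliers.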

  15. Detecting isotopic ratio outliers

    Science.gov (United States)

    Bayne, C. K.; Smith, D. H.

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers.

  16. Detecting isotopic ratio outliers

    International Nuclear Information System (INIS)

    Bayne, C.K.; Smith, D.H.

    1986-01-01

    An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers

  17. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  18. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  19. Seeking the epoch of maximum luminosity for dusty quasars

    International Nuclear Information System (INIS)

    Vardanyan, Valeri; Weedman, Daniel; Sargsyan, Lusine

    2014-01-01

    Infrared luminosities νLν(7.8 μm) arising from dust reradiation are determined for Sloan Digital Sky Survey (SDSS) quasars with 1.4 < z < 5. These luminosities do not reach a maximum at any redshift z < 5, reaching a plateau for z ≳ 3 with maximum luminosity νLν(7.8 μm) ≳ 10^47 erg s^-1; luminosity functions show one quasar Gpc^-3 having νLν(7.8 μm) > 10^46.6 erg s^-1 for all 2 < z < 5, so the epoch of maximum luminosity has not yet been identified at any redshift below 5. The most ultraviolet-luminous quasars, defined by rest frame νLν(0.25 μm), have the largest values of the ratio νLν(0.25 μm)/νLν(7.8 μm), with a maximum ratio at z = 2.9. From these results, we conclude that the quasars most luminous in the ultraviolet have the smallest dust content and appear luminous primarily because of lessened extinction. Observed ultraviolet/infrared luminosity ratios are used to define 'obscured' quasars as those having >5 mag of ultraviolet extinction. We present a new summary of obscured quasars discovered with the Spitzer Infrared Spectrograph and determine the infrared luminosity function of these obscured quasars at z ∼ 2.1. This is compared with infrared luminosity functions of optically discovered, unobscured quasars in the SDSS and in the AGN and Galaxy Evolution Survey. The comparison indicates comparable numbers of obscured and unobscured quasars at z ∼ 2.1, with a possible excess of obscured quasars at fainter luminosities.

  20. Deriving Light Interception and Biomass from Spectral Reflectance Ratio

    DEFF Research Database (Denmark)

    Christensen, Svend; Goudriaan, J.

    1993-01-01

    was calculated as the ratio between infrared (790–810 nm) and red (640–660 nm) reflectance. The cultivars form different canopy structures. However, a regression analysis did not show any cultivar effect on the relation between RVI and fPAR. The predicted fPAR from frequently measured RVI was used to calculate...... the product of daily fPAR and incoming PAR (cumulative PAR interception) in all spring barley cultivars grown in monoculture and in mixture with oil seed rape (Brassica napus). A regression analysis showed that the relation between cumulative intercepted PAR and total above ground biomass was the same in all...... monocultures and mixtures. The ratio α of incremental dry matter and intercepted PAR was normally 2.4 g MJ−1, but it declined below this value when temperatures fell below 12°C....
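
The pipeline described, reflectance ratio → fPAR → cumulative intercepted PAR → biomass, can be sketched as follows. The linear RVI-to-fPAR calibration coefficients a and b below are hypothetical placeholders (the paper fits this relation by regression), while α = 2.4 g MJ⁻¹ is the value quoted above:

```python
import numpy as np

def rvi(nir, red):
    """Ratio vegetation index: near-infrared over red reflectance."""
    return np.asarray(nir, float) / np.asarray(red, float)

def biomass_from_reflectance(nir, red, par_daily, a=-0.2, b=0.1, alpha=2.4):
    """Total above-ground biomass (g m^-2) from daily reflectance.

    fPAR is taken as a clipped linear function of RVI (a and b are
    hypothetical calibration coefficients, not the paper's fit);
    biomass is alpha times cumulative intercepted PAR (MJ m^-2).
    """
    fpar = np.clip(a + b * rvi(nir, red), 0.0, 0.95)
    cum_par = np.sum(fpar * np.asarray(par_daily, float))  # MJ m^-2
    return alpha * cum_par
```

With two days of data, fPAR of 0.2 and 0.6 and 10 MJ m⁻² of incoming PAR per day give 8 MJ m⁻² intercepted, hence 19.2 g m⁻² of dry matter at α = 2.4 g MJ⁻¹.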

  1. A wavelet filtering method for cumulative gamma spectroscopy used in wear measurements

    International Nuclear Information System (INIS)

    Bianchi, Davide; Lenauer, Claudia; Betz, Gerhard; Vernes, András

    2017-01-01

    Continuous ultra-mild wear quantification using radioactive isotopes involves measuring very low amounts of activity in limited time intervals. This results in gamma spectra with a poor signal-to-noise ratio and hence very scattered wear data, especially during running-in, where wear is intrinsically low. Therefore, advanced filtering methods that reduce the scattering of the wear data and make the calculation of the main peak area more accurate are mandatory. An energy-time dependent threshold for wavelet detail coefficients, based on Poisson statistics and using a combined Barwell law for the estimation of the average photon counting rate, is then introduced. In this manner, it was shown that the accuracy of running-in wear quantification is enhanced. - Highlights: • Time-dependent Poisson statistics. • Wavelet-based filtering of cumulative gamma spectra. • Improvement of low wear analysis.
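
A one-level Haar version of Poisson-aware detail thresholding can be sketched as follows; the threshold rule here (k times the square root of the local mean count) is a simplification of the paper's energy-time dependent threshold, not its actual algorithm:

```python
import numpy as np

def haar_denoise(counts, k=3.0):
    """One-level Haar transform of a count spectrum; zero out detail
    coefficients smaller than k standard deviations of the expected
    Poisson noise (sigma ~ sqrt(local mean count)), then invert."""
    x = np.asarray(counts, float)
    assert len(x) % 2 == 0, "even length required"
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    local_mean = (x[0::2] + x[1::2]) / 2.0
    d[np.abs(d) < k * np.sqrt(local_mean)] = 0.0
    y = np.empty_like(x)                     # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

With the threshold set to zero the transform inverts exactly; with a Poisson-scale threshold, small channel-to-channel fluctuations are smoothed toward the local mean while genuine peaks survive.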

  2. The dose-response relationship between cumulative lifting load and lumbar disk degeneration based on magnetic resonance imaging findings.

    Science.gov (United States)

    Hung, Yu-Ju; Shih, Tiffany T-F; Chen, Bang-Bin; Hwang, Yaw-Huei; Ma, Li-Ping; Huang, Wen-Chuan; Liou, Saou-Hsing; Ho, Ing-Kang; Guo, Yue L

    2014-11-01

    Lumbar disk degeneration (LDD) has been related to heavy physical loading. However, the quantification of the exposure has been controversial, and the dose-response relationship with the LDD has not been established. The purpose of this study was to investigate the dose-response relationship between lifetime cumulative lifting load and LDD. This was a cross-sectional study. Every participant received assessments with a questionnaire, magnetic resonance imaging (MRI) of the lumbar spine, and estimation of lumbar disk compression load. The MRI assessments included assessment of disk dehydration, annulus tear, disk height narrowing, bulging, protrusion, extrusion, sequestration, degenerative and spondylolytic spondylolisthesis, foramina narrowing, and nerve root compression on each lumbar disk level. The compression load was predicted using a biomechanical software system. A total of 553 participants were recruited in this study and categorized into tertiles by cumulative lifting load (i.e., low, intermediate, and high lifting load). The best dose-response relationships were found at the L5-S1 disk level, in which high cumulative lifting load was associated with elevated odds ratios of 2.5 (95% confidence interval [95% CI]=1.5, 4.1) for dehydration and 4.1 (95% CI=1.9, 10.1) for disk height narrowing compared with low lifting load. Participants exposed to intermediate lifting load had an increased odds ratio of 2.1 (95% CI=1.3, 3.3) for bulging compared with low lifting load. The tests for trend were significant. There is no "gold standard" assessment tool for measuring the lumbar compression load. The results suggest a dose-response relationship between cumulative lifting load and LDD. © 2014 American Physical Therapy Association.

  3. Mismatch or cumulative stress : Toward an integrated hypothesis of programming effects

    NARCIS (Netherlands)

    Nederhof, Esther; Schmidt, Mathias V.

    2012-01-01

    This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the

  4. Determination of fission gas yields from isotope ratios

    DEFF Research Database (Denmark)

    Mogensen, Mogens Bjerg

    1983-01-01

    This paper describes a method of calculating the actual fission yield of Kr and Xe in nuclear fuel including the effect of neutron capture reactions and decay. The bases for this calculation are the cumulative yields (ref. 1) of Kr and Xe isotopes (or pairs of isotopes) which are unaffected...... by neutron capture reactions, and measured Kr and Xe isotope ratios. Also the burnup contribution from the different fissile heavy isotopes must be known in order to get accurate fission gas yields....

  5. Science and societal partnerships to address cumulative impacts

    Directory of Open Access Journals (Sweden)

    Carolyn J Lundquist

    2016-02-01

    Full Text Available Funding and priorities for ocean research are not separate from the underlying sociological, economic, and political landscapes that determine values attributed to ecological systems. Here we present a variation on science prioritisation exercises, focussing on inter-disciplinary research questions with the objective of shifting broad scale management practices to better address cumulative impacts and multiple users. Marine scientists in New Zealand from a broad range of scientific and social-scientific backgrounds ranked 48 statements of research priorities. At a follow up workshop, participants discussed five over-arching themes based on survey results. These themes were used to develop mechanisms to increase the relevance and efficiency of scientific research while acknowledging socio-economic and political drivers of research agendas in New Zealand’s ocean ecosystems. Overarching messages included the need to: (1) determine the conditions under which ‘surprises’ (sudden and substantive undesirable changes) are likely to occur and the socio-ecological implications of such changes; (2) develop methodologies to reveal the complex and cumulative effects of change in marine systems, and their implications for resource use, stewardship, and restoration; (3) assess potential solutions to management issues that balance long-term and short-term benefits and encompass societal engagement in decision-making; (4) establish effective and appropriately resourced institutional networks to foster collaborative, solution-focused marine science; and (5) establish cross-disciplinary dialogues to translate diverse scientific and social-scientific knowledge into innovative regulatory, social and economic practice. In the face of multiple uses and cumulative stressors, ocean management frameworks must be adapted to build a collaborative framework across science, governance and society that can help stakeholders navigate uncertainties and socio-ecological surprises.

  6. Cumulative risk hypothesis: Predicting and preventing child maltreatment recidivism.

    Science.gov (United States)

    Solomon, David; Åsberg, Kia; Peer, Samuel; Prince, Gwendolyn

    2016-08-01

    Although Child Protective Services (CPS) and other child welfare agencies aim to prevent further maltreatment in cases of child abuse and neglect, recidivism is common. Having a better understanding of recidivism predictors could aid in preventing additional instances of maltreatment. A previous study identified two CPS interventions that predicted recidivism: psychotherapy for the parent, which was related to a reduced risk of recidivism, and temporary removal of the child from the parent's custody, which was related to an increased recidivism risk. However, counter to expectations, this previous study did not identify any other specific risk factors related to maltreatment recidivism. For the current study, it was hypothesized that (a) cumulative risk (i.e., the total number of risk factors) would significantly predict maltreatment recidivism above and beyond intervention variables in a sample of CPS case files and that (b) therapy for the parent would be related to a reduced likelihood of recidivism. Because it was believed that the relation between temporary removal of a child from the parent's custody and maltreatment recidivism is explained by cumulative risk, the study also hypothesized that the relation between temporary removal of the child from the parent's custody and recidivism would be mediated by cumulative risk. After performing a hierarchical logistic regression analysis, the first two hypotheses were supported, and an additional predictor, psychotherapy for the child, also was related to reduced chances of recidivism. However, Hypothesis 3 was not supported, as risk did not significantly mediate the relation between temporary removal and recidivism.

  7. Cumulative or delayed nephrotoxicity after cisplatin (DDP) treatment.

    Science.gov (United States)

    Pinnarò, P; Ruggeri, E M; Carlini, P; Giovannelli, M; Cognetti, F

    1986-04-30

    The present retrospective study reports data regarding renal toxicity in 115 patients (63 males, 52 females; median age, 56 years) who received cumulative doses of cisplatin (DDP) greater than or equal to 200 mg/m2. DDP was administered alone or in combination at a dose of 50-70 mg/m2 in 91 patients, and at a dose of 100 mg/m2 in 22 patients. Two patients, after progression of ovarian carcinoma treated with conventional doses of DDP, received 4 and 2 courses, respectively, of high-dose DDP (40 mg/m2 for 5 days) in hypertonic saline. The median number of DDP courses was 6 (range 2-14), and the median cumulative dose was 350 mg/m2 (range, 200-1200). Serum creatinine and urea nitrogen were determined before initiating the treatment and again 13-16 days after each administration. The incidence of azotemia (creatinine levels that exceeded 1.5 mg/dl) was similar before (7.8%) and after (6.1%) DDP doses of 200 mg/m2. Azotemia appears to be related to the association of DDP with other potentially nephrotoxic antineoplastic drugs (methotrexate) more than to the dose per course of DDP. Of 59 patients followed for 2 months or more after discontinuing the DDP treatment, 3 (5.1%) presented creatinine values higher than 1.5 mg/dl. The data do not support a higher incidence of nephrotoxicity in patients receiving higher cumulative doses of DDP and confirm that increases in serum creatinine levels may occur some time after discontinuation of the drug.

  8. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...
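    A brute-force baseline, quadratic in the number of vertices, illustrates the quantity being optimized. Note it checks vertex pairs only, whereas the true maximum detour ranges over all points of G, so it yields a lower bound; the graph and names below are hypothetical, not from the paper.

```python
import heapq
import math

def max_vertex_detour(points, edges):
    """Lower-bound the maximum detour of a plane graph by checking vertex
    pairs only. `points` maps vertex -> (x, y); `edges` lists vertex pairs;
    each edge is weighted by the Euclidean length of its segment."""
    adj = {v: [] for v in points}
    for u, v in edges:
        w = math.dist(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dijkstra(src):
        # standard Dijkstra over the Euclidean-weighted graph
        dist = {v: math.inf for v in points}
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    best = 1.0
    for u in points:
        dist = dijkstra(u)
        for v in points:
            if v != u:
                best = max(best, dist[v] / math.dist(points[u], points[v]))
    return best

# Three sides of a unit square: the detour between the two open corners
# is 3 (graph distance 3 vs. straight-line distance 1).
square = {'a': (0, 0), 'b': (1, 0), 'c': (1, 1), 'd': (0, 1)}
detour = max_vertex_detour(square, [('a', 'b'), ('b', 'c'), ('c', 'd')])  # 3.0
```

The subquadratic algorithm of the paper improves on exactly this all-pairs pattern.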

  9. Difference and ratio plots

    DEFF Research Database (Denmark)

    Svendsen, Anders Jørgen; Holmskov, U; Bro, Peter

    1995-01-01

    … hitherto unnoted differences between controls and patients with either rheumatoid arthritis or systemic lupus erythematosus. For this we use simple, but unconventional, graphic representations of the data, based on difference plots and ratio plots. Differences between patients with Burkitt's lymphoma … and systemic lupus erythematosus from another previously published study (Macanovic, M. and Lachmann, P.J. (1979) Clin. Exp. Immunol. 38, 274) are also represented using ratio plots. Our observations indicate that analysis by regression may often be misleading.

  10. Cumulative exposure to phthalates from phthalate-containing drug products

    DEFF Research Database (Denmark)

    Ennis, Zandra Nymand; Broe, Anne; Pottegård, Anton

    2018-01-01

    … to quantify annual cumulated phthalate exposure from drug products among users of phthalate-containing oral medications in Denmark throughout the period of 2004-2016. METHODS: We conducted a Danish nationwide cohort study using The Danish National Prescription Registry and an internal database held … the European regulatory limit of exposure ranging between 380-1710 mg/year throughout the study period. Lithium-products constituted the majority of dibutyl phthalate exposure. Diethyl phthalate exposure, mainly caused by erythromycin, theophylline and diclofenac products, did not exceed the EMA regulatory …

  11. Lyapunov exponent of the random frequency oscillator: cumulant expansion approach

    International Nuclear Information System (INIS)

    Anteneodo, C; Vallejos, R O

    2010-01-01

    We consider a one-dimensional harmonic oscillator with a random frequency, focusing on both the standard and the generalized Lyapunov exponents, λ and λ*, respectively. We discuss the difficulties that arise in the numerical calculation of λ* in the case of strong intermittency. When the frequency corresponds to an Ornstein-Uhlenbeck process, we compute analytically λ* by using a cumulant expansion including up to the fourth order. Connections with the problem of finding an analytical estimate for the largest Lyapunov exponent of a many-body system with smooth interactions are discussed.
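    The standard exponent λ can also be estimated by direct simulation rather than by the cumulant expansion. The sketch below is an illustrative assumption, not the paper's method: it integrates x'' = -ω(t)²x with an Ornstein-Uhlenbeck modulation of the frequency and measures the mean log-growth rate of the phase-space norm, renormalizing periodically to avoid overflow. Parameter values are arbitrary.

```python
import math
import random

def lyapunov_estimate(tau=1.0, sigma=0.5, omega0=1.0, dt=1e-3,
                      steps=1_000_000, seed=1):
    """Estimate the standard Lyapunov exponent of x'' = -(omega0 + xi(t))^2 x,
    where xi is an Ornstein-Uhlenbeck process with correlation time tau and
    stationary standard deviation sigma."""
    rng = random.Random(seed)
    x, v, xi = 1.0, 0.0, 0.0
    log_growth = 0.0
    a = math.exp(-dt / tau)               # exact OU decay factor per step
    b = sigma * math.sqrt(1.0 - a * a)    # keeps the stationary std at sigma
    for _ in range(steps):
        xi = a * xi + b * rng.gauss(0.0, 1.0)
        w2 = (omega0 + xi) ** 2
        v -= w2 * x * dt                  # symplectic Euler step
        x += v * dt
        r = math.hypot(x, v)
        if r > 1e6 or r < 1e-6:           # renormalize, accumulate log-growth
            log_growth += math.log(r)
            x, v = x / r, v / r
    return (log_growth + math.log(math.hypot(x, v))) / (steps * dt)
```

Parametric noise makes λ positive for a generic random-frequency oscillator, which a run of this estimator reproduces.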

  12. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  13. Numerical simulation of explosive magnetic cumulative generator EMG-720

    Energy Technology Data Exchange (ETDEWEB)

    Deryugin, Yu N; Zelenskij, D K; Kazakova, I F; Kargin, V I; Mironychev, P V; Pikar, A S; Popkov, N F; Ryaslov, E A; Ryzhatskova, E G [All-Russian Research Inst. of Experimental Physics, Sarov (Russian Federation)

    1997-12-31

    The paper discusses the methods and results of numerical simulations used in the development of a helical-coaxial explosive magnetic cumulative generator (EMG) with the stator up to 720 mm in diameter. In the process of designing, separate units were numerically modeled, as was the generator operation with a constant inductive-ohmic load. The 2-D processes of the armature acceleration by the explosion products were modeled as well as those of the formation of the sliding high-current contact between the armature and stator's insulated turns. The problem of the armature integrity in the region of the detonation waves collision was numerically analyzed. 8 figs., 2 refs.

  14. Cumulative exergy losses associated with the production of lead metal

    Energy Technology Data Exchange (ETDEWEB)

    Szargut, J [Technical Univ. of Silesia, Gliwice (PL). Inst. of Thermal-Engineering; Morris, D R [New Brunswick Univ., Fredericton, NB (Canada). Dept. of Chemical Engineering

    1990-08-01

    Cumulative exergy losses result from the irreversibility of the links of a technological network leading from raw materials and fuels extracted from nature to the product under consideration. The sum of these losses can be apportioned into partial exergy losses (associated with particular links of the technological network) or into constituent exergy losses (associated with constituent subprocesses of the network). The methods of calculation of the partial and constituent exergy losses are presented, taking into account the useful byproducts substituting the major products of other processes. Analyses of partial and constituent exergy losses are made for the technological network of lead metal production. (author).

  15. Ozone pollution and ozone biomonitoring in European cities. Part I: Ozone concentrations and cumulative exposure indices at urban and suburban sites

    DEFF Research Database (Denmark)

    Klumpp, A.; Ansel, W.; Klumpp, G.

    2006-01-01

    In the frame of a European research project on air quality in urban agglomerations, data on ozone concentrations from 23 automated urban and suburban monitoring stations in 11 cities from seven countries were analysed and evaluated. Daily and summer mean and maximum concentrations were computed based on hourly mean values, and cumulative ozone exposure indices (Accumulated exposure Over a Threshold of 40 ppb (AOT40), AOT20) were calculated. The diurnal profiles showed a characteristic pattern in most city centres, with minimum values in the early morning hours and a strong rise during the morning … At suburban sites, by contrast, maximum values were lower and diurnal variation was much smaller. Based on ozone concentrations as well as on cumulative exposure indices, a clear north-south gradient in ozone pollution was found, with increasing levels from northern and northwestern sites to central and southern European sites …

  16. Maximum Work of Free-Piston Stirling Engine Generators

    Science.gov (United States)

    Kojima, Shinji

    2017-04-01

    Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

  17. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
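    As an illustration of the general idea only (not the MXLKID algorithm; the model, names, and values below are hypothetical), maximizing a Gaussian log-likelihood over one model parameter reduces, for known noise variance, to minimizing the sum of squared residuals, which a simple line search can handle:

```python
import math
import random

def neg_log_likelihood(a, times, data, sigma):
    """Gaussian negative log-likelihood for the toy model x(t) = exp(-a*t)."""
    sse = sum((y - math.exp(-a * t)) ** 2 for t, y in zip(times, data))
    return 0.5 * sse / sigma ** 2 + len(data) * math.log(sigma * math.sqrt(2 * math.pi))

def identify_decay_rate(times, data, sigma, lo=0.0, hi=5.0, iters=60):
    """Golden-section search for the rate `a` that maximizes the likelihood
    (minimizes the negative log-likelihood, assumed unimodal on [lo, hi])."""
    f = lambda a: neg_log_likelihood(a, times, data, sigma)
    g = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        c, d = hi - g * (hi - lo), lo + g * (hi - lo)
        if f(c) < f(d):
            hi = d
        else:
            lo = c
    return 0.5 * (lo + hi)

# Synthetic noisy measurements of x(t) = exp(-1.3 t)
rng = random.Random(0)
times = [0.1 * k for k in range(50)]
data = [math.exp(-1.3 * t) + rng.gauss(0.0, 0.01) for t in times]
a_hat = identify_decay_rate(times, data, sigma=0.01)   # close to 1.3
```

MXLKID performs the multi-parameter, nonlinear-dynamics version of this maximization with gradient-based optimization instead of a scalar search.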

  18. Modulation Classification of Satellite Communication Signals Using Cumulants and Neural Networks

    Science.gov (United States)

    Smith, Aaron; Evans, Michael; Downey, Joseph

    2017-01-01

    National Aeronautics and Space Administration (NASA)'s future communication architecture is evaluating cognitive technologies and increased system intelligence. These technologies are expected to reduce the operational complexity of the network, increase science data return, and reduce interference to self and others. In order to increase situational awareness, signal classification algorithms could be applied to identify users and distinguish sources of interference. A significant amount of previous work has been done in the area of automatic signal classification for military and commercial applications. As a preliminary step, we seek to develop a system with the ability to discern signals typically encountered in satellite communication. Proposed is an automatic modulation classifier which utilizes higher order statistics (cumulants) and an estimate of the signal-to-noise ratio. These features are extracted from baseband symbols and then processed by a neural network for classification. The modulation types considered are phase-shift keying (PSK), amplitude and phase-shift keying (APSK), and quadrature amplitude modulation (QAM). Physical layer properties specific to the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) standard, such as pilots and variable ring ratios, are also considered. This paper will provide simulation results of a candidate modulation classifier, and performance will be evaluated over a range of signal-to-noise ratios, frequency offsets, and nonlinear amplifier distortions.
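    The classic cumulant features behind such classifiers are straightforward to compute from baseband symbols. The sketch below (a generic textbook feature extractor, not the paper's classifier) evaluates the normalized fourth-order cumulants, whose ideal values separate modulations, e.g. |C40|/C21² is 2 for BPSK and 1 for QPSK:

```python
import math

def cumulant_features(symbols):
    """Normalized fourth-order cumulants |C40|/C21^2 and C42/C21^2 of
    zero-mean complex baseband symbols (standard modulation features)."""
    n = len(symbols)
    m20 = sum(s * s for s in symbols) / n            # E[y^2]
    m21 = sum(abs(s) ** 2 for s in symbols) / n      # E[|y|^2], signal power
    m40 = sum(s ** 4 for s in symbols) / n           # E[y^4]
    m42 = sum(abs(s) ** 4 for s in symbols) / n      # E[|y|^4]
    c40 = m40 - 3 * m20 * m20
    c42 = m42 - abs(m20) ** 2 - 2 * m21 * m21
    return abs(c40) / m21 ** 2, c42 / m21 ** 2

# Ideal constellations: BPSK -> (2.0, -2.0), QPSK -> (1.0, -1.0)
bpsk = [1 + 0j, -1 + 0j]
qpsk = [complex(math.cos(a), math.sin(a)) for a in
        (math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4)]
bpsk_feats = cumulant_features(bpsk)
qpsk_feats = cumulant_features(qpsk)
```

In a classifier these features, computed over noisy symbol blocks, would feed the neural network alongside an SNR estimate.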

  19. Expansion formulae for characteristics of cumulative cost in finite horizon production models

    NARCIS (Netherlands)

    Ayhan, H.; Schlegel, S.

    2001-01-01

    We consider the expected value and the tail probability of cumulative shortage and holding cost (i.e. the probability that cumulative cost is more than a certain value) in finite horizon production models. An exact expression is provided for the expected value of the cumulative cost for general

  20. The cumulative risk of false-positive screening results across screening centres in the Norwegian Breast Cancer Screening Program

    Energy Technology Data Exchange (ETDEWEB)

    Roman, M., E-mail: Marta.Roman@kreftregisteret.no [Cancer Registry of Norway, Oslo (Norway); Department of Women and Children’s Health, Oslo University Hospital, Oslo (Norway); Skaane, P., E-mail: PERSK@ous-hf.no [Department of Radiology, Oslo University Hospital Ullevaal, University of Oslo, Oslo (Norway); Hofvind, S., E-mail: Solveig.Hofvind@kreftregisteret.no [Cancer Registry of Norway, Oslo (Norway); Oslo and Akershus University College of Applied Sciences, Faculty of Health Science, Oslo (Norway)

    2014-09-15

    Highlights: • We found variation in early performance measures across screening centres. • Radiologists’ performance may play a key role in the variability. • Potential to improve the effectiveness of breast cancer screening programs. • Continuous surveillance of screening centres and radiologists is essential. - Abstract: Background: Recall for assessment in mammographic screening entails an inevitable number of false-positive screening results. This study aimed to investigate the variation in the cumulative risk of a false positive screening result and the positive predictive value across the screening centres in the Norwegian Breast Cancer Screening Program. Methods: We studied 618,636 women aged 50–69 years who underwent 2,090,575 screening exams (1996–2010). Recall rate, positive predictive value, rate of screen-detected cancer, and the cumulative risk of a false positive screening result, without and with invasive procedures, across the screening centres were calculated. Generalized linear models were used to estimate the probability of a false positive screening result and to compute the cumulative false-positive risk for up to ten biennial screening examinations. Results: The cumulative risk of a false-positive screening exam varied from 10.7% (95% CI: 9.4–12.0%) to 41.5% (95% CI: 34.1–48.9%) across screening centres, with a highest to lowest ratio of 3.9 (95% CI: 3.7–4.0). The highest to lowest ratio for the cumulative risk of undergoing an invasive procedure with a benign outcome was 4.3 (95% CI: 4.0–4.6). The positive predictive value of recall varied between 12.0% (95% CI: 11.0–12.9%) and 19.9% (95% CI: 18.3–21.5%), with a highest to lowest ratio of 1.7 (95% CI: 1.5–1.9). Conclusions: A substantial variation in the performance measures across the screening centres in the Norwegian Breast Cancer Screening Program was identified, despite similar administration, procedures, and quality assurance requirements. Differences in the
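    Under the simplifying assumption of a constant, independent per-round false-positive probability p (the study's generalized linear models relax this), the cumulative risk after k rounds is 1 - (1 - p)^k, which can be sketched and inverted as follows:

```python
def cumulative_fp_risk(p_per_round, rounds=10):
    """Cumulative probability of at least one false positive over `rounds`
    independent screens, each with per-round false-positive rate p."""
    return 1.0 - (1.0 - p_per_round) ** rounds

def per_round_rate(cumulative_risk, rounds=10):
    """Invert: per-round rate implied by a cumulative risk over `rounds`."""
    return 1.0 - (1.0 - cumulative_risk) ** (1.0 / rounds)

# The reported 10.7% and 41.5% cumulative risks over ten biennial screens
# correspond, under this independence assumption, to per-round rates of
# roughly 1.1% and 5.2%.
lo_rate = per_round_rate(0.107)
hi_rate = per_round_rate(0.415)
```

This back-of-the-envelope inversion shows why modest differences in per-screen recall rates compound into the large cumulative-risk spread the study reports.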

  2. Effects of the hydraulic conductivity of the matrix/macropore interface on cumulative infiltrations into dual-permeability media

    Science.gov (United States)

    Lassabatere, L.; Peyrard, X.; Angulo-Jaramillo, R.; Simunek, J.

    2009-12-01

    Modeling of water infiltration into the vadose zone is important for a better understanding of the movement of water-transported contaminants. There is a great need to take into account soil heterogeneity and, in particular, the presence of macropores or cracks that could generate preferential flow. Several mathematical models have been proposed to describe unsaturated flow through heterogeneous soils. The dual-permeability model (referred to as the 2K model) assumes that flow is governed by the Richards equation in both porous regions (matrix and macropores). Water can be exchanged between the two regions following a first-order rate law. Although several studies have dealt with such modeling, no study has evaluated the influence of the hydraulic conductivity of the matrix/macropore interface on cumulative water infiltration; this is the focus of this study. An analytical scaling method reveals the role of the following main parameters for given boundary and initial conditions: the saturated hydraulic conductivity ratio (R_Ks), the water pressure scale parameter ratio (R_hg), the saturated volumetric water content ratio (R_θs), and the shape parameters of the water retention and hydraulic conductivity functions. The last essential parameter is the interfacial hydraulic conductivity (Ka) between the macropore and matrix regions. The scaled 2K flow equations were solved using HYDRUS-1D 4.09 for the specific case of water infiltrating into an initially uniform soil profile with a zero pressure head at the soil surface. The sensitivity of water infiltration was studied for different sets of scale parameters (R_Ks, R_hg, R_θs, and shape parameters) and the scaled interfacial conductivity (Ka). Numerical results illustrate two extreme behaviors. When the interfacial conductivity is zero (i.e., no water exchange), water infiltrates separately into the matrix and macropore regions, producing a much deeper moisture front in the macropore domain. In the opposite case

  3. Fatigue during maximal sprint cycling: unique role of cumulative contraction cycles.

    Science.gov (United States)

    Tomas, Aleksandar; Ross, Emma Z; Martin, James C

    2010-07-01

    Maximal cycling power has been reported to decrease more rapidly when performed with increased pedaling rates. Increasing pedaling rate imposes two constraints on the neuromuscular system: 1) decreased time for muscle excitation and relaxation and 2) increased muscle shortening velocity. Using two crank lengths allows the effects of time and shortening velocity to be evaluated separately. We conducted this investigation to determine whether the time available for excitation and relaxation or the muscle shortening velocity was mainly responsible for the increased rate of fatigue previously observed with increased pedaling rates and to evaluate the influence of other possible fatiguing constraints. Seven trained cyclists performed 30-s maximal isokinetic cycling trials using two crank lengths: 120 and 220 mm. Pedaling rate was optimized for maximum power for each crank length: 135 rpm for the 120-mm cranks (1.7 m x s(-1) pedal speed) and 109 rpm for the 220-mm cranks (2.5 m x s(-1) pedal speed). Power was recorded with an SRM power meter. Crank length did not affect peak power: 999 +/- 276 W for the 120-mm crank versus 1001 +/- 289 W for the 220-mm crank. Fatigue index was greater (58.6% +/- 3.7% vs 52.4% +/- 4.8%, P < 0.01), and total work was less (20.0 +/- 1.8 vs 21.4 +/- 2.0 kJ, P < 0.01) with the higher-pedaling-rate, shorter-crank condition. Regression analyses indicated that the power for the two conditions was most highly related to cumulative work (r2 = 0.94) and to cumulative cycles (r2 = 0.99). These results support previous findings and confirm that pedaling rate, rather than pedal speed, was the main factor influencing fatigue. Our novel result was that power decreased by a similar increment with each crank revolution for the two conditions, indicating that each maximal muscular contraction induced a similar amount of fatigue.

  4. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)

  5. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core, using the condition of maximum neutron flux while complying with thermal limitations. This paper proved that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The maximum principle theory applied here is well suited to this application. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  6. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; probably the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and additional constraints applied to resolve the redundancy are the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  7. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  8. Cumulative hierarchies and computability over universes of sets

    Directory of Open Access Journals (Sweden)

    Domenico Cantone

    2008-05-01

    Various metamathematical investigations, beginning with Fraenkel’s historical proof of the independence of the axiom of choice, called for suitable definitions of hierarchical universes of sets. This led to the discovery of such important cumulative structures as the one singled out by von Neumann (generally taken as the universe of all sets) and Gödel’s universe of the so-called constructibles. Variants of those are exploited occasionally in studies concerning the foundations of analysis (according to Abraham Robinson’s approach), or concerning non-well-founded sets. We hence offer a systematic presentation of these many structures, partly motivated by their relevance and pervasiveness in mathematics. As we report, numerous properties of hierarchy-related notions such as rank have been verified with the assistance of the ÆtnaNova proof-checker. Through SETL and Maple implementations of procedures which effectively handle Ackermann’s hereditarily finite sets, we illustrate a particularly significant case among those in which the entities which form a universe of sets can be algorithmically constructed and manipulated; thereby, the fruitful bearing on pure mathematics of cumulative set hierarchies ramifies into the realms of theoretical computer science and algorithmics.
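    Ackermann's bijection between natural numbers and hereditarily finite sets, under which n encodes the set of all m whose m-th bit of n is 1, makes these entities directly computable. A minimal sketch (illustrative only, unrelated to the ÆtnaNova/SETL implementations mentioned above) that also computes the von Neumann rank:

```python
def decode(n):
    """Ackermann decoding: n -> hereditarily finite set (nested frozensets)."""
    members = set()
    m = 0
    while n:
        if n & 1:
            members.add(decode(m))
        n >>= 1
        m += 1
    return frozenset(members)

def encode(s):
    """Inverse: hereditarily finite set -> its Ackermann number."""
    return sum(2 ** encode(m) for m in s)

def rank(s):
    """von Neumann rank: least ordinal exceeding the ranks of all members."""
    return 0 if not s else 1 + max(rank(m) for m in s)

# 0 -> {}, 1 -> {{}}, 3 -> {{}, {{}}} (the von Neumann ordinal 2, rank 2)
two = decode(3)
```

The encoding makes membership, rank, and the finite levels of the cumulative hierarchy all mechanically checkable, the point the abstract makes about algorithmic manipulation of set universes.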

  9. Cumulative Effects Assessment: Linking Social, Ecological, and Governance Dimensions

    Directory of Open Access Journals (Sweden)

    Marian Weber

    2012-06-01

    Setting social, economic, and ecological objectives is ultimately a process of social choice informed by science. In this special feature we provide a multidisciplinary framework for the use of cumulative effects assessment in land use planning. Forest ecosystems are facing considerable challenges driven by population growth and increasing demands for resources. In a suite of case studies that span the boreal forest of Western Canada to the interior Atlantic forest of Paraguay, we show how transparent and defensible methods for scenario analysis can be applied in data-limited regions and how social dimensions of land use change can be incorporated in these methods, particularly in aboriginal communities that have lived in these ecosystems for generations. The case studies explore how scenario analysis can be used to evaluate various land use options and highlight specific challenges with identifying social and ecological responses, determining thresholds and targets for land use, and integrating local and traditional knowledge in land use planning. Given that land use planning is ultimately a value-laden and often politically charged process, we also provide some perspective on various collective and expert-based processes for identifying cumulative impacts and thresholds. The need for good science to inform, and be informed by, culturally appropriate democratic processes calls for well-planned and multifaceted approaches, both to achieve an informed understanding, among residents and governments alike, of the interactive and additive changes caused by development, and to design action agendas to influence such change at the ecological and social level.

  10. Maternal distress and parenting in the context of cumulative disadvantage.

    Science.gov (United States)

    Arditti, Joyce; Burton, Linda; Neeves-Botelho, Sara

    2010-06-01

    This article presents an emergent conceptual model of the features and links between cumulative disadvantage, maternal distress, and parenting practices in low-income families in which parental incarceration has occurred. The model emerged from the integration of extant conceptual and empirical research with grounded theory analysis of longitudinal ethnographic data from Welfare, Children, and Families: A Three-City Study. Fourteen exemplar family cases were used in the analysis. Results indicated that mothers in these families experienced life in the context of cumulative disadvantage, reporting a cascade of difficulties characterized by neighborhood worries, provider concerns, bureaucratic difficulties, violent intimate relationships, and the inability to meet children's needs. Mothers, however, also had an intense desire to protect their children, and to make up for past mistakes. Although, in response to high levels of maternal distress and disadvantage, most mothers exhibited harsh discipline of their children, some mothers transformed their distress by advocating for their children under difficult circumstances. Women's use of harsh discipline and advocacy was not necessarily an "either/or" phenomenon as half of the mothers included in our analysis exhibited both harsh discipline and care/advocacy behaviors. Maternal distress characterized by substance use, while connected to harsh disciplinary behavior, did not preclude mothers engaging in positive parenting behaviors.

  11. Cumulant expansions for measuring water exchange using diffusion MRI

    Science.gov (United States)

    Ning, Lipeng; Nilsson, Markus; Lasič, Samo; Westin, Carl-Fredrik; Rathi, Yogesh

    2018-02-01

    The rate of water exchange across cell membranes is a parameter of biological interest and can be measured by diffusion magnetic resonance imaging (dMRI). In this work, we investigate a stochastic model for the diffusion-and-exchange of water molecules. This model provides a general solution for the temporal evolution of dMRI signal using any type of gradient waveform, thereby generalizing the signal expressions for the Kärger model. Moreover, we also derive a general nth order cumulant expansion of the dMRI signal accounting for water exchange, which has not been explored in earlier studies. Based on this analytical expression, we compute the cumulant expansion for dMRI signals for the special case of single diffusion encoding (SDE) and double diffusion encoding (DDE) sequences. Our results provide a theoretical guideline on optimizing experimental parameters for SDE and DDE sequences, respectively. Moreover, we show that DDE signals are more sensitive to water exchange at short time scales but provide less attenuation at long time scales than SDE signals. Our theoretical analysis is also validated using Monte Carlo simulations on synthetic structures.
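    The following is a minimal, self-contained sketch of the kind of low-order cumulant expansion the abstract refers to, applied to an SDE-style signal. It fits the standard second-order (diffusional kurtosis) form ln S(b) ≈ -bD + (1/6)b²D²K to a synthetic two-compartment signal with no exchange; all parameter values are hypothetical illustrations, not values from the paper.

    ```python
    import math

    def two_compartment_signal(b, f=0.5, Da=2.0, Db=0.5):
        """Synthetic SDE signal from two non-exchanging pools.
        Hypothetical parameters: diffusivities in um^2/ms, b in ms/um^2."""
        return f * math.exp(-b * Da) + (1 - f) * math.exp(-b * Db)

    def fit_cumulants(bvals, signals):
        """Least-squares fit of ln S(b) = c1*b + c2*b^2 (no intercept, since S(0)=1),
        returning D = -c1 and apparent kurtosis K = 6*c2 / D^2."""
        y = [math.log(s) for s in signals]
        s2 = sum(b**2 for b in bvals)
        s3 = sum(b**3 for b in bvals)
        s4 = sum(b**4 for b in bvals)
        t1 = sum(b * yi for b, yi in zip(bvals, y))
        t2 = sum(b**2 * yi for b, yi in zip(bvals, y))
        det = s2 * s4 - s3 * s3              # 2x2 normal equations, Cramer's rule
        c1 = (t1 * s4 - t2 * s3) / det
        c2 = (s2 * t2 - s3 * t1) / det
        D = -c1
        K = 6.0 * c2 / D**2
        return D, K

    bvals = [0.05 * i for i in range(1, 7)]          # low-b regime, up to 0.3 ms/um^2
    signals = [two_compartment_signal(b) for b in bvals]
    D, K = fit_cumulants(bvals, signals)
    # For f=0.5, Da=2.0, Db=0.5: mean diffusivity = 1.25 and
    # K = 3*Var(D)/mean(D)^2 = 3*0.5625/1.5625 = 1.08
    ```

    The recovered D is the compartment-weighted mean diffusivity and K reflects the variance of diffusivities, which is what makes the low-b cumulant coefficients sensitive to microstructure (and, with exchange, to exchange rate).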

  12. A Cumulant-based Analysis of Nonlinear Magnetospheric Dynamics

    International Nuclear Information System (INIS)

    Johnson, Jay R.; Wing, Simon

    2004-01-01

    Understanding magnetospheric dynamics and predicting future behavior of the magnetosphere is of great practical interest because it could potentially help to avert catastrophic loss of power and communications. In order to build good predictive models it is necessary to understand the most critical nonlinear dependencies among observed plasma and electromagnetic field variables in the coupled solar wind/magnetosphere system. In this work, we apply a cumulant-based information dynamical measure to characterize the nonlinear dynamics underlying the time evolution of the Dst and Kp geomagnetic indices, given solar wind magnetic field and plasma input. We examine the underlying dynamics of the system, the temporal statistical dependencies, the degree of nonlinearity, and the rate of information loss. We find a significant solar cycle dependence in the underlying dynamics of the system with greater nonlinearity for solar minimum. The cumulant-based approach also has the advantage that it is reliable even in the case of small data sets and therefore it is possible to avoid the assumption of stationarity, which allows for a measure of predictability even when the underlying system dynamics may change character. Evaluations of several leading Kp prediction models indicate that their performances are sub-optimal during active times. We discuss possible improvements of these models based on this nonparametric approach

  13. Strategy for an assessment of cumulative ecological impacts

    International Nuclear Information System (INIS)

    Boucher, P.; Collins, J.; Nelsen, J.

    1995-01-01

    The US Department of Energy (DOE) has developed a strategy to conduct an assessment of the cumulative ecological impact of operations at the 300-square-mile Savannah River Site. This facility has over 400 identified waste units and contains several large watersheds. In addition to individual waste units, residual contamination must be evaluated in terms of its contribution to ecological risks at zonal and site-wide levels. DOE must be able to generate sufficient information to facilitate cleanup in the immediate future within the context of a site-wide ecological risk assessment that may not be completed for many years. The strategy superimposes a more global perspective on ecological assessments of individual waste units and provides strategic underpinnings for conducting individual screening-level and baseline risk assessments at the operable unit and zonal or watershed levels. It identifies ecological endpoints and risk assessment tools appropriate for each level of the risk assessment. In addition, it provides a clear mechanism for identifying clean sites through screening-level risk assessments and for elevating sites with residual contamination to the next level of assessment. Whereas screening-level and operable unit-level risk assessments relate directly to cleanup, zonal and site-wide assessments verify or confirm the overall effectiveness of remediation. The latter assessments must show, for example, whether multiple small areas with residual pesticide contamination that have minimal individual impact would pose a cumulative risk from bioaccumulation because they are within the habitat range of an ecological receptor

  14. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  15. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through suitable seasonal auto regressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through Box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contain sharp peaks in almost all the years, while it is not true for the minimum temperature series, so both the series are modelled separately. The possible SARIMA model has been chosen based on observing autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmic transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for monthly average maximum and minimum temperature series based on minimum Bayesian information criteria. The model parameters are obtained using maximum-likelihood method with the help of standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through ACF, PACF, IACF, and p values of Ljung-Box test statistic of residuals and using normal diagnostic checking through the kernel and normal density curves of histogram and Q-Q plot. Finally, the forecasting of monthly maximum and minimum temperature patterns of India for the next 3 years has been noticed with the help of selected model.
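    The D=1, s=12 component of the selected SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model corresponds to a lag-12 seasonal difference of the (log-transformed) series. A minimal sketch of that transform, on a synthetic periodic series (the data and shape here are illustrative assumptions, not the study's Indian temperature records):

    ```python
    import math

    def seasonal_difference(series, lag=12):
        """Lag-s seasonal difference: y'_t = y_t - y_{t-s}.
        This is the D=1, s=12 step of a SARIMA (p,0,q)x(P,1,Q)12 model."""
        return [series[t] - series[t - lag] for t in range(lag, len(series))]

    # Synthetic monthly 'temperature-like' series with a period-12 seasonal cycle
    n = 48
    series = [25.0 + 5.0 * math.sin(2 * math.pi * t / 12) for t in range(n)]
    log_series = [math.log(x) for x in series]       # log transform, as in the study
    diffed = seasonal_difference(log_series, lag=12)

    # A purely periodic series is removed (up to float error) by lag-12 differencing
    print(max(abs(d) for d in diffed))  # ~0
    ```

    In practice one would fit the full model with a library such as statsmodels' SARIMAX rather than by hand; the point here is only what the seasonal-differencing order does to the series.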

  16. Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints

    Directory of Open Access Journals (Sweden)

    Xiaojian Yu

    2014-01-01

    Full Text Available This paper deals with the problem of optimal portfolio strategy under the constraints of rolling economic maximum drawdown. A more practical strategy is developed by using rolling Sharpe ratio in computing the allocation proportion, in contrast to existing models. Besides, another novel strategy named "REDP strategy" is further proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. Simulation tests show that the REDP strategy keeps the portfolio within the drawdown constraint and significantly outperforms the other strategies. An empirical comparison of the performances of different strategies is carried out using 23 years of monthly data on SPTR, DJUBS, and the 3-month T-bill. The investment cases of a single risky asset and two risky assets are both studied in this paper. Empirical results indicate that the REDP strategy successfully controls the maximum drawdown within the given limit and performs best in both return and risk.
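    A rolling drawdown constraint of the kind discussed above is built on a windowed peak-to-current drawdown. A minimal sketch (window length and wealth path are hypothetical, and this computes a plain rolling drawdown on wealth, not the paper's full economic-drawdown construction):

    ```python
    def rolling_max_drawdown(wealth, window):
        """Drawdown at t relative to the running peak over the last `window`
        observations (inclusive): dd_t = 1 - W_t / max(W_{t-window+1..t})."""
        dds = []
        for t in range(len(wealth)):
            peak = max(wealth[max(0, t - window + 1): t + 1])
            dds.append(1.0 - wealth[t] / peak)
        return dds

    wealth = [100, 110, 105, 120, 90, 95]
    dds = rolling_max_drawdown(wealth, window=3)
    # At t=4 the rolling peak over [105, 120, 90] is 120, so dd = 1 - 90/120 = 0.25
    print(dds[4])  # -> 0.25
    ```

    A rolling window (rather than an all-time peak) lets the constraint "forget" old peaks, which is what makes such strategies re-enter risky assets after long-past losses.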

  17. Temperature diagnostic line ratios of Fe XVII

    International Nuclear Information System (INIS)

    Raymond, J.C.; Smith, B.W.; Los Alamos National Lab., NM)

    1986-01-01

    Based on extensive calculations of the excitation rates of Fe XVII, four temperature-sensitive line ratios are investigated, paying special attention to the contribution of resonances to the excitation rates and to the contributions of dielectronic recombination satellites to the observed line intensities. The predictions are compared to FPCS observations of Puppis A and to Solar Maximum Mission (SMM) and SOLEX observations of the sun. Temperature-sensitive line ratios are also computed for emitting gas covering a broad temperature range. It is found that each ratio yields a differently weighted average for the temperature and that this accounts for some apparent discrepancies between the theoretical ratios and solar observations. The effects of this weighting on the Fe XVII temperature diagnostics and on the analogous Fe XXIV/Fe XXV satellite line temperature diagnostics are discussed. 27 references

  18. An inflammation-based cumulative prognostic score system in patients with diffuse large B cell lymphoma in rituximab era.

    Science.gov (United States)

    Sun, Feifei; Zhu, Jia; Lu, Suying; Zhen, Zijun; Wang, Juan; Huang, Junting; Ding, Zonghui; Zeng, Musheng; Sun, Xiaofei

    2018-01-02

    Systemic inflammatory parameters are associated with poor outcomes in patients with malignancies. Several inflammation-based cumulative prognostic score systems have been established for various solid tumors. However, there are few inflammation-based cumulative prognostic score systems for patients with diffuse large B cell lymphoma (DLBCL). We retrospectively reviewed 564 adult DLBCL patients who had received rituximab, cyclophosphamide, doxorubicin, vincristine and prednisolone (R-CHOP) therapy between Nov 1, 2006 and Dec 30, 2013 and assessed the prognostic significance of six systemic inflammatory parameters evaluated in previous studies by univariate and multivariate analysis: C-reactive protein (CRP), albumin levels, the lymphocyte-monocyte ratio (LMR), the neutrophil-lymphocyte ratio (NLR), the platelet-lymphocyte ratio (PLR) and fibrinogen levels. Multivariate analysis identified CRP, albumin levels and the LMR as three independent prognostic parameters for overall survival (OS). Based on these three factors, we constructed a novel inflammation-based cumulative prognostic score (ICPS) system. Four risk groups were formed: ICPS = 0, ICPS = 1, ICPS = 2 and ICPS = 3. Further multivariate analysis indicated that the ICPS model is a prognostic score system independent of the International Prognostic Index (IPI) for both progression-free survival (PFS) and OS. Systemic inflammatory status was thus associated with clinical outcomes of patients with DLBCL in the rituximab era. The ICPS model was shown to classify risk groups more accurately than any single inflammatory prognostic parameter. These findings may be useful for identifying candidates for further inflammation-related mechanism research or novel anti-inflammation target therapies.
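    As described, the ICPS is a cumulative count of adverse inflammatory factors (0-3, one point each for CRP, albumin, and LMR). A minimal sketch of such a score; the cut-off values below are entirely hypothetical placeholders, since the study's actual thresholds are not given in the abstract:

    ```python
    def icps(crp, albumin, lmr,
             crp_cut=10.0, alb_cut=35.0, lmr_cut=2.6):
        """Count adverse factors: elevated CRP, low albumin, low LMR.
        All cut-offs here are hypothetical placeholders, not the study's values."""
        score = 0
        if crp > crp_cut:      # mg/L
            score += 1
        if albumin < alb_cut:  # g/L
            score += 1
        if lmr < lmr_cut:
            score += 1
        return score  # 0..3 -> the four risk groups

    print(icps(crp=15.0, albumin=32.0, lmr=3.1))  # -> 2
    ```

    The appeal of such additive scores is that each component is dichotomized once, so the composite is easy to compute at the bedside and defines a small number of ordered risk groups.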

  19. The cumulative effect of smoking at age 50, 60, and 70 on functional ability at age 75

    DEFF Research Database (Denmark)

    Støvring, Nina; Avlund, Kirsten; Schultz-Larsen, Kirsten

    2004-01-01

    AIMS: As elderly people form a steadily growing part of the population in most parts of the world, we are in need of knowledge of the influence of modifiable lifestyle factors on functional ability late in life. This study aims to examine the cumulative impact of smoking from age 50 to 70 on functional ability at age 75. METHODS: 387 men and women born in 1914 and living in seven municipalities in the western part of the County of Copenhagen were followed for 25 years with examinations in 1964, 1974, 1984, and 1989. Associations between smoking and functional ability were examined using multiple ... of accumulating the smoking habits over the examinations. Cumulated former smokers have a larger risk of having reduced functional ability at age 75 (OR: 1.35 (1.13-1.61)) compared with never smokers. The odds ratios of reduced functional ability were 2.46 (1.44-4.17) among cumulated smokers of 1-14 grams ...

  20. The role of cumulative physical work load in symptomatic knee osteoarthritis – a case-control study in Germany

    Directory of Open Access Journals (Sweden)

    Abolmaali Nasreddin

    2008-07-01

    Full Text Available Abstract Objectives To examine the dose-response relationship between cumulative exposure to kneeling and squatting as well as to lifting and carrying of loads and symptomatic knee osteoarthritis (OA) in a population-based case-control study. Methods In five orthopedic clinics and five practices we recruited 295 male patients aged 25 to 70 with radiographically confirmed knee osteoarthritis associated with chronic complaints. A total of 327 male control subjects were recruited. Data were gathered in a structured personal interview. To calculate cumulative exposure, the self-reported duration of kneeling and squatting as well as the duration of lifting and carrying of loads were summed up over the entire working life. Results The results of our study support a dose-response relationship between kneeling/squatting and symptomatic knee osteoarthritis. For a cumulative exposure to kneeling and squatting > 10,800 hours, the risk of having radiographically confirmed knee osteoarthritis as measured by the odds ratio (adjusted for age, region, weight, jogging/athletics, and lifting or carrying of loads) is 2.4 (95% CI 1.1–5.0) compared to unexposed subjects. Lifting and carrying of loads is significantly associated with knee osteoarthritis independent of kneeling or similar activities. Conclusion As the knee osteoarthritis risk is strongly elevated in occupations that involve both kneeling/squatting and heavy lifting/carrying, preventive efforts should particularly focus on these "high-risk occupations".

  1. The rectilinear Steiner ratio

    Directory of Open Access Journals (Sweden)

    PO de Wet

    2005-06-01

    Full Text Available The rectilinear Steiner ratio was shown to be 3/2 by Hwang [Hwang FK, 1976, On Steiner minimal trees with rectilinear distance, SIAM Journal on Applied Mathematics, 30, pp. 104–114.]. We use continuity and introduce restricted point sets to obtain an alternative, short and self-contained proof of this result.

  2. Cumulative effects in Swedish EIA practice - difficulties and obstacles

    International Nuclear Information System (INIS)

    Waernbaeck, Antoienette; Hilding-Rydevik, Tuija

    2009-01-01

    The importance of considering cumulative effects (CE) in the context of environmental assessment is manifested in the EU regulations. The demands on the contents of Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) documents explicitly ask for CE to be described. In Swedish environmental assessment documents, however, CE are rarely described or included. The aim of this paper is to look into the reasons behind this fact in the Swedish context. The paper describes and analyses how actors implementing the EIA and SEA legislation in Sweden perceive the current situation in relation to the legislative demands and the inclusion of cumulative effects. Through semi-structured interviews the following questions have been explored: Is the phenomenon of CE discussed and included in the EIA/SEA process? What do the actors include in the term and concept of CE, and what is their knowledge of it? Which difficulties and obstacles do these actors experience, and what possibilities for inclusion of CE do they see in the EIA/SEA process? A large number of obstacles and hindrances emerged from the interviews. It can be concluded from the analysis that the will to act does seem to exist. A lack of knowledge of how to include cumulative effects and a lack of clear regulations concerning how this should be done seem to be perceived as the main obstacles. Knowledge of the term and the phenomenon is furthermore quite narrow and not all-encompassing. The interviewees experience a lack of procedures in place, and also seem to lack knowledge of methods for how to actually work, in practice, with CE and how to include CE in the EIA/SEA process.
It can be stated that the existence of this poor picture in relation to practice concerning CE in the context of impact assessment mirrors the existing and so far rather vague demands in respect of the inclusion and assessment of CE in Swedish EIA and SEA legislation, regulations, guidelines and

  3. Technical Note: SCUDA: A software platform for cumulative dose assessment

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seyoun; McNutt, Todd; Quon, Harry; Wong, John; Lee, Junghoon, E-mail: rshekhar@childrensnational.org, E-mail: junghoon@jhu.edu [Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland 21231 (United States); Plishker, William [IGI Technologies, Inc., College Park, Maryland 20742 (United States); Shekhar, Raj, E-mail: rshekhar@childrensnational.org, E-mail: junghoon@jhu.edu [IGI Technologies, Inc., College Park, Maryland 20742 and Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington, DC 20010 (United States)

    2016-10-15

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides

  4. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.

  5. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.
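    To make the modelled quantity concrete, here is a minimal sketch of a cumulative incidence function for competing risks in its simplest nonparametric, uncensored empirical form. This is not the paper's semiparametric ML mixture estimator; with censoring one would use an Aalen-Johansen-type estimator instead. The toy data are hypothetical.

    ```python
    def empirical_cif(times, causes, cause, t):
        """Empirical cumulative incidence for one event type, no censoring:
        CIF_k(t) = (1/n) * #{i : T_i <= t and cause_i == k}."""
        n = len(times)
        return sum(1 for ti, ci in zip(times, causes) if ti <= t and ci == cause) / n

    times  = [1.0, 2.0, 2.5, 3.0, 4.0, 5.0]
    causes = [1,   2,   1,   1,   2,   1]
    # By t=3.0, three of six subjects have failed from cause 1 -> CIF_1(3.0) = 0.5
    print(empirical_cif(times, causes, cause=1, t=3.0))  # -> 0.5
    ```

    Note the additivity constraint the mixture model enforces by construction: summed over all causes, the cumulative incidences at infinity add up to one (here CIF_1 = 4/6 and CIF_2 = 2/6).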

  6. Cumulative trauma and symptom complexity in children: a path analysis.

    Science.gov (United States)

    Hodges, Monica; Godbout, Natacha; Briere, John; Lanktree, Cheryl; Gilbert, Alicia; Kletzka, Nicole Taylor

    2013-11-01

    Multiple trauma exposures during childhood are associated with a range of psychological symptoms later in life. In this study, we examined whether the total number of different types of trauma experienced by children (cumulative trauma) is associated with the complexity of their subsequent symptomatology, where complexity is defined as the number of different symptom clusters simultaneously elevated into the clinical range. Children's symptoms in six different trauma-related areas (e.g., depression, anger, posttraumatic stress) were reported both by child clients and their caretakers in a clinical sample of 318 children. Path analysis revealed that accumulated exposure to multiple different trauma types predicts symptom complexity as reported by both children and their caretakers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Near-Field Source Localization Using a Special Cumulant Matrix

    Science.gov (United States)

    Cui, Han; Wei, Gang

    A new near-field source localization algorithm based on a uniform linear array was proposed. The proposed algorithm estimates each parameter separately but does not need pairing parameters. It can be divided into two important steps. The first step is bearing-related electric angle estimation based on the ESPRIT algorithm by constructing a special cumulant matrix. The second step is the other electric angle estimation based on the 1-D MUSIC spectrum. It offers much lower computational complexity than the traditional near-field 2-D MUSIC algorithm and has better performance than the high-order ESPRIT algorithm. Simulation results demonstrate that the performance of the proposed algorithm is close to the Cramer-Rao Bound (CRB).
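    Cumulant-matrix methods of this kind are assembled from sample fourth-order cross-cumulants of the array outputs. A minimal sketch of that building block for real, zero-mean sequences (the paper's actual matrix uses particular sensor index combinations, and complex data would additionally involve conjugates, which this omits):

    ```python
    import math

    def cum4(x1, x2, x3, x4):
        """Sample 4th-order cross-cumulant of zero-mean sequences:
        cum(x1,x2,x3,x4) = E[x1 x2 x3 x4] - E[x1 x2]E[x3 x4]
                           - E[x1 x3]E[x2 x4] - E[x1 x4]E[x2 x3]."""
        n = len(x1)
        m = lambda *seqs: sum(math.prod(v) for v in zip(*seqs)) / n
        return (m(x1, x2, x3, x4) - m(x1, x2) * m(x3, x4)
                - m(x1, x3) * m(x2, x4) - m(x1, x4) * m(x2, x3))

    # A symmetric two-point signal (+1/-1) has kurtosis cumulant E[x^4] - 3E[x^2]^2 = -2
    x = [1.0, -1.0, 1.0, -1.0]
    print(cum4(x, x, x, x))  # -> -2.0
    ```

    The practical appeal is that fourth-order cumulants of Gaussian processes vanish, so cumulant matrices suppress additive Gaussian noise that would contaminate covariance-based estimators.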

  8. Cumulative growth of minor hysteresis loops in the Kolmogorov model

    International Nuclear Information System (INIS)

    Meilikhov, E. Z.; Farzetdinova, R. M.

    2013-01-01

    The phenomenon of nonrepeatability of successive remagnetization cycles in Co/M (M = Pt, Pd, Au) multilayer film structures is explained in the framework of the Kolmogorov crystallization model. It is shown that this model of phase transitions can be adapted so as to adequately describe the process of magnetic relaxation in the indicated systems with “memory.” For this purpose, it is necessary to introduce some additional elements into the model, in particular, (i) to take into account the fact that every cycle starts from a state “inherited” from the preceding cycle and (ii) to assume that the rate of growth of a new magnetic phase depends on the cycle number. This modified model provides a quite satisfactory qualitative and quantitative description of all features of successive magnetic relaxation cycles in the system under consideration, including the surprising phenomenon of cumulative growth of minor hysteresis loops.

  9. Cumulative protons in 12C fragmentation at intermediate energy

    International Nuclear Information System (INIS)

    Abramov, B.M.; Alekseev, P.N.; Borodin, Y.A.; Bulychjov, S.A.; Dukhovskoi, I.A.; Khanov, A.I.; Krutenkova, A.P.; Kulikov, V.V.; Martemianov, M.A.; Matsuk, M.A.; Turdakina, E.N.

    2014-01-01

    In the FRAGM experiment at the heavy ion accelerator complex TWAC-ITEP, proton yields at an angle of 3.5 degrees have been measured in the fragmentation of carbon ions on a beryllium target at T0 = 0.3, 0.6, 0.95 and 2.0 GeV/nucleon. The data are presented as invariant proton yields versus the cumulative variable x in the range 0.9 < x < 2.4. Proton spectra cover six orders of magnitude of invariant cross section. They have been analyzed in the framework of a quark cluster fragmentation model. Fragmentation functions of the quark-gluon string model are used. The probabilities of the existence of multi-quark clusters in carbon nuclei are estimated to be 8-12% for six-quark clusters and 0.2-0.6% for nine-quark clusters. (authors)

  10. Ratcheting up the ratchet: on the evolution of cumulative culture.

    Science.gov (United States)

    Tennie, Claudio; Call, Josep; Tomasello, Michael

    2009-08-27

    Some researchers have claimed that chimpanzee and human culture rest on homologous cognitive and learning mechanisms. While clearly there are some homologous mechanisms, we argue here that there are some different mechanisms at work as well. Chimpanzee cultural traditions represent behavioural biases of different populations, all within the species' existing cognitive repertoire (what we call the 'zone of latent solutions') that are generated by founder effects, individual learning and mostly product-oriented (rather than process-oriented) copying. Human culture, in contrast, has the distinctive characteristic that it accumulates modifications over time (what we call the 'ratchet effect'). This difference results from the facts that (i) human social learning is more oriented towards process than product and (ii) unique forms of human cooperation lead to active teaching, social motivations for conformity and normative sanctions against non-conformity. Together, these unique processes of social learning and cooperation lead to humans' unique form of cumulative cultural evolution.

  11. EXPERIMENTAL VALIDATION OF CUMULATIVE SURFACE LOCATION ERROR FOR TURNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Adam K. Kiss

    2016-02-01

    Full Text Available The aim of this study is to create a mechanical model which is suitable to investigate the surface quality in turning processes, based on the Cumulative Surface Location Error (CSLE), which describes the series of the consecutive Surface Location Errors (SLE) in roughing operations. In the established model, the investigated CSLE depends on the current and the previous SLE through the variation of the width of cut. The phenomenon of the system can be described as an implicit discrete map. The stationary Surface Location Error and its bifurcations were analysed and flip-type bifurcation was observed for CSLE. Experimental verification of the theoretical results was carried out.

  12. Ratcheting up the ratchet: on the evolution of cumulative culture

    Science.gov (United States)

    Tennie, Claudio; Call, Josep; Tomasello, Michael

    2009-01-01

    Some researchers have claimed that chimpanzee and human culture rest on homologous cognitive and learning mechanisms. While clearly there are some homologous mechanisms, we argue here that there are some different mechanisms at work as well. Chimpanzee cultural traditions represent behavioural biases of different populations, all within the species’ existing cognitive repertoire (what we call the ‘zone of latent solutions’) that are generated by founder effects, individual learning and mostly product-oriented (rather than process-oriented) copying. Human culture, in contrast, has the distinctive characteristic that it accumulates modifications over time (what we call the ‘ratchet effect’). This difference results from the facts that (i) human social learning is more oriented towards process than product and (ii) unique forms of human cooperation lead to active teaching, social motivations for conformity and normative sanctions against non-conformity. Together, these unique processes of social learning and cooperation lead to humans’ unique form of cumulative cultural evolution. PMID:19620111

  13. Cumulative neutrino background from quasar-driven outflows

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiawei; Loeb, Abraham, E-mail: xiawei.wang@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu [Department of Astronomy, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2016-12-01

    Quasar-driven outflows naturally account for the missing component of the extragalactic γ-ray background through neutral pion production in interactions between protons accelerated by the forward outflow shock and interstellar protons. We study the simultaneous neutrino emission by the same protons. We adopt outflow parameters that best fit the extragalactic γ-ray background data and derive a cumulative neutrino background of ∼10⁻⁷ GeV cm⁻² s⁻¹ sr⁻¹ at neutrino energies E_ν ≳ 10 TeV, which naturally explains the most recent IceCube data without tuning any free parameters. The link between the γ-ray and neutrino emission from quasar outflows can be used to constrain the high-energy physics of strong shocks at cosmological distances.

  14. Using Fuzzy Probability Weights in Cumulative Prospect Theory

    Directory of Open Access Journals (Sweden)

    Užga-Rebrovs Oļegs

    2016-12-01

    Full Text Available During the past years, a rapid growth has been seen in the descriptive approaches to decision choice. As opposed to normative expected utility theory, these approaches are based on the subjective perception of probabilities by the individuals, which takes place in real situations of risky choice. The modelling of this kind of perceptions is made on the basis of probability weighting functions. In cumulative prospect theory, which is the focus of this paper, decision prospect outcome weights are calculated using the obtained probability weights. If the value functions are constructed in the sets of positive and negative outcomes, then, based on the outcome value evaluations and outcome decision weights, generalised evaluations of prospect value are calculated, which are the basis for choosing an optimal prospect.
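    The probability weighting and cumulative decision weights described above can be sketched directly. The weighting function below is the Tversky-Kahneman (1992) form; γ = 0.61 is the commonly cited estimate for gains, used here purely as an illustrative assumption, and the outcome ordering/weight construction is the standard CPT recipe for the gains domain, not necessarily the paper's exact parameterisation.

    ```python
    def w(p, gamma=0.61):
        """Tversky-Kahneman (1992) probability weighting function for gains."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def decision_weights(probs):
        """CPT decision weights for gains, outcomes ordered best-first:
        pi_i = w(p_1 + ... + p_i) - w(p_1 + ... + p_{i-1})."""
        weights, cum_prev = [], 0.0
        for p in probs:
            cum = cum_prev + p
            weights.append(w(cum) - w(cum_prev))
            cum_prev = cum
        return weights

    probs = [0.1, 0.3, 0.6]          # best outcome first
    pi = decision_weights(probs)
    print(round(sum(pi), 10))        # weights telescope to w(1) = 1 -> 1.0
    ```

    The characteristic inverse-S shape of w overweights small probabilities and underweights moderate-to-large ones, which is what drives the divergence between CPT-optimal and expected-utility-optimal execution schedules.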

  15. Modelling the evolution and diversity of cumulative culture

    Science.gov (United States)

    Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo

    2011-01-01

    Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845

  16. Optimal execution with price impact under Cumulative Prospect Theory

    Science.gov (United States)

    Zhao, Jingdong; Zhu, Hongliang; Li, Xindan

    2018-01-01

    Optimal execution of a stock (or portfolio) has been widely studied in academia and in practice over the past decade, and minimizing transaction costs is a critical point. However, few researchers consider the psychological factors for the traders. What are traders truly concerned with - buying low in the paper accounts or buying lower compared to others? We consider the optimal trading strategies in terms of the price impact and Cumulative Prospect Theory and identify some specific properties. Our analyses indicate that a large proportion of the execution volume is distributed at both ends of the transaction time. But the trader's optimal strategies may not be implemented at the same transaction size and speed in different market environments.

  17. Practical management of cumulative anthropogenic impacts with working marine examples

    DEFF Research Database (Denmark)

    Kyhn, Line Anker; Wright, Andrew J.

    2014-01-01

    for petroleum. Human disturbances, including the noise almost ubiquitously associated with human activity, are likely to increase the incidence, magnitude, and duration of adverse effects on marine life, including stress responses. Stress responses have the potential to induce fitness consequences...... on impact can be facilitated through implementation of regular application cycles for project authorization or improved programmatic and aggregated impact assessments that simultaneously consider multiple projects. Cross-company collaborations and a better incorporation of uncertainty into decision making...... could also help limit, if not reduce, cumulative impacts of multiple human activities. These simple management steps may also form the basis of a rudimentary form of marine spatial planning and could be used in support of future ecosystem-based management efforts....

  18. Practical management of cumulative anthropogenic impacts with working marine examples.

    Science.gov (United States)

    Wright, Andrew J; Kyhn, Line A

    2015-04-01

    Human pressure on the environment is expanding and intensifying, especially in coastal and offshore areas. Major contributors to this are the current push for offshore renewable energy sources, which are thought of as environmentally friendly sources of power, as well as the continued demand for petroleum. Human disturbances, including the noise almost ubiquitously associated with human activity, are likely to increase the incidence, magnitude, and duration of adverse effects on marine life, including stress responses. Stress responses have the potential to induce fitness consequences for individuals, which add to more obvious directed takes (e.g., hunting or fishing) to increase the overall population-level impact. To meet the requirements of marine spatial planning and ecosystem-based management, many efforts are ongoing to quantify the cumulative impacts of all human actions on marine species or populations. Meanwhile, regulators face the challenge of managing these accumulating and interacting impacts with limited scientific guidance. We believe there is scientific support for capping the level of impact for (at a minimum) populations in decline or with unknown statuses. This cap on impact can be facilitated through implementation of regular application cycles for project authorization or improved programmatic and aggregated impact assessments that simultaneously consider multiple projects. Cross-company collaborations and a better incorporation of uncertainty into decision making could also help limit, if not reduce, cumulative impacts of multiple human activities. These simple management steps may also form the basis of a rudimentary form of marine spatial planning and could be used in support of future ecosystem-based management efforts. © 2014 Society for Conservation Biology.

  19. County-level cumulative environmental quality associated with cancer incidence.

    Science.gov (United States)

    Jagai, Jyotsna S; Messer, Lynne C; Rappazzo, Kristen M; Gray, Christine L; Grabich, Shannon C; Lobdell, Danelle T

    2017-08-01

    Individual environmental exposures are associated with cancer development; however, environmental exposures occur simultaneously. The Environmental Quality Index (EQI) is a county-level measure of cumulative environmental exposures that occur in 5 domains. The EQI was linked to county-level annual age-adjusted cancer incidence rates from the Surveillance, Epidemiology, and End Results (SEER) Program state cancer profiles. All-site cancer and the top 3 site-specific cancers for male and female subjects were considered. Incident rate differences (IRDs; annual rate difference per 100,000 persons) and 95% confidence intervals (CIs) were estimated using fixed-slope, random intercept multilevel linear regression models. Associations were assessed with domain-specific indices and analyses were stratified by rural/urban status. Comparing the highest quintile/poorest environmental quality with the lowest quintile/best environmental quality for overall EQI, all-site county-level cancer incidence rate was positively associated with poor environmental quality overall (IRD, 38.55; 95% CI, 29.57-47.53) and for male (IRD, 32.60; 95% CI, 16.28-48.91) and female (IRD, 30.34; 95% CI, 20.47-40.21) subjects, indicating a potential increase in cancer incidence with decreasing environmental quality. Rural/urban stratified models demonstrated positive associations comparing the highest with the lowest quintiles for all strata, except the thinly populated/rural stratum and in the metropolitan/urbanized stratum. Prostate and breast cancer demonstrated the strongest positive associations with poor environmental quality. We observed strong positive associations between the EQI and all-site cancer incidence rates, and associations differed by rural/urban status and environmental domain. 
Research focusing on single environmental exposures in cancer development may not address the broader environmental context in which cancers develop, and future research should address cumulative environmental exposures.

  20. Economic and policy implications of the cumulative carbon budget

    Science.gov (United States)

    Allen, M. R.; Otto, F. E. L.; Otto, A.; Hepburn, C.

    2014-12-01

    The importance of cumulative carbon emissions in determining long-term risks of climate change presents considerable challenges to policy makers. The traditional notion of "total CO2-equivalent emissions", which forms the backbone of agreements such as the Kyoto Protocol and the European Emissions Trading System, is fundamentally flawed. Measures to reduce short-lived climate pollutants benefit the current generation, while measures to reduce long-lived climate pollutants benefit future generations, so there is no sense in which they can ever be considered equivalent. Debates over the correct metric used to compute CO2-equivalence are thus entirely moot: both long-lived and short-lived emissions will need to be addressed if all generations are to be protected from dangerous climate change. As far as long-lived climate pollutants are concerned, the latest IPCC report highlights the overwhelming importance of carbon capture and storage in determining the cost of meeting the goal of limiting anthropogenic warming to two degrees. We will show that this importance arises directly from the cumulative carbon budget and the role of CCS as the technology of last resort before economic activity needs to be restricted to meet ambitious climate targets. It highlights the need to increase the rate of CCS deployment by orders of magnitude if the option of avoiding two degrees is to be retained. The difficulty of achieving this speed of deployment through conventional incentives and carbon-pricing mechanisms suggests a need for a much more direct mandatory approach. Despite their theoretical economic inefficiency, the success of recent regulatory measures in achieving greenhouse gas emissions reductions in jurisdictions such as the United States suggests an extension of the regulatory approach could be a more effective and politically acceptable means of achieving adequately rapid CCS deployment than conventional carbon taxes or cap-and-trade systems.

  1. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
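The redshifts above are quoted as equivalent velocities cz. A minimal sketch of the nonrotating general-relativistic surface redshift follows; the mass and radius are generic white-dwarf values chosen for illustration, not the limiting models of Shapiro and Teukolsky.

```python
import math

# Illustrative GR surface redshift of a white dwarf, expressed (as in
# the abstract) as an equivalent velocity c*z in km/s. The 1.0 solar
# mass and 5000 km radius below are assumed generic values.

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def surface_redshift_kms(mass_solar, radius_km):
    """c*z for light escaping from radius R of a nonrotating star."""
    rs = 2 * G * mass_solar * M_SUN / C**2        # Schwarzschild radius, m
    z = 1.0 / math.sqrt(1.0 - rs / (radius_km * 1e3)) - 1.0
    return C * z / 1e3                            # km/s

print(round(surface_redshift_kms(1.0, 5000.0), 1))  # tens of km/s
```

More compact (more massive) configurations push cz toward the few-hundred km/s limits quoted in the abstract.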

  2. Cumulative radiation dose caused by radiologic studies in critically ill trauma patients.

    Science.gov (United States)

    Kim, Patrick K; Gracias, Vicente H; Maidment, Andrew D A; O'Shea, Michael; Reilly, Patrick M; Schwab, C William

    2004-09-01

    Critically ill trauma patients undergo many radiologic studies, but the cumulative radiation dose is unknown. The purpose of this study was to estimate the cumulative effective dose (CED) of radiation resulting from radiologic studies in critically ill trauma patients. The study group was composed of trauma patients at an urban Level I trauma center with surgical intensive care unit length of stay (LOS) greater than 30 days. The radiology records were reviewed. A typical effective dose per study for each type of plain film radiograph, computed tomographic scan, fluoroscopic study, and nuclear medicine study was used to calculate CED. Forty-six patients met criteria. The mean surgical intensive care unit and hospital LOS were 42.7 +/- 14.0 and 59.5 +/- 28.5 days, respectively. The mean Injury Severity Score was 32.2 +/- 15.0. The mean number of studies per patient was 70.1 +/- 29.0 plain film radiographs, 7.8 +/- 4.1 computed tomographic scans, 2.5 +/- 2.6 fluoroscopic studies, and 0.065 +/- 0.33 nuclear medicine study. The mean CED was 106 +/- 59 mSv per patient (range, 11-289 mSv; median, 104 mSv). Among age, mechanism, Injury Severity Score, and LOS, there was no statistically significant predictor of high CED. The mean CED in the study group was 30 times higher than the average yearly radiation dose from all sources for individuals in the United States. The theoretical additional morbidity attributable to radiologic studies was 0.78%. From a radiobiologic perspective, risk-to-benefit ratios of radiologic studies are favorable, given the importance of medical information obtained. Current practice patterns regarding use of radiologic studies appear to be acceptable.
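The CED bookkeeping described above is simple arithmetic: multiply the count of each study type by a typical effective dose per study and sum. A sketch, with per-study doses and counts that are illustrative assumptions rather than the values used by Kim et al.:

```python
# Sketch of cumulative effective dose (CED) accounting. The typical
# per-study doses and the example patient's counts are assumed
# illustrative numbers, not the study's inputs.

TYPICAL_DOSE_MSV = {
    "chest_xray": 0.02,
    "abdomen_ct": 8.0,
    "head_ct": 2.0,
    "fluoroscopy": 10.0,
}

def cumulative_effective_dose(study_counts):
    """Sum of (count * typical effective dose) over study types, in mSv."""
    return sum(TYPICAL_DOSE_MSV[study] * n for study, n in study_counts.items())

patient = {"chest_xray": 60, "abdomen_ct": 5, "head_ct": 3, "fluoroscopy": 2}
print(cumulative_effective_dose(patient))
```

With these assumed doses, a handful of CT and fluoroscopic studies dominates the total despite the far larger number of plain films, mirroring the pattern the abstract reports.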

  3. Depressive Symptoms in College Women: Examining the Cumulative Effect of Childhood and Adulthood Domestic Violence.

    Science.gov (United States)

    Al-Modallal, Hanan

    2016-10-01

    The purpose of this study was to examine the cumulative effect of childhood and adulthood violence on depressive symptoms in a sample of Jordanian college women. Snowball sampling technique was used to recruit the participants. The participants were heterosexual college-aged women between the ages of 18 and 25. The participants were asked about their experiences of childhood violence (including physical violence, sexual violence, psychological violence, and witnessing parental violence), partner violence (including physical partner violence and sexual partner violence), experiences of depressive symptoms, and about other demographic and familial factors as possible predictors for their complaints of depressive symptoms. Multiple linear regression analysis was implemented to identify demographic- and violence-related predictors of their complaints of depressive symptoms. Logistic regression analysis was further performed to identify possible type(s) of violence associated with the increased risk of depressive symptoms. The prevalence of depressive symptoms in this sample was 47.4%. For the violence experience, witnessing parental violence was the most common during childhood, experienced by 40 (41.2%) women, and physical partner violence was the most common in adulthood, experienced by 35 (36.1%) women. Results of logistic regression analysis indicated that experiencing two types of violence (regardless of the time of occurrence) was significant in predicting depressive symptoms (odds ratio [OR] = 3.45, p < .05). Among college women's demographic characteristics, marital status (single vs. engaged), mothers' level of education, income, and smoking were significant in predicting depressive symptoms. Assessment of physical violence and depressive symptoms including the cumulative impact of longer periods of violence on depressive symptoms is recommended to be explored in future studies. © The Author(s) 2015.

  4. Cumulative toxicity of neonicotinoid insecticide mixtures to Chironomus dilutus under acute exposure scenarios.

    Science.gov (United States)

    Maloney, Erin M; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-11-01

    Extensive agricultural use of neonicotinoid insecticide products has resulted in the presence of neonicotinoid mixtures in surface waters worldwide. Although many aquatic insect species are known to be sensitive to neonicotinoids, the impact of neonicotinoid mixtures is poorly understood. In the present study, the cumulative toxicities of binary and ternary mixtures of select neonicotinoids (imidacloprid, clothianidin, and thiamethoxam) were characterized under acute (96-h) exposure scenarios using the larval midge Chironomus dilutus as a representative aquatic insect species. Using the MIXTOX approach, predictive parametric models were fitted and statistically compared with observed toxicity in subsequent mixture tests. Single-compound toxicity tests yielded median lethal concentration (LC50) values of 4.63, 5.93, and 55.34 μg/L for imidacloprid, clothianidin, and thiamethoxam, respectively. Because of the similar modes of action of neonicotinoids, concentration-additive cumulative mixture toxicity was the predicted model. However, we found that imidacloprid-clothianidin mixtures demonstrated response-additive dose-level-dependent synergism, clothianidin-thiamethoxam mixtures demonstrated concentration-additive synergism, and imidacloprid-thiamethoxam mixtures demonstrated response-additive dose-ratio-dependent synergism, with toxicity shifting from antagonism to synergism as the relative concentration of thiamethoxam increased. Imidacloprid-clothianidin-thiamethoxam ternary mixtures demonstrated response-additive synergism. These results indicate that, under acute exposure scenarios, the toxicity of neonicotinoid mixtures to C. dilutus cannot be predicted using the common assumption of additive joint activity. Indeed, the overarching trend of synergistic deviation emphasizes the need for further research into the ecotoxicological effects of neonicotinoid insecticide mixtures in field settings, and the development of better toxicity models for neonicotinoid mixtures.
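The concentration-addition null model used as the baseline in such studies can be written down directly: a mixture with concentration fractions p_i of compounds having individual LC50_i is at its own LC50 when Σ c_i/LC50_i = 1, giving LC50_mix = 1/Σ(p_i/LC50_i). A sketch using the single-compound LC50s from the abstract (the equal-fraction mixture is an illustrative choice; the observed synergism means measured toxicity deviated from this prediction):

```python
# Concentration-addition baseline (the study's null model).
# LC50 values are the single-compound results from the abstract (ug/L).

LC50 = {"imidacloprid": 4.63, "clothianidin": 5.93, "thiamethoxam": 55.34}

def mixture_lc50(fractions):
    """Predicted mixture LC50 under concentration addition."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return 1.0 / sum(p / LC50[name] for name, p in fractions.items())

# Illustrative ternary mixture with equal concentration fractions:
ternary = {name: 1 / 3 for name in LC50}
print(round(mixture_lc50(ternary), 2))
```

Because the prediction is a harmonic-type combination, the most potent component (imidacloprid here) dominates the mixture LC50 even at equal fractions.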

  5. Cumulative Effect of Obesogenic Behaviours on Adiposity in Spanish Children and Adolescents

    Science.gov (United States)

    Schröder, Helmut; Bawaked, Rowaedh Ahmed; Ribas-Barba, Lourdes; Izquierdo-Pulido, Maria; Roman-Viñas, Blanca; Fíto, Montserrat; Serra-Majem, Lluis

    2018-01-01

    Objective Little is known about the cumulative effect of obesogenic behaviours on childhood obesity risk. We determined the cumulative effect on BMI z-score, waist-to-height ratio (WHtR), overweight and abdominal obesity of four lifestyle behaviours that have been linked to obesity. Methods In this cross-sectional analysis, data were obtained from the EnKid study, a representative sample of Spanish youth. The study included 1,614 boys and girls aged 5-18 years. Weight, height and waist circumference were measured. Physical activity (PA), screen time, breakfast consumption and meal frequency were self-reported on structured questionnaires. Obesogenic behaviours were defined as 1 SD from the mean of the WHO reference population. Abdominal obesity was defined as a WHtR ≥ 0.5. Results High screen time was the most prominent obesogenic behaviour (49.7%), followed by low physical activity (22.4%), low meal frequency (14.4%), and skipping breakfast (12.5%). Although 33% of participants were free of all 4 obesogenic behaviours, 1, 2, and 3 or 4 behaviours were reported by 44.5%, 19.3%, and 5.0%, respectively. BMI z-score and WHtR were positively associated (p < 0.001) with increasing numbers of concurrent obesogenic behaviours. The odds of presenting with obesogenic behaviours were significantly higher in children who were overweight (OR 2.68; 95% CI 1.50; 4.80) or had abdominal obesity (OR 2.12; 95% CI 1.28; 3.52) when they reported more than 2 obesogenic behaviours. High maternal and paternal education was inversely associated (p = 0.004 and p < 0.001, respectively) with increasing presence of obesogenic behaviours. Surrogate markers of adiposity increased with the number of concurrently present obesogenic behaviours. The opposite was true for high maternal and paternal education. PMID:29207394

  6. Cumulative exposure to childhood stressors and subsequent psychological distress. An analysis of US panel data.

    Science.gov (United States)

    Björkenstam, Emma; Burström, Bo; Brännström, Lars; Vinnerljung, Bo; Björkenstam, Charlotte; Pebley, Anne R

    2015-10-01

    Research has shown that childhood stress increases the risk of poor mental health later in life. We examined the effect of childhood stressors on psychological distress and self-reported depression in young adulthood. Data were obtained from the Child Development Supplement (CDS) to the national Panel Study of Income Dynamics (PSID), a survey of US families that incorporates data from parents and their children. In 2005 and 2007, the Panel Study of Income Dynamics was supplemented with two waves of Transition into Adulthood (TA) data drawn from a national sample of young adults, 18-23 years old. This study included data from participants in the CDS and the TA (n = 2128), children aged 4-13 at baseline. Data on current psychological distress was used as an outcome variable in logistic regressions, calculated as odds ratios (OR) with 95% confidence intervals (CI). Latent Class Analyses were used to identify clusters based on the different childhood stressors. Associations were observed between cumulative exposure to childhood stressors and both psychological distress and self-reported depression. Individuals being exposed to three or more stressors had the highest risk (crude OR for psychological distress: 2.49 (95% CI: 1.16-5.33), crude OR for self-reported depression: 2.07 (95% CI: 1.15-3.71). However, a large part was explained by adolescent depressive symptoms. Findings support the long-term negative impact of cumulative exposure to childhood stress on psychological distress. The important role of adolescent depression in this association also needs to be taken into consideration in future studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Percent relative cumulative frequency analysis in indirect calorimetry: application to studies of transgenic mice.

    Science.gov (United States)

    Riachi, Marc; Himms-Hagen, Jean; Harper, Mary-Ellen

    2004-12-01

    Indirect calorimetry is commonly used in research and clinical settings to assess characteristics of energy expenditure. Respiration chambers in indirect calorimetry allow measurements over long periods of time (e.g., hours to days) and thus the collection of large sets of data. Current methods of data analysis usually involve the extraction of only a selected small proportion of data, most commonly the data that reflects resting metabolic rate. Here, we describe a simple quantitative approach for the analysis of large data sets that is capable of detecting small differences in energy metabolism. We refer to it as the percent relative cumulative frequency (PRCF) approach and have applied it to the study of uncoupling protein-1 (UCP1) deficient and control mice. The approach involves sorting data in ascending order, calculating their cumulative frequency, and expressing the frequencies in the form of percentile curves. Results demonstrate the sensitivity of the PRCF approach for analyses of oxygen consumption (VO2) as well as respiratory exchange ratio data. Statistical comparisons of PRCF curves are based on the 50th percentile values and curve slopes (H values). The application of the PRCF approach revealed that energy expenditure in UCP1-deficient mice housed and studied at room temperature (24 degrees C) is on average 10% lower than that of controls. At lower environmental temperature, there were no differences in VO2 between groups. The latter is likely due to augmented shivering thermogenesis in UCP1-deficient mice compared with controls. With the increased availability of murine models of metabolic disease, indirect calorimetry is increasingly used, and the PRCF approach provides a novel and powerful means for data analysis.
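The PRCF construction described above (sort ascending, accumulate, express as percentages) can be sketched in a few lines. The VO2 readings are made-up illustrative numbers, and the slope statistic (the paper's H value) is not reproduced here; only the 50th-percentile comparison is shown.

```python
# Minimal sketch of the percent relative cumulative frequency (PRCF)
# approach: sort measurements in ascending order, compute cumulative
# frequency, and express it as a percentage of the sample size.

def prcf(values):
    """Return (sorted values, percent relative cumulative frequencies)."""
    ordered = sorted(values)
    n = len(ordered)
    percents = [100.0 * (i + 1) / n for i in range(n)]
    return ordered, percents

def median_from_prcf(values):
    """Value at which the PRCF curve first reaches the 50th percentile."""
    ordered, percents = prcf(values)
    for v, pct in zip(ordered, percents):
        if pct >= 50.0:
            return v

vo2 = [1.8, 2.1, 2.0, 1.9, 2.4, 2.2, 2.3, 2.0]  # illustrative VO2 readings
ordered, pct = prcf(vo2)
print(median_from_prcf(vo2))
```

Because every measurement contributes to the curve, group differences show up across the whole distribution rather than only at a single summary statistic, which is the stated advantage of the method.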

  8. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, such as the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  9. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
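The abstract names First-Fit-Increasing as one of the algorithms analysed. A hedged sketch of that heuristic, assuming unit-capacity bins (the competitive-ratio analysis of the paper is not reproduced): items are taken in increasing order of size and each is placed in the first open bin with room, opening a new bin only when none fits.

```python
# Sketch of the First-Fit-Increasing heuristic mentioned in the
# abstract, assuming unit-capacity bins. This illustrates the rule,
# not the paper's analysis of the maximum resource variant.

def first_fit_increasing(items, capacity=1.0):
    """Pack item sizes in (0, capacity] and return the bins used."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items):          # increasing order
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)          # first bin with room
                break
        else:
            bins.append([item])         # no bin fits: open a new one
    return bins

packed = first_fit_increasing([0.6, 0.5, 0.4, 0.7, 0.3])
print(len(packed), packed)
```

Processing small items first tends to fill early bins with many small pieces, leaving the large items to force new bins open, which is why the increasing order is the natural choice when the goal is to maximize bins used.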

  10. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and Π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wave-length shifting fibers. The design and construction of the shower maximum detector is described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  11. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  12. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  13. Transformer ratio enhancement experiment

    International Nuclear Information System (INIS)

    Gai, W.; Power, J. G.; Kanareykin, A.; Neasheva, E.; Altmark, A.

    2004-01-01

    Recently, a multibunch scheme for efficient acceleration based on dielectric wakefield accelerator technology was outlined in J.G. Power, W. Gai, A. Kanareykin, X. Sun. PAC 2001 Proceedings, pp. 114-116, 2002. In this paper we present an experimental program for the design, development and demonstration of an Enhanced Transformer Ratio Dielectric Wakefield Accelerator (ETR-DWA). The principal goal is to increase the transformer ratio R, the parameter that characterizes the energy transfer efficiency from the accelerating structure to the accelerated electron beam. We present here an experimental design of a 13.625 GHz dielectric loaded accelerating structure, a laser multisplitter producing a ramped bunch train, and simulations of the bunch train parameters required. Experimental results of the accelerating structure bench testing and ramped pulsed train generation with the laser multisplitter are shown as well. Using beam dynamic simulations, we also obtain the focusing FODO lattice parameters

  14. Intake to Production Ratio

    DEFF Research Database (Denmark)

    Nazaroff, William; Weschler, Charles J.; Little, John C.

    2012-01-01

    BACKGROUND: Limited data are available to assess human exposure to thousands of chemicals currently in commerce. Information that relates human intake of a chemical to its production and use can help inform understanding of mechanisms and pathways that control exposure and support efforts...... to protect public health.OBJECTIVES: We introduce the intake-to-production ratio (IPR) as an economy-wide quantitative indicator of the extent to which chemical production results in human exposure.METHODS: The IPR was evaluated as the ratio of two terms: aggregate rate of chemical uptake in a human......(n-butyl) phthalate, 1,040 ppm for para-dichlorobenzene, 6,800 ppm for di(isobutyl) phthalate, 7,700 ppm for diethyl phthalate, and 8,000-24,000 ppm (range) for triclosan.CONCLUSION: The IPR is well suited as an aggregate metric of exposure intensity for characterizing population-level exposure to synthesized...
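The IPR defined above is a ratio of two aggregate rates: population-wide chemical intake divided by production, conveniently expressed in ppm (mg taken in per kg produced). A sketch with hypothetical inputs; the intake rate, population, and production volume below are illustrative assumptions, not the study's data.

```python
# Sketch of the intake-to-production ratio (IPR) in ppm.
# All input numbers are hypothetical illustration values.

def intake_to_production_ratio_ppm(per_capita_intake_ug_day,
                                   population,
                                   production_tonnes_year):
    """IPR in ppm: aggregate human intake rate / aggregate production rate."""
    intake_kg_year = per_capita_intake_ug_day * population * 365 * 1e-9
    production_kg_year = production_tonnes_year * 1e3
    return 1e6 * intake_kg_year / production_kg_year

# Hypothetical: 1 ug/person/day across 300 million people, against
# 10,000 tonnes/year of production.
print(intake_to_production_ratio_ppm(1.0, 3e8, 1e4))
```

Because both numerator and denominator are economy-wide rates, the IPR is insensitive to how intake is distributed across individuals; it only captures the overall exposure intensity per unit of production, which is the point of the metric.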

  15. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Under the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as power laws, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  16. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  17. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  18. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  19. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, plastic limit, and liquid limit, respectively. Researchers [6, 7] estimate compaction parameters from such correlations. Aside from the correlations between compaction parameters and other physical quantities, several other correlations have been investigated by other researchers.

  20. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
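The closing estimate can be checked to order of magnitude with rough inputs. Assumed illustration values: T_BBN ~ 1 MeV, M_pl ~ 1.2 × 10¹⁹ GeV, and the electron Yukawa y_e = √2 m_e / v_h evaluated at the observed v_h = 246 GeV; these are standard ballpark numbers, not the paper's precise inputs.

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5),
# with rough illustrative inputs (all in GeV).

M_PL = 1.2e19                       # Planck mass scale
T_BBN = 1e-3                        # ~1 MeV, rough BBN temperature
M_E = 0.511e-3                      # electron mass
Y_E = math.sqrt(2) * M_E / 246.0    # electron Yukawa at v_h = 246 GeV

v_h = T_BBN**2 / (M_PL * Y_E**5)
print(v_h)  # a few hundred GeV, consistent with v_h ~ O(300 GeV)
```

The tiny fifth power of y_e (~10⁻²⁸) is what cancels the enormous Planck mass, landing the estimate at the electroweak scale.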

  1. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Vol. 59 (2003), pp. 459-469. ISSN 0108-7673. Grant - others: DFG (DE) XX. Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method; aperiodic crystals; electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003

  2. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  3. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  4. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  5. Envera Variable Compression Ratio Engine

    Energy Technology Data Exchange (ETDEWEB)

    Charles Mendler

    2011-03-15

    the compression ratio can be raised (to as much as 18:1) providing high engine efficiency. It is important to recognize that for a well designed VCR engine cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings and other load bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle' and pivoting the eccentric carrier 30 degrees adjusts compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at minimum and maximum compression ratio settings. The control shaft method of pivoting the eccentric carrier was selected due to its lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. The greatest efficiency gains are realized when the right combination of advanced and new
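The 9:1 to 18:1 range described above is set entirely by the clearance volume. A back-of-envelope sketch, using a hypothetical bore and stroke (the abstract gives neither), shows the few millimetres of crank-centre travel the eccentric carrier must provide:

```python
# Back-of-envelope estimate of the crank-centre lift needed to move from 9:1
# to 18:1 compression ratio. Bore and stroke are hypothetical placeholders.
import math

bore_cm, stroke_cm = 8.2, 9.0              # assumed engine geometry
area = math.pi / 4 * bore_cm**2            # piston area, cm^2
v_swept = area * stroke_cm                 # swept volume, cm^3 (fixed)

def clearance(cr):
    """Clearance volume for a given compression ratio: CR = (Vs + Vc) / Vc."""
    return v_swept / (cr - 1.0)

# Raising the crank centre (and hence the piston's TDC position) by delta_h
# removes area * delta_h of clearance volume; the stroke itself is unchanged.
delta_h_mm = 10.0 * (clearance(9.0) - clearance(18.0)) / area
print(f"crank-centre lift for 9:1 -> 18:1: {delta_h_mm:.2f} mm")
```

For this assumed geometry the required travel is about 6 mm, which is consistent with a modest pivot of an eccentric carrier.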

  6. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    Science.gov (United States)

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance with limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector, and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated into the SMV model, and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 − 2M^3 + 7M^2 − 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
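The identifiability bound quoted in the abstract, K ≤ (M^4 − 2M^3 + 7M^2 − 6M)/8, is easy to tabulate for small arrays:

```python
def max_identifiable_sources(M: int) -> int:
    """Upper bound on identifiable sources from the abstract's condition
    K <= (M^4 - 2M^3 + 7M^2 - 6M) / 8 for an M-sensor array."""
    return (M**4 - 2*M**3 + 7*M**2 - 6*M) // 8

for M in (3, 4, 6, 8):
    print(M, max_identifiable_sources(M))   # e.g. M=4 -> 27
```

Even a 4-sensor array can in principle resolve far more sources than sensors, which is the point of the O(M^4) identifiability claim.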

  7. On the method of logarithmic cumulants for parametric probability density function estimation.

    Science.gov (United States)

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
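The gamma family is one of the distributions the paper analyses. A minimal pure-Python sketch of MoLC for it, using the standard log-cumulant identities c1 = ψ(k) + ln(θ) and c2 = ψ₁(k) (the trigamma series and bisection below are this sketch's own naive choices, not the paper's implementation):

```python
# Illustrative MoLC fit for the gamma distribution: the shape k follows by
# inverting the trigamma function psi_1 at the second sample log-cumulant c2.

def trigamma(x: float, terms: int = 10000) -> float:
    """psi_1(x) = sum_{n>=0} 1/(x+n)^2, truncated with an integral tail."""
    s = sum(1.0 / (x + n)**2 for n in range(terms))
    return s + 1.0 / (x + terms)            # tail approximated by an integral

def invert_trigamma(c2: float) -> float:
    """Solve psi_1(k) = c2 for k > 0 by bisection (psi_1 is decreasing)."""
    lo, hi = 1e-3, 1e3
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if trigamma(mid) > c2:              # psi_1 too large -> k is larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: recover shape k = 3 from its exact second log-cumulant.
k_hat = invert_trigamma(trigamma(3.0))
print(f"recovered shape: {k_hat:.4f}")
```

In practice c2 would be the sample variance of the log-transformed data, and the scale θ then follows from the first log-cumulant.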

  8. The Reference Return Ratio

    DEFF Research Database (Denmark)

    Nicolaisen, Jeppe; Faber Frandsen, Tove

    2008-01-01

    The paper introduces a new journal impact measure called The Reference Return Ratio (3R). Unlike the traditional Journal Impact Factor (JIF), which is based on calculations of publications and citations, the new measure is based on calculations of bibliographic investments (references) and returns...... (citations). A comparative study of the two measures shows a strong relationship between the 3R and the JIF. Yet, the 3R appears to correct for citation habits, citation dynamics, and composition of document types - problems that typically are raised against the JIF. In addition, contrary to traditional...

  9. Potential support ratios

    DEFF Research Database (Denmark)

    Kjærgaard, Søren; Canudas-Romo, Vladimir

    2017-01-01

    The ‘prospective potential support ratio’ has been proposed by researchers as a measure that accurately quantifies the burden of ageing, by identifying the fraction of a population that has passed a certain measure of longevity, for example, 17 years of life expectancy. Nevertheless......, the prospective potential support ratio usually focuses on the current mortality schedule, or period life expectancy. Instead, in this paper we look at the actual mortality experienced by cohorts in a population, using cohort life tables. We analyse differences between the two perspectives using mortality models...
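The 'prospective' idea can be sketched numerically: instead of counting everyone above a fixed age as dependent, count those whose remaining life expectancy e(x) has dropped below a threshold. All numbers below are hypothetical, and the 15-year threshold is an arbitrary illustrative choice:

```python
# Toy prospective potential support ratio: find the age at which remaining
# life expectancy e(x) falls below a threshold, then take the ratio of the
# population below that age to the population above it. All data invented.

ages = list(range(0, 101, 5))
# Hypothetical remaining life expectancy e(x), declining with age:
e = {x: max(85 - 0.88 * x, 2.0) for x in ages}
# Hypothetical population counts per 5-year age group (thousands):
pop = {x: 100 - 0.9 * x for x in ages}

threshold_age = min(x for x in ages if e[x] < 15.0)
old = sum(n for x, n in pop.items() if x >= threshold_age)
young = sum(n for x, n in pop.items() if x < threshold_age)
print(f"threshold age: {threshold_age}, prospective PSR: {young / old:.2f}")
```

Swapping the period life table for cohort life tables, as the paper does, changes only where e(x) comes from, not the bookkeeping above.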

  10. Cumulative receipt of an anti-poverty tax credit for families did not impact tobacco smoking among parents.

    Science.gov (United States)

    Pega, Frank; Gilsanz, Paola; Kawachi, Ichiro; Wilson, Nick; Blakely, Tony

    2017-04-01

    The effect of anti-poverty tax credit interventions on tobacco consumption is unclear. Previous studies have estimated short-term effects, did not isolate the effects of cumulative dose of tax credits, produced conflicting results, and used methods with limited control for some time-varying confounders (e.g., those affected by prior treatment) and treatment regimen (i.e., study participants' tax credit receipt pattern over time). We estimated the longer-term, cumulative effect of New Zealand's Family Tax Credit (FTC) on tobacco consumption, using a natural experiment (administrative errors leading to exogenous variation in FTC receipt) and methods specifically for controlling confounding, reverse causation, and treatment regimen. We extracted seven waves (2002-2009) of the nationally representative Survey of Family, Income and Employment including 4404 working-age (18-65 years) parents in families. The exposure was the total numbers of years of receiving FTC. The outcomes were regular smoking and the average daily number of cigarettes usually smoked at wave 7. We estimated average treatment effects using inverse probability of treatment weighting and marginal structural modelling. Each additional year of receiving FTC affected neither the odds of regular tobacco smoking among all parents (odds ratio 1.02, 95% confidence interval 0.94-1.11), nor the number of cigarettes smoked among parents who smoked regularly (rate ratio 1.01, 95% confidence interval 0.99-1.03). We found no evidence for an association between the cumulative number of years of receiving an anti-poverty tax credit and tobacco smoking or consumption among parents. The assumptions of marginal structural modelling are quite demanding, and we therefore cannot rule out residual confounding. Nonetheless, our results suggest that tax credit programme participation will not increase tobacco consumption among poor parents, at least in this high-income country. Copyright © 2017 Elsevier Ltd. All rights
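Inverse probability of treatment weighting, the scheme behind the marginal structural model mentioned above, can be sketched in a few lines. The data here are an invented toy example with one binary confounder, not the SoFIE/FTC analysis:

```python
# Minimal sketch of inverse probability of treatment weighting (IPTW).
# Each row: (confounder z, treated a, outcome y). Toy data, invented.
rows = [
    (0, 0, 0), (0, 0, 0), (0, 0, 1), (0, 1, 1),
    (1, 0, 1), (1, 1, 1), (1, 1, 1), (1, 1, 0),
]

def propensity(z):
    """P(a = 1 | z), estimated within the confounder stratum."""
    stratum = [a for (zz, a, _) in rows if zz == z]
    return sum(stratum) / len(stratum)

def weight(z, a):
    """Weight each subject by 1 / P(observed treatment | z)."""
    p = propensity(z)
    return 1.0 / p if a == 1 else 1.0 / (1.0 - p)

def weighted_mean(arm):
    num = sum(weight(z, a) * y for (z, a, y) in rows if a == arm)
    den = sum(weight(z, a) for (z, a, y) in rows if a == arm)
    return num / den

ate = weighted_mean(1) - weighted_mean(0)
print(f"IPTW-adjusted risk difference: {ate:.3f}")
```

Weighting creates a pseudo-population in which treatment is independent of the measured confounder, so the weighted contrast estimates the average treatment effect under the usual no-unmeasured-confounding assumption.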

  11. Analysis of LDPE-ZnO-clay nanocomposites using novel cumulative rheological parameters

    Science.gov (United States)

    Kracalik, Milan

    2017-05-01

    Polymer nanocomposites exhibit complex rheological behaviour due to physical, and possibly also chemical, interactions between the individual phases. Up to now, the rheology of dispersive polymer systems has usually been described by evaluating the viscosity curve (shear thinning phenomenon), the storage modulus curve (formation of a secondary plateau), or by plotting information about damping behaviour (e.g. the van Gurp-Palmen plot, comparison of the loss factor tan δ). In contrast to the evaluation of damping behaviour, values of cot δ were calculated and termed the "storage factor", by analogy with the loss factor. The values of the storage factor were then integrated over a specific frequency range and termed the "cumulative storage factor". In this contribution, LDPE-ZnO-clay nanocomposites with different dispersion grades (physical networks) have been prepared and characterized by both the conventional and the novel analysis approach. Alongside the cumulative storage factor, further cumulative rheological parameters such as the cumulative complex viscosity, cumulative complex modulus and cumulative storage modulus have been introduced.
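The cumulative storage factor reduces a frequency sweep to a single number: cot δ = G'/G'' evaluated at each frequency and integrated over the sweep. A sketch with synthetic moduli (placeholder power laws, not measured LDPE-ZnO-clay data):

```python
# Sketch of the 'cumulative storage factor': cot(delta) = G'/G'' integrated
# over a frequency sweep with the trapezoidal rule. Moduli are synthetic.

freqs = [10**k for k in range(-2, 3)]            # 0.01 ... 100 rad/s
G_p  = [50.0 * w**0.6 for w in freqs]            # storage modulus G' (toy)
G_pp = [80.0 * w**0.5 for w in freqs]            # loss modulus G'' (toy)

storage_factor = [gp / gpp for gp, gpp in zip(G_p, G_pp)]   # cot(delta)

def trapz(ys, xs):
    """Trapezoidal integration of ys over xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2
               for i in range(len(xs) - 1))

cumulative_storage_factor = trapz(storage_factor, freqs)
print(f"cumulative storage factor: {cumulative_storage_factor:.1f}")
```

A denser sweep (or integration over log frequency) would be natural refinements; the point is only that a stronger physical network raises G' relative to G'' and therefore the integral.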

  12. Tau hadronic branching ratios

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Kneringer, E; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, 
A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, 
G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Roussarie, A; Schuller, J P; Schwindling, J; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, Z; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    From 64492 selected τ-pair events, produced at the Z0 resonance, the measurement of the tau decays into hadrons from a global analysis using 1991, 1992 and 1993 ALEPH data is presented. Special emphasis is given to the reconstruction of photons and π0's, and the removal of fake photons. A detailed study of the systematics entering the π0 reconstruction is also given. A complete and consistent set of tau hadronic branching ratios is presented for 18 exclusive modes. Most measurements are more precise than the present world average. The new level of precision reached allows a stringent test of τ-μ universality in hadronic decays, g_τ/g_μ = 1.0013 ± 0.0095, and the first measurement of the vector and axial-vector contributions to the non-strange hadronic τ decay width: R_{τ,V} = 1.788 ± 0.025 and R_{τ,A} = 1.694 ± 0.027. The ratio (R_{τ,V} − R_{τ,A}) / (R_{τ,V} + R_{τ,A}), equal to (2.7 ± 1.3)%, is a measure of the importance of Q...

  13. Cumulative Author Index for Soviet Laser Bibliographies Nos. 67-93, September 1983-February 1989

    Science.gov (United States)

    1990-02-01

    Cumulative Author Index for Soviet Laser Bibliographies Nos. 67-93, September 1983 - February 1989. A Defense S&T Intelligence Special Purpose Document; report number DST-2700Z-001-90.

  14. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  15. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  16. Fluid Overload and Cumulative Thoracostomy Output Are Associated With Surgical Site Infection After Pediatric Cardiothoracic Surgery.

    Science.gov (United States)

    Sochet, Anthony A; Nyhan, Aoibhinn; Spaeder, Michael C; Cartron, Alexander M; Song, Xiaoyan; Klugman, Darren; Brown, Anna T

    2017-08-01

    To determine the impact of cumulative, postoperative thoracostomy output, amount of bolus IV fluids and peak fluid overload on the incidence and odds of developing a deep surgical site infection following pediatric cardiothoracic surgery. A single-center, nested, retrospective, matched case-control study. A 26-bed cardiac ICU in a 303-bed tertiary care pediatric hospital. Cases with deep surgical site infection following cardiothoracic surgery were identified retrospectively from January 2010 through December 2013 and individually matched to controls at a ratio of 1:2 by age, gender, Risk Adjustment for Congenital Heart Surgery score, Society of Thoracic Surgeons-European Association for Cardiothoracic Surgery category, primary cardiac diagnosis, and procedure. None. Twelve cases with deep surgical site infection were identified and matched to 24 controls without detectable differences in perioperative clinical characteristics. Deep surgical site infection cases had larger thoracostomy output and bolus IV fluid volumes at 6, 24, and 48 hours postoperatively compared with controls. For every 1 mL/kg of thoracostomy output, the odds of developing a deep surgical site infection increase by 13%. By receiver operating characteristic curve analysis, a cutoff of 49 mL/kg of thoracostomy output at 48 hours best discriminates the development of deep surgical site infection (sensitivity 83%, specificity 83%). Peak fluid overload was greater in cases than matched controls (12.5% vs 6%). By receiver operating characteristic curve analysis, a threshold value of 10% peak fluid overload was observed to identify deep surgical site infection (sensitivity 67%, specificity 79%). Conditional logistic regression of peak fluid overload greater than 10% on the development of deep surgical site infection yielded an odds ratio of 9.4 (95% CI, 2-46.2). Increased postoperative peak fluid overload and cumulative thoracostomy output were associated with deep surgical site infection after pediatric
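Threshold selection of the kind described (a cutoff that "best discriminates") is commonly done by maximising Youden's J = sensitivity + specificity − 1 over candidate cutoffs; the abstract does not name the criterion, so Youden's J here is this sketch's assumption, and the data are invented toy values, not the study's:

```python
# Picking a discrimination threshold from an ROC analysis via Youden's J.
# Each pair: (cumulative thoracostomy output in mL/kg, deep SSI yes/no). Toy.
data = [(12, 0), (20, 0), (28, 0), (35, 0), (41, 0), (55, 0),
        (33, 1), (49, 1), (58, 1), (72, 1)]

def sens_spec(cutoff):
    """Sensitivity and specificity when 'output >= cutoff' predicts SSI."""
    tp = sum(1 for x, y in data if y == 1 and x >= cutoff)
    fn = sum(1 for x, y in data if y == 1 and x < cutoff)
    tn = sum(1 for x, y in data if y == 0 and x < cutoff)
    fp = sum(1 for x, y in data if y == 0 and x >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Candidate cutoffs are the observed values; pick the one maximising J.
best = max({x for x, _ in data}, key=lambda c: sum(sens_spec(c)) - 1)
se, sp = sens_spec(best)
print(f"best cutoff {best} mL/kg: sensitivity {se:.2f}, specificity {sp:.2f}")
```

With real data one would also report the area under the full ROC curve, not just the single best operating point.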

  17. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
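The construction the paper describes, maximising Shannon entropy subject only to a fixed average of the logarithm, yields p(k) ∝ k^(−λ). A sketch of the calculation on a discretized support k = 1..N, solving for the exponent that matches an arbitrary target value of ⟨ln k⟩:

```python
# Maximum entropy with only <ln k> constrained gives a power law p(k) ~ k^-lam.
# Solve for the exponent lam reproducing a chosen <ln k> (target is arbitrary).
import math

N = 1000
target_mean_log = 1.5          # assumed constraint value for <ln k>

def mean_log(lam):
    """<ln k> under p(k) proportional to k^-lam on k = 1..N."""
    weights = [k**-lam for k in range(1, N + 1)]
    Z = sum(weights)
    return sum(w * math.log(k) for k, w in zip(range(1, N + 1), weights)) / Z

lo, hi = 0.5, 5.0              # mean_log decreases as lam grows
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_log(mid) > target_mean_log:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(f"max-entropy exponent lam = {lam:.3f}")
```

The exponential family form p(k) ∝ exp(−λ ln k) = k^(−λ) is exactly the Lagrange-multiplier solution, which is the paper's point: no elaborate cost function is needed.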

  18. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  19. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  20. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...

  1. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
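Fitch's algorithm itself is short: pass state sets up the tree, intersecting where children agree and taking the union (at a cost of one change) where they do not. A sketch on a toy tree and character, not one of the paper's examples:

```python
# Fitch's maximum parsimony algorithm on a small rooted binary tree.
# A tree is either a leaf (a set of observed states) or a (left, right) tuple.

def fitch(tree):
    """Return (state set at this node, parsimony cost of the subtree)."""
    if isinstance(tree, set):                 # leaf: observed character state
        return tree, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:                                 # children agree: intersect
        return inter, lc + rc
    return ls | rs, lc + rc + 1               # disagree: union, one change

# ((A,A),(C,A)): the most parsimonious root state is A, with one change.
tree = (({"A"}, {"A"}), ({"C"}, {"A"}))
states, cost = fitch(tree)
print(states, cost)    # {'A'} 1
```

Dropping a leaf simply prunes the corresponding subtree before rerunning, which is the operation whose effect on reconstruction accuracy the paper studies.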

  2. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  3. High selection pressure promotes increase in cumulative adaptive culture.

    Directory of Open Access Journals (Sweden)

    Carolin Vegvari

    The evolution of cumulative adaptive culture has received widespread interest in recent years, especially the factors promoting its occurrence. Current evolutionary models suggest that an increase in population size may lead to an increase in cultural complexity via a higher rate of cultural transmission and innovation. However, relatively little attention has been paid to the role of natural selection in the evolution of cultural complexity. Here we use an agent-based simulation model to demonstrate that high selection pressure in the form of resource pressure promotes the accumulation of adaptive culture in spite of small population sizes and high innovation costs. We argue that the interaction of demography and selection is important, and that neither can be considered in isolation. We predict that an increase in cultural complexity is most likely to occur under conditions of population pressure relative to resource availability. Our model may help to explain why culture change can occur without major environmental change. We suggest that understanding the interaction between shifting selective pressures and demography is essential for explaining the evolution of cultural complexity.

  4. Cumulative sum quality control for calibrated breast density measurements

    International Nuclear Information System (INIS)

    Heine, John J.; Cao Ke; Beam, Craig

    2009-01-01

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.
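The Cusum technique used here to flag sustained drift can be sketched as the standard one-sided recursion S_i = max(0, S_{i−1} + (x_i − target − k)), signalling when S exceeds a decision limit h. The parameters and samples below are illustrative, not the paper's calibration data:

```python
# Minimal one-sided CUSUM chart for detecting sustained upward drift.
# target, allowance k, and decision limit h are illustrative choices.
target, k, h = 10.0, 0.5, 4.0

def cusum_high(samples):
    """Return the index at which the upper CUSUM first exceeds h, else None."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target - k))   # accumulate upward deviations
        if s > h:
            return i
    return None

stable   = [10.2, 9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.1]
drifting = [10.1, 10.4, 10.9, 11.3, 11.6, 11.8, 12.1, 12.4]
print(cusum_high(stable), cusum_high(drifting))    # -> None 6
```

Because small deviations accumulate, Cusum catches slow systematic drift (such as the x-ray-beam-dependent wear described above) that single-sample control limits miss.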

  5. Cumulative sum quality control for calibrated breast density measurements

    Energy Technology Data Exchange (ETDEWEB)

    Heine, John J.; Cao Ke; Beam, Craig [Cancer Prevention and Control Division, Moffitt Cancer Center, 12902 Magnolia Drive, Tampa, Florida 33612 (United States); Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., Chicago, Illinois 60612 (United States)

    2009-12-15

    Purpose: Breast density is a significant breast cancer risk factor. Although various methods are used to estimate breast density, there is no standard measurement for this important factor. The authors are developing a breast density standardization method for use in full field digital mammography (FFDM). The approach calibrates for interpatient acquisition technique differences. The calibration produces a normalized breast density pixel value scale. The method relies on first generating a baseline (BL) calibration dataset, which required extensive phantom imaging. Standardizing prospective mammograms with calibration data generated in the past could introduce unanticipated error in the standardized output if the calibration dataset is no longer valid. Methods: Sample points from the BL calibration dataset were imaged approximately biweekly over an extended timeframe. These serial samples were used to evaluate the BL dataset reproducibility and quantify the serial calibration accuracy. The cumulative sum (Cusum) quality control method was used to evaluate the serial sampling. Results: There is considerable drift in the serial sample points from the BL calibration dataset that is x-ray beam dependent. Systematic deviation from the BL dataset caused significant calibration errors. This system drift was not captured with routine system quality control measures. Cusum analysis indicated that the drift is a sign of system wear and eventual x-ray tube failure. Conclusions: The BL calibration dataset must be monitored and periodically updated, when necessary, to account for sustained system variations to maintain the calibration accuracy.

  6. Nonimmunogenic hyperthyroidism: Cumulative hypothyroidism incidence after radioiodine and surgical treatment

    International Nuclear Information System (INIS)

    Kinser, J.A.; Roesler, H.; Furrer, T.; Gruetter, D.Z.; Zimmermann, H.

    1989-01-01

    During 1977, 246 hyperthyroid patients were seen in our departments, 140 (57%) with nonimmunogenic hyperthyroidism (NIH): 101 with a toxic adenoma (TA) and 39 with multifocal functional autonomy (MFA). All patients but one could be followed over 9 yr, 101 after 131I treatment (RIT) and another 29 after surgery (S). Ten patients were left untreated. Thirty-four treated patients (24%) died, none as a result of thyroid or post-treatment complications. There was no hyperthyroidism later than 9 mo after therapy. Only 1% (RIT) and 24% (S) were hypothyroid 1 yr after treatment, but 19% of all treated NIH patients were hypothyroid after 9 yr or at the time of their death: 12% after RIT and 41% after S. The cumulative hypothyroidism incidences, 1.4%/yr for RIT and 2.2%/yr for S, were not significantly different. Of the five survivors without RIT or S, two TA patients were hypothyroid. The effect of RIT on goiter-related loco-regional complications was not worse than that of S. We conclude that RIT is the treatment for NIH, leaving surgery for exceptional cases.

  7. Cumulative biological impacts of The Geysers geothermal development

    Energy Technology Data Exchange (ETDEWEB)

    Brownell, J.A.

    1981-10-01

    The cumulative nature of current and potential future biological impacts from full geothermal development in the steam-dominated portion of The Geysers-Calistoga KGRA is identified by the California Energy Commission staff. Vegetation, wildlife, and aquatic resources information has been reviewed and evaluated. Impacts and their significance are discussed and staff recommendations presented. Development of 3000 MW of electrical energy will result in direct vegetation losses of 2790 acres, based on an estimate of 11.5% loss per lease-hold, or 0.93 acres/MW. If unmitigated, losses will be greater. Indirect vegetation losses and damage occur from steam emissions, which contain elements (particularly boron) toxic to vegetation. Other potential impacts include chronic low-level boron exposure, acid rain, local climate modification, and mechanical damage. A potential exists for significant reduction and changes in wildlife from direct habitat loss and development influences. Highly erosive soils create the potential for significant reduction of aquatic resources, particularly game fish. Toxic spills have caused some temporary losses of aquatic species. Staff recommends monitoring and implementation of mitigation measures at all geothermal development stages.

  8. Estimation of Cumulative Absolute Velocity using Empirical Green's Function Method

    International Nuclear Information System (INIS)

    Park, Dong Hee; Yun, Kwan Hee; Chang, Chun Joong; Park, Se Moon

    2009-01-01

    In recognition of the need to develop a new criterion for determining when the OBE (Operating Basis Earthquake) has been exceeded at nuclear power plants, Cumulative Absolute Velocity (CAV) was introduced by EPRI. CAV accumulates the area under the absolute acceleration record, counting only those one-second intervals in which the acceleration exceeds 0.025g: CAV = ∫_0^(t_max) |a(t)| dt, where t_max is the duration of the record and a(t) is the acceleration (counted when greater than 0.025g). Currently, the OBE exceedance criterion in Korea is Peak Ground Acceleration (PGA > 0.1g). When the Odesan earthquake (ML = 4.8, January 20th, 2007) and the Gyeongju earthquake (ML = 3.4, June 2nd, 1999) occurred, PGA values greater than 0.1g were recorded that did not cause any damage even to poorly designed structures nearby. These moderate earthquakes motivated Korea to begin using CAV as the OBE exceedance criterion for NPPs, because the present OBE level has proved to be a poor indicator for small-to-moderate earthquakes, for which the low OBE level can cause an inappropriate shutdown of the plant; a more serious possibility is that this scenario will become a reality at a very high level. The Empirical Green's Function method, a simulation technique that can estimate the CAV value, is hereby introduced.
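The CAV integral can be sketched in code. The windowing rule below follows the common standardized-CAV convention (integrate |a(t)| only over one-second windows whose peak exceeds 0.025 g); the function name and sample data are illustrative, and a simple rectangle rule stands in for proper numerical integration.

```python
def standardized_cav(accel_g, dt):
    """Standardized CAV sketch: integrate |a(t)| over those 1-second windows
    whose peak absolute acceleration reaches 0.025 g.
    accel_g: acceleration samples in units of g; dt: sample interval in s."""
    per_window = int(round(1.0 / dt))
    cav = 0.0
    for start in range(0, len(accel_g), per_window):
        window = accel_g[start:start + per_window]
        if window and max(abs(a) for a in window) >= 0.025:
            cav += sum(abs(a) for a in window) * dt  # rectangle-rule integral
    return cav  # units: g-seconds
```

A record that never reaches the 0.025 g threshold contributes nothing, which is how CAV filters out the weak shaking that trips a bare PGA criterion.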

  9. Cumulative causation, market transition, and emigration from China.

    Science.gov (United States)

    Liang, Zai; Chunyu, Miao David; Zhuang, Guotu; Ye, Wenzhen

    2008-11-01

    This article reports findings from a recent survey of international migration from China's Fujian Province to the United States. Using the ethnosurvey approach developed in the Mexican Migration Project, the authors conducted surveys in migrant-sending communities in China as well as in destination communities in New York City. Hypotheses are derived from the international migration literature and the market transition debate. The results are generally consistent with hypotheses derived from cumulative causation of migration; however, geographical location creates some differences in migration patterns to the United States. In China as in Mexico, the existence of migration networks increases the propensity of migration for others in the community. In contrast to the Mexican case, among Chinese immigrants, having a previously migrated household member increases the propensity of other household members to migrate only after the debt for previous migration is paid off. In step with market transition theory, the authors also find that political power influences the migration experience from the coastal Fujian Province.

  10. Correlated stopping, proton clusters and higher order proton cumulants

    Energy Technology Data Exchange (ETDEWEB)

    Bzdak, Adam [AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, Krakow (Poland); Koch, Volker [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Skokov, Vladimir [RIKEN/BNL, Brookhaven National Laboratory, Upton, NY (United States)

    2017-05-15

    We investigate possible effects of correlations between stopped nucleons on higher order proton cumulants at low energy heavy-ion collisions. We find that fluctuations of the number of wounded nucleons N_part lead to rather nontrivial dependence of the correlations on the centrality; however, this effect is too small to explain the large and positive four-proton correlations found in the preliminary data collected by the STAR collaboration at √(s) = 7.7 GeV. We further demonstrate that, by taking into account additional proton clustering, we are able to qualitatively reproduce the preliminary experimental data. We speculate that this clustering may originate either from collective/multi-collision stopping which is expected to be effective at lower energies or from a possible first-order phase transition, or from (attractive) final state interactions. To test these ideas we propose to measure a mixed multi-particle correlation between stopped protons and a produced particle (e.g. pion, antiproton). (orig.)

  11. Decision making generalized by a cumulative probability weighting function

    Science.gov (United States)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in the probability with which they are received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.
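Two parametric forms of w(p) that are standard in this literature are the Tversky–Kahneman (1992) and Prelec (1998) weighting functions. A small sketch follows; the parameter defaults are commonly cited estimates used here for illustration only.

```python
import math

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) weighting function; gamma=0.61 is their median
    estimate for gains. Overweights small p and underweights large p."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

def prelec_weight(p, alpha=0.65):
    """Prelec (1998) one-parameter form, w(p) = exp(-(-ln p)^alpha);
    it has a fixed point at p = 1/e."""
    return math.exp(-((-math.log(p)) ** alpha))
```

Both functions reduce to the linear (EUT) case at their boundary parameter value (gamma = alpha = 1), which is the kind of limiting behavior the abstract's generalized form is designed to encompass.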

  12. New tests of cumulative prospect theory and the priority heuristic

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2008-04-01

    Full Text Available Previous tests of cumulative prospect theory (CPT) and of the priority heuristic (PH) found evidence contradicting these two models of risky decision making. However, those tests were criticized because they had characteristics that might "trigger" use of other heuristics. This paper presents new tests that avoid those characteristics. Expected values of the gambles are nearly equal in each choice. In addition, if a person followed expected value (EV), expected utility (EU), CPT, or PH in these tests, she would shift her preferences in the same direction as shifts in EV or EU. In contrast, the transfer of attention exchange model (TAX) and a similarity model predict that people will reverse preferences in the opposite direction. Results contradict the PH, even when PH is modified to include a preliminary similarity evaluation using the PH parameters. New tests of probability-consequence interaction were also conducted. Strong interactions were observed, contrary to PH. These results add to the growing bodies of evidence showing that neither CPT nor PH is an accurate description of risky decision making.

  13. INTERACTIVE VISUALIZATION OF PROBABILITY AND CUMULATIVE DENSITY FUNCTIONS

    KAUST Repository

    Potter, Kristin; Kirby, Robert Michael; Xiu, Dongbin; Johnson, Chris R.

    2012-01-01

    The probability density function (PDF), and its corresponding cumulative density function (CDF), provide direct statistical insight into the characterization of a random process or field. Typically displayed as a histogram, one can infer probabilities of the occurrence of particular events. When examining a field over some two-dimensional domain in which at each point a PDF of the function values is available, it is challenging to assess the global (stochastic) features present within the field. In this paper, we present a visualization system that allows the user to examine two-dimensional data sets in which PDF (or CDF) information is available at any position within the domain. The tool provides a contour display showing the normed difference between the PDFs and an ansatz PDF selected by the user and, furthermore, allows the user to interactively examine the PDF at any particular position. Canonical examples of the tool are provided to help guide the reader into the mapping of stochastic information to visual cues along with a description of the use of the tool for examining data generated from an uncertainty quantification exercise accomplished within the field of electrophysiology.
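The histogram-style PDF display and its companion CDF can be sketched as follows. This is a generic construction, not the paper's visualization system; the helper name and bin count are illustrative.

```python
def histogram_pdf_cdf(samples, n_bins=10):
    """Build a normalized histogram (a discrete PDF estimate) and the
    matching CDF by running a cumulative sum over the bins."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0  # guard against all-equal samples
    counts = [0] * n_bins
    for x in samples:
        i = min(int((x - lo) / width), n_bins - 1)  # clamp max into last bin
        counts[i] += 1
    n = len(samples)
    pdf = [c / n for c in counts]
    cdf, running = [], 0.0
    for p in pdf:
        running += p
        cdf.append(running)
    return pdf, cdf
```

A per-position structure like this, computed at every point of a 2D domain, is the kind of data the described tool lets a user probe interactively.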

  14. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
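The basic observed process, a cumulative sum of residuals taken over a covariate's ordering, can be sketched in a few lines. The full method also simulates zero-mean Gaussian realizations for the graphical and numerical comparison, which is omitted here; names are illustrative.

```python
def cumulative_residual_process(covariate, residuals):
    """Cumulative sum of residuals ordered by a covariate. A pronounced
    excursion of |W| away from zero suggests a misspecified functional
    form for that covariate."""
    order = sorted(range(len(covariate)), key=lambda i: covariate[i])
    process, running = [], 0.0
    for i in order:
        running += residuals[i]
        process.append(running)
    return process

def sup_statistic(process):
    """Supremum of the absolute observed process, the usual summary
    compared against simulated realizations."""
    return max(abs(w) for w in process)
```

For residuals from a well-specified model the process meanders around zero; a systematic pattern (e.g. a fitted-linear model applied to quadratic data) piles residuals of one sign together along the covariate axis and inflates the supremum.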

  15. Cumulative damage fraction design approach for LMFBR metallic fuel elements

    International Nuclear Information System (INIS)

    Johnson, D.L.; Einziger, R.E.; Huchman, G.D.

    1979-01-01

    The cumulative damage fraction (CDF) analytical technique is currently being used to analyze the performance of metallic fuel elements for proliferation-resistant LMFBRs. In this technique, the fraction of the total time to rupture of the cladding is calculated as a function of the thermal, stress, and neutronic history. Cladding breach or rupture is implied by CDF = 1. Cladding wastage, caused by interactions with both the fuel and sodium coolant, is assumed to uniformly thin the cladding wall. The irradiation experience of the EBR-II Mark-II driver fuel with solution-annealed Type 316 stainless steel cladding provides an excellent data base for testing the applicability of the CDF technique to metallic fuel. The advanced metal fuels being considered for use in LMFBRs are U-15Pu-10Zr, Th-20Pu and Th-20U (compositions are given in weight percent). The two cladding alloys being considered are Type 316 stainless steel and a titanium-stabilized Type 316 stainless steel. Both are in the cold-worked condition. The CDF technique was applied to these fuels and claddings under the assumed steady-state operating conditions.
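The CDF technique amounts to a life-fraction (time-fraction) rule: sum the time spent at each operating condition divided by the rupture life at that condition, with breach implied when the sum reaches 1. A minimal sketch follows; the rupture-life correlation `t_r` below is purely hypothetical, standing in for the material-specific correlations a real analysis would use.

```python
def cumulative_damage_fraction(history, time_to_rupture):
    """Life-fraction rule sketch: CDF = sum(dt / t_r(stress, temp)) over the
    operating history; cladding breach is implied when CDF reaches 1.
    history: iterable of (dt_hours, stress_MPa, temp_K) segments.
    time_to_rupture: callable giving rupture life (hours) at those conditions."""
    cdf = 0.0
    for dt, stress, temp in history:
        cdf += dt / time_to_rupture(stress, temp)
    return cdf

def t_r(stress, temp):
    """Hypothetical rupture-life correlation, for illustration only:
    life shortens with stress and with temperature above 600 K."""
    return 1e6 / (stress * max(temp - 600.0, 1.0))
```

Cladding wastage would enter such a calculation by raising the effective stress on the thinned wall, which shortens `t_r` and accelerates the accumulation.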

  16. Cumulative exposure to carbon monoxide during the day

    Energy Technology Data Exchange (ETDEWEB)

    Joumard, R. (INRETS, 69 - Bron (FR))

    Carbon monoxide (CO) has the advantage of being very easily and accurately measured under various conditions. In addition, it allows the translation of CO concentrations into their biological effects. Cumulative CO exposure should be considered according to the ambient conditions during a given period of life, e.g. the day. In addition, the translation of CO concentrations and exposure times into CO fixed on blood haemoglobin (carboxyhaemoglobin) depends on physiological factors such as age, size, sex, or physical activity. This paper gives some examples of CO exposure translated into carboxyhaemoglobin curves: the case of 92 persons whose schedules were studied in detail, of customs officers whose exposure was measured during one week, and other theoretical cases. In all the cases studied, smoking is by far the largest contributor to carbon monoxide pollution. Setting smoking aside, the CO levels observed are of concern for sensitive subjects (in particular children) only in very rare cases. Furthermore, this approach allows the assessment of maximum allowable concentrations during specific exposures (work, e.g. in a tunnel) by integrating them into normal life conditions and the population's current exposure.

  17. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  18. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  19. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the best solution is chosen from among anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form; thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.

  20. Surfactant-induced skin irritation and skin repair: evaluation of a cumulative human irritation model by noninvasive techniques.

    Science.gov (United States)

    Wilhelm, K P; Freitag, G; Wolff, H H

    1994-12-01

    Although surfactant-induced acute irritant dermatitis has been extensively studied, our understanding about the induction and repair of the clinically more relevant chronic form is limited. Our purpose was to investigate qualitative and quantitative differences in surfactant-induced irritant skin reactions from cumulative exposure to structurally unrelated surfactants and to compare the maximal irritant responses from this model with corresponding reactions noted in a previously reported acute irritation model. Sodium lauryl sulfate (SLS), dodecyl trimethyl ammonium bromide (DTAB), and potassium soap were the model irritants. Surfactant solutions (7.5%) were applied for 20 minutes daily (for 8 consecutive days excluding the weekend) to the volar aspect of the forearm of 11 volunteers. Irritant reactions were repeatedly assessed until complete healing was indicated by visual assessment and by measurements of transepidermal water loss (TEWL), erythema (skin color reflectance), and stratum corneum hydration (electrical capacitance). Maximum irritant responses were compared with corresponding reactions from an acute irritation model. TEWL was increased by SLS and DTAB to the same extent, but erythema was significantly higher in DTAB-treated skin. Skin dryness, as demonstrated by decreased capacitance values and increased scores for scaling and fissuring, was significantly more pronounced than in an acute irritation model for SLS and DTAB, although no difference was detected between the two surfactants. Potassium soap led to a slight increase in TEWL, whereas the remaining features were not significantly changed. This chronic irritation model appears to represent the clinical situation of irritant contact dermatitis with pronounced skin dryness more closely than the acute irritation model. The present study confirms that an extended time is needed for complete healing of irritant skin reactions. We also demonstrated that the evaluation of the irritation potential of

  1. Peak power ratio generator

    Science.gov (United States)

    Moyer, R.D.

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  2. Modeling cumulative effects in life cycle assessment: the case of fertilizer in wheat production contributing to the global warming potential.

    Science.gov (United States)

    Laratte, Bertrand; Guillaume, Bertrand; Kim, Junbeum; Birregah, Babiga

    2014-05-15

    This paper aims at presenting a dynamic indicator for life cycle assessment (LCA) measuring cumulative impacts over time of greenhouse gas (GHG) emissions from fertilizers used for wheat cultivation and production. Our approach offers a dynamic indicator of global warming potential (GWP), one of the most widely used indicators of environmental impacts (e.g. in the Kyoto Protocol). For a case study, wheat production in France was selected and considered by using data from official sources about fertilizer consumption and wheat production. We propose to assess the GWP environmental impact based on the LCA method. The system boundary is limited to the fertilizer production for 1 ton of wheat produced (the functional unit) from 1910 to 2010. As applied to wheat production in France, traditional LCA shows a maximum GWP impact of 500 kg CO2-eq for 1 ton of wheat production, whereas the GWP impact of wheat production over time with our approach to dynamic LCA and its cumulative effects increases to 18,000 kg CO2-eq for 1 ton of wheat production. In this paper, only one substance and one impact assessment indicator are presented. However, the methodology can be generalized and improved by using different substances and indicators. Copyright © 2014 Elsevier B.V. All rights reserved.
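The gap between the snapshot and dynamic figures comes from running a cumulative sum over the whole assessment period rather than reporting a single year. A minimal sketch; the emission series below is illustrative, not the paper's wheat data.

```python
def cumulative_gwp(annual_emissions_kgco2e):
    """Running cumulative GWP over the assessment period, as opposed to a
    traditional single-year snapshot. Returns the running totals per year."""
    totals, running = [], 0.0
    for e in annual_emissions_kgco2e:
        running += e
        totals.append(running)
    return totals

# Illustrative series: a constant emission per functional unit for 100 years
series = [120.0] * 100
snapshot = series[-1]                    # what a static LCA would report
dynamic_total = cumulative_gwp(series)[-1]  # what the cumulative view reports
```

Even with a flat series the cumulative figure is two orders of magnitude above the snapshot over a century, which mirrors the 500 vs. 18,000 kg CO2-eq contrast reported in the abstract.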

  3. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    Science.gov (United States)

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  4. The Liquidity Coverage Ratio: the need for further complementary ratios?

    OpenAIRE

    Ojo, Marianne

    2013-01-01

    This paper considers components of the Liquidity Coverage Ratio – as well as certain prevailing gaps which may necessitate the introduction of a complementary liquidity ratio. The definitions and objectives accorded to the Liquidity Coverage Ratio (LCR) and Net Stable Funding Ratio (NSFR) highlight the focus which is accorded to time horizons for funding bank operations. A ratio which would focus on the rate of liquidity transformations and which could also serve as a complementary metric gi...

  5. Hip pain onset in relation to cumulative workplace and leisure time mechanical load: a population based case-control study.

    Science.gov (United States)

    Pope, D P; Hunt, I M; Birrell, F N; Silman, A J; Macfarlane, G J

    2003-04-01

    In an unselected community sample of adults, to assess the role and importance of exposure to mechanical factors both at work and leisure in the aetiology of hip pain. A population based prevalence case-control study. Cases and controls were identified from a population survey of 3847 subjects registered with two general practices in Cheshire, United Kingdom. All subjects received a postal questionnaire which inquired about hip pain during the past month. An occupational history was obtained, including exposure to each of seven physical demands. Information was also obtained on history of participation in eight common sporting activities. 88% of those invited to participate returned a completed questionnaire. The 352 subjects with hip pain were designated as cases, and the remaining 3002 subjects as controls. In people ever employed, hip pain was significantly associated with high cumulative workplace exposure (before onset) to walking long distances over rough ground, lifting/moving heavy weights, sitting for prolonged periods, walking long distances, frequent jumping between different levels, and standing for prolonged periods. Odds ratios (ORs) in the higher exposure categories ranged from 1.46 to 2.65. Cumulative exposure to three sporting activities was significantly associated with hip pain: track and field sports, jogging, and walking, with odds ratios varying between 1.57 and 1.94. On multivariate analysis three factors were independent predictors of hip pain onset: cumulative exposure to sitting for prolonged periods (higher exposure v not exposed: OR=1.82, 95% confidence interval (CI) 1.13 to 2.92), lifting weights >50 lb (23 kg) (OR=1.74, 95% CI 1.06 to 2.86) (both relating to the workplace), and walking as a leisure activity (OR=1.97, 95% CI 1.32 to 2.94). The population attributable risk associated with each of these activities was 21%, 13%, and 16%, respectively. Cumulative exposure to some workplace and sporting "mechanical" risk factors for hip

  6. Toward computational cumulative biology by combining models of biological datasets.

    Science.gov (United States)

    Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel

    2014-01-01

    A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations; for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database.

  7. Cumulative childhood stress and autoimmune diseases in adults.

    Science.gov (United States)

    Dube, Shanta R; Fairweather, DeLisa; Pearson, William S; Felitti, Vincent J; Anda, Robert F; Croft, Janet B

    2009-02-01

    To examine whether childhood traumatic stress increased the risk of developing autoimmune diseases as an adult. Retrospective cohort study of 15,357 adult health maintenance organization members enrolled in the Adverse Childhood Experiences (ACEs) Study from 1995 to 1997 in San Diego, California, and eligible for follow-up through 2005. ACEs included childhood physical, emotional, or sexual abuse; witnessing domestic violence; growing up with household substance abuse, mental illness, parental divorce, and/or an incarcerated household member. The total number of ACEs (ACE Score range = 0-8) was used as a measure of cumulative childhood stress. The outcome was hospitalizations for any of 21 selected autoimmune diseases and 4 immunopathology groupings: T-helper 1 (Th1) (e.g., idiopathic myocarditis); T-helper 2 (Th2) (e.g., myasthenia gravis); Th2 rheumatic (e.g., rheumatoid arthritis); and mixed Th1/Th2 (e.g., autoimmune hemolytic anemia). Sixty-four percent reported at least one ACE. The event rate (per 10,000 person-years) for a first hospitalization with any autoimmune disease was 31.4 in women and 34.4 in men. First hospitalizations for any autoimmune disease increased with increasing number of ACEs. Compared with persons with no ACEs, persons with >=2 ACEs were at a 70% increased risk for hospitalizations with Th1, 80% increased risk for Th2, and 100% increased risk for rheumatic diseases. Childhood traumatic stress increased the likelihood of hospitalization with a diagnosed autoimmune disease decades into adulthood. These findings are consistent with recent biological studies on the impact of early life stress on subsequent inflammatory responses.

  8. Integrating environmental monitoring with cumulative effects management and decision making.

    Science.gov (United States)

    Cronmiller, Joshua G; Noble, Bram F

    2018-05-01

    Cumulative effects (CE) monitoring is foundational to emerging regional and watershed CE management frameworks, yet monitoring is often poorly integrated with CE management and decision-making processes. The challenges are largely institutional and organizational, more so than scientific or technical. Calls for improved integration of monitoring with CE management and decision making are not new, but there has been limited research on how best to integrate environmental monitoring programs to ensure credible CE science and to deliver results that respond to the more immediate questions and needs of regulatory decision makers. This paper examines options for the integration of environmental monitoring with CE frameworks. Based on semistructured interviews with practitioners, regulators, and other experts in the Lower Athabasca, Alberta, Canada, 3 approaches to monitoring system design are presented. First, a distributed monitoring system, reflecting the current approach in the Lower Athabasca, where monitoring is delegated to different external programs and organizations; second, a 1-window system in which monitoring is undertaken by a single, in-house agency for the purpose of informing management and regulatory decision making; third, an independent system driven primarily by CE science and understanding causal relationships, with knowledge adopted for decision support where relevant to specific management questions. The strengths and limitations of each approach are presented. A hybrid approach may be optimal-an independent, nongovernment, 1-window model for CE science, monitoring, and information delivery-capitalizing on the strengths of distributed, 1-window, and independent monitoring systems while mitigating their weaknesses. 
If governments are committed to solving CE problems, they must invest in the long-term science needed to do so; at the same time, if science-based monitoring programs are to be sustainable over the long term, they must be responsive to

  9. Cumulative Effects of Barriers on the Movements of Forest Birds

    Directory of Open Access Journals (Sweden)

    Marc Bélisle

    2002-01-01

    Full Text Available Although there is a consensus of opinion that habitat fragmentation has deleterious effects on animal populations, primarily by inhibiting dispersal among remaining patches, there have been few explicit demonstrations of the ways by which degraded habitats actually constrain individual movement. Two impediments are primarily responsible for this paucity: it is difficult to separate the effects of habitat fragmentation (configuration) from habitat loss (composition), and conventional measures of fragmented habitats are assumed to be, but probably are not, isotropic. We addressed these limitations by standardizing differences in forest cover in a clearly anisotropic configuration of habitat fragmentation by conducting a homing experiment with three species of forest birds in the Bow Valley of Banff National Park, Canada. Birds were translocated (1.2-3.5 km) either parallel or perpendicular to four/five parallel barriers that are assumed to impede the cross-valley travel of forest-dependent animals. Taken together, individuals exhibited longer return times when they were translocated across these barriers, but differences among species suggest a more complex interpretation. A long-distance migrant (Yellow-rumped Warbler, Dendroica coronata) behaved as predicted, but a short-distance migrant (Golden-crowned Kinglet, Regulus satrapa) was indifferent to barrier configuration. A resident (Red-breasted Nuthatch, Sitta canadensis) exhibited longer return times when it was translocated parallel to the barriers. Our results suggest that an anisotropic arrangement of small, open areas in fragmented landscapes can have a cumulative barrier effect on the movement of forest animals, but that both modelers and managers will have to acknowledge potentially counterintuitive differences among species to predict the effect that these may have on individual movement and, ultimately, dispersal.

  10. Measuring a fair and ambitious climate agreement using cumulative emissions

    International Nuclear Information System (INIS)

    Peters, Glen P; Andrew, Robbie M; Solomon, Susan; Friedlingstein, Pierre

    2015-01-01

    Policy makers have called for a ‘fair and ambitious’ global climate agreement. Scientific constraints, such as the allowable carbon emissions to avoid exceeding a 2 °C global warming limit with 66% probability, can help define ambitious approaches to climate targets. However, fairly sharing the mitigation challenge to meet a global target involves human values rather than just scientific facts. We develop a framework based on cumulative emissions of carbon dioxide to compare the consistency of countries’ current emission pledges to the ambition of keeping global temperatures below 2 °C, and, further, compare two alternative methods of sharing the remaining emission allowance. We focus on the recent pledges and other official statements of the EU, USA, and China. The EU and US pledges are close to a 2 °C level of ambition only if the remaining emission allowance is distributed based on current emission shares, which is unlikely to be viewed as ‘fair and ambitious’ by others who presently emit less. China’s stated emissions target also differs from measures of global fairness, owing to emissions that continue to grow into the 2020s. We find that, combined, the EU, US, and Chinese pledges leave little room for other countries to emit CO2 if a 2 °C limit is the objective, essentially requiring all other countries to move towards per capita emissions 7 to 14 times lower than the EU, USA, or China by 2030. We argue that a fair and ambitious agreement for a 2 °C limit that would be globally inclusive and effective in the long term will require stronger mitigation than the goals currently proposed. Given such necessary and unprecedented mitigation and the current lack of availability of some key technologies, we suggest a new diplomatic effort directed at ensuring that the necessary technologies become available in the near future. (letter)
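The two sharing rules compared in this record can be sketched numerically; the budget, emissions and population figures below are invented for illustration, not the paper's values:

```python
# Sketch: dividing a remaining global CO2 budget under two sharing rules,
# mirroring the paper's comparison of allocation by current emission
# shares vs. an equal per-capita allocation. All numbers are illustrative.

def share_by_current_emissions(budget, emissions):
    """Allocate the budget in proportion to each region's current emissions."""
    total = sum(emissions.values())
    return {r: budget * e / total for r, e in emissions.items()}

def share_by_population(budget, population):
    """Allocate the budget in proportion to population (equal per capita)."""
    total = sum(population.values())
    return {r: budget * p / total for r, p in population.items()}

budget = 1000.0                                 # GtCO2 remaining, illustrative
emissions = {"A": 10.0, "B": 6.0, "C": 4.0}     # GtCO2/yr, illustrative
population = {"A": 0.3, "B": 0.5, "C": 1.2}     # billions, illustrative

grandfathered = share_by_current_emissions(budget, emissions)
egalitarian = share_by_population(budget, population)
# A high-emitting, low-population region gets far more under the first rule:
print(grandfathered["A"], egalitarian["A"])  # 500.0 150.0
```

Comparing the two allocations for the same region makes the fairness dispute concrete: the choice of rule, not the science, drives the difference.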

  11. Downstream cumulative effects of land use on freshwater communities

    Science.gov (United States)

    Kuglerová, L.; Kielstra, B. W.; Moore, D.; Richardson, J. S.

    2015-12-01

    Many streams and rivers are subject to disturbance from intense land use such as urbanization and agriculture, and this is especially obvious for small headwaters. Streams are spatially organized into networks where headwaters represent the tributaries and provide water, nutrients, and organic material to the main stems. Therefore perturbations within the headwaters might be cumulatively carried on downstream. Although we know that the disturbance of headwaters in urban and agricultural landscapes poses threats to downstream river reaches, the magnitude and severity of these changes for ecological communities is less known. We studied stream networks along a gradient of disturbance connected to land use intensity, from urbanized watersheds to watersheds placed in agricultural settings in the Greater Toronto Area. Further, we compared the patterns and processes found in the modified watershed to a control watershed, situated in a forested, less impacted landscape. Preliminary results suggest that hydrological modifications (flash floods), habitat loss (drainage and sewer systems), and water quality issues of small streams in urbanized and agricultural watersheds represent major disturbances and threats for aquatic and riparian biota on local as well as larger spatial scales. For example, communities of riparian plants are dominated by species typical of the land use on adjacent uplands as well as the dominant land use on the upstream contributing area, instead of riparian obligates commonly found in forested watersheds. Further, riparian communities in disturbed environments are dominated by invasive species. The changes in riparian communities are vital for various functions of riparian vegetation. Bank erosion control is suppressed, leading to severe channel transformations and sediment loadings in urbanized watersheds. Food sources for instream biota and thermal regimes are also changed, which further triggers alterations of in-stream biological communities

  12. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
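The linear mixture model underlying this decomposition can be sketched as follows; the GME machinery itself (maximizing joint entropies of concentration, cracking and noise probabilities) is not reproduced here, and the cracking patterns are hypothetical:

```python
# Minimal sketch of the linear model behind mass-spectrum decomposition:
# measured spectrum d = A @ x, where column j of A is the (normalized)
# cracking pattern of gas j and x holds the unknown concentrations.
# Here we just solve the noise-free 2-gas case exactly by Cramer's rule;
# the cracking patterns below are made up for illustration.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - c * e) / det)

# Hypothetical cracking patterns at two mass channels for gases 1 and 2:
A = [[0.8, 0.3],
     [0.2, 0.7]]
x_true = (0.6, 0.4)  # true concentrations
d0 = A[0][0] * x_true[0] + A[0][1] * x_true[1]
d1 = A[1][0] * x_true[0] + A[1][1] * x_true[1]

x_est = solve_2x2(A[0][0], A[0][1], A[1][0], A[1][1], d0, d1)
print(x_est)  # recovers approximately (0.6, 0.4) in the noise-free case
```

With noisy, underdetermined data this direct inversion breaks down, which is exactly the regime where the entropy-based regularization of GME earns its keep.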

  13. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  14. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  15. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outlier with respect to the bulk of drawdown price movement distribution. This paper goes on deeper in the analysis providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movement of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.
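The drawdown statistic examined in this record can be sketched as follows; the price series is illustrative:

```python
# Sketch of the statistics used in the abstract: a drawdown is a
# peak-to-trough decline, and the maximum drawdown is the largest such
# decline over the whole series, here as a fraction of the peak.

def max_drawdown(prices):
    """Largest relative peak-to-trough fall, as a fraction of the peak."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                  # running maximum so far
        worst = max(worst, (peak - p) / peak)
    return worst

prices = [100, 120, 90, 110, 140, 70, 130]
print(max_drawdown(prices))  # 0.5: the fall from 140 to 70
```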

  16. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  17. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  18. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  19. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  20. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  1. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
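The MAP selection rule this decoder builds on can be sketched as follows; the Gaussian noise model and the candidate signals are illustrative assumptions, not the patented estimator-correlator itself:

```python
import math

# Sketch of maximum a posteriori hypothesis selection: choose the
# hypothesized signal s maximizing p(s) * p(data | s), i.e. the highest
# log-prior plus log-likelihood. The i.i.d. Gaussian noise model and the
# candidate signals below are illustrative, not the invention's model.

def log_likelihood(data, signal, sigma=1.0):
    """Log of p(data | signal) under i.i.d. Gaussian noise (up to a constant)."""
    return sum(-((d - s) ** 2) / (2 * sigma ** 2) for d, s in zip(data, signal))

def map_decode(data, hypotheses, priors):
    """Return the hypothesis name with the highest posterior score."""
    return max(hypotheses,
               key=lambda h: math.log(priors[h]) + log_likelihood(data, hypotheses[h]))

hypotheses = {"s0": [1.0, 1.0, -1.0], "s1": [-1.0, 1.0, 1.0]}
priors = {"s0": 0.5, "s1": 0.5}
received = [0.9, 1.2, -0.7]  # noisy version of s0
print(map_decode(received, hypotheses, priors))  # s0
```

With equal priors this reduces to maximum likelihood; unequal priors shift the decision toward the more probable transmission, which is the MAP refinement.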

  2. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits modeling of biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  3. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
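One standard instance of ancestral state inference by Maximum Parsimony, Fitch's small-parsimony algorithm for a single alignment column on a fixed rooted binary tree, can be sketched as:

```python
# Fitch's small-parsimony algorithm for one site on a fixed rooted binary
# tree: a classic method of the kind the abstract refers to. Trees are
# nested tuples; leaves are observed character states (strings).

def fitch(node):
    """Return (most-parsimonious state set at node, substitution count)."""
    if isinstance(node, str):          # leaf: the observed state
        return {node}, 0
    left, right = node
    ls, lc = fitch(left)
    rs, rc = fitch(right)
    inter = ls & rs
    if inter:                          # children agree: no new substitution
        return inter, lc + rc
    return ls | rs, lc + rc + 1        # conflict: one substitution needed

# ((A, A), (A, C)): the root's most parsimonious state set is {A},
# with one substitution on the branch leading to the C leaf.
states, cost = fitch((("A", "A"), ("A", "C")))
print(states, cost)  # {'A'} 1
```

The bottom-up pass shown here computes the parsimony score; a top-down refinement pass then fixes concrete ancestral states, which is the "ancestral sequence reconstruction" step the abstract discusses.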

  4. Cumulative IT Use Is Associated with Psychosocial Stress Factors and Musculoskeletal Symptoms

    Directory of Open Access Journals (Sweden)

    Billy C. L. So

    2017-12-01

    Full Text Available This study aimed to examine the relationship between cumulative use of electronic devices and musculoskeletal symptoms. Smartphones and tablet computers are very popular and people may own or operate several devices at the same time. High prevalence rates of musculoskeletal symptoms associated with intensive computer use have been reported. However, research focusing on mobile devices is only just emerging in recent years. In this study, 285 persons participated including 140 males and 145 females (age range 18–50). The survey consisted of self-reported estimation of daily information technology (IT) exposure hours, tasks performed, psychosocial stress factors and relationship to musculoskeletal discomfort in the past 12 months. Total IT exposure time was an average of 7.38 h (±5.2) per day. The psychosocial factor of "working through pain" showed the most significant association with odds ratio (OR) ranging from 1.078 (95% CI = 1.021–1.138) for elbow discomfort, to 1.111 (95% CI = 1.046–1.180) for shoulder discomfort. Desktop time was also significantly associated with wrist/hand discomfort (OR = 1.103). These findings indicate only a modest relationship but one that is statistically significant with accounting for confounders. It is anticipated that prevalence rates of musculoskeletal disorders would rise in the future with increasing contribution due to psychosocial stress factors.

  5. Cumulative IT Use Is Associated with Psychosocial Stress Factors and Musculoskeletal Symptoms.

    Science.gov (United States)

    So, Billy C L; Cheng, Andy S K; Szeto, Grace P Y

    2017-12-08

    This study aimed to examine the relationship between cumulative use of electronic devices and musculoskeletal symptoms. Smartphones and tablet computers are very popular and people may own or operate several devices at the same time. High prevalence rates of musculoskeletal symptoms associated with intensive computer use have been reported. However, research focusing on mobile devices is only just emerging in recent years. In this study, 285 persons participated including 140 males and 145 females (age range 18-50). The survey consisted of self-reported estimation of daily information technology (IT) exposure hours, tasks performed, psychosocial stress factors and relationship to musculoskeletal discomfort in the past 12 months. Total IT exposure time was an average of 7.38 h (±5.2) per day. The psychosocial factor of "working through pain" showed the most significant association with odds ratio (OR) ranging from 1.078 (95% CI = 1.021-1.138) for elbow discomfort, to 1.111 (95% CI = 1.046-1.180) for shoulder discomfort. Desktop time was also significantly associated with wrist/hand discomfort (OR = 1.103). These findings indicate only a modest relationship but one that is statistically significant with accounting for confounders. It is anticipated that prevalence rates of musculoskeletal disorders would rise in the future with increasing contribution due to psychosocial stress factors.

  6. Family Income, Cumulative Risk Exposure, and White Matter Structure in Middle Childhood

    Directory of Open Access Journals (Sweden)

    Alexander J. Dufford

    2017-11-01

    Full Text Available Family income is associated with gray matter morphometry in children, but little is known about the relationship between family income and white matter structure. In this paper, using Tract-Based Spatial Statistics, a whole brain, voxel-wise approach, we examined the relationship between family income (assessed by income-to-needs ratio) and white matter organization in middle childhood (N = 27, M = 8.66 years). Results from a non-parametric, voxel-wise, multiple regression (threshold-free cluster enhancement, p < 0.05 FWE corrected) indicated that lower family income was associated with lower white matter organization [assessed by fractional anisotropy (FA)] for several clusters in white matter tracts involved in cognitive and emotional functions including fronto-limbic circuitry (uncinate fasciculus and cingulum bundle), association fibers (inferior longitudinal fasciculus, superior longitudinal fasciculus), and corticospinal tracts. Further, we examined the possibility that cumulative risk (CR) exposure might function as one of the potential pathways by which family income influences neural outcomes. Using multiple regressions, we found lower FA in portions of these tracts, including those found in the left cingulum bundle and left superior longitudinal fasciculus, was significantly related to greater exposure to CR (β = -0.47, p < 0.05 and β = -0.45, p < 0.05).

  7. Education, income and ethnic differences in cumulative biological risk profiles in a national sample of US adults: NHANES III (1988-1994).

    Science.gov (United States)

    Seeman, Teresa; Merkin, Sharon S; Crimmins, Eileen; Koretz, Brandon; Charette, Susan; Karlamangla, Arun

    2008-01-01

    Data from the nationally representative US National Health and Nutrition Examination Survey (NHANES) III cohort were used to examine the hypothesis that socio-economic status is consistently and negatively associated with levels of biological risk, as measured by nine biological parameters known to predict health risks (diastolic and systolic blood pressure, pulse, HDL and total cholesterol, glycosylated hemoglobin, c-reactive protein, albumin and waist-hip ratio), resulting in greater cumulative burdens of biological risk among those of lower education and/or income. As hypothesized, consistent education and income gradients were seen for biological parameters reflecting cardiovascular, metabolic and inflammatory risk: those with lower education and income exhibiting greater prevalence of high-risk values for each of nine individual biological risk factors. Significant education and income gradients were also seen for summary indices reflecting cumulative burdens of cardiovascular, metabolic and inflammatory risks as well as overall total biological risks. Multivariable cumulative logistic regression models revealed that the education and income effects were each independently and negatively associated with cumulative biological risks, and that these effects remained significant independent of age, gender, ethnicity and lifestyle factors such as smoking and physical activity. There were no significant ethnic differences in the patterns of association between socio-economic status and biological risks, but older age was associated with significantly weaker education and income gradients.
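The cumulative-burden summary index described here can be sketched as a count of biomarkers falling in the high-risk range; the cut-points and values below are invented placeholders, not the NHANES III values:

```python
# Sketch of a cumulative biological risk index: count how many of an
# individual's biomarkers fall in the "high risk" range. The thresholds
# and measurements below are hypothetical illustrations only.

HIGH_RISK = {                      # hypothetical cut-points
    "systolic_bp": 140,            # mmHg, high risk if >= threshold
    "glycosylated_hgb": 6.4,       # %, high risk if >= threshold
    "crp": 3.0,                    # mg/L, high risk if >= threshold
}

def cumulative_risk(measurements, thresholds=HIGH_RISK):
    """Number of parameters in the high-risk range (0..len(thresholds))."""
    return sum(1 for name, cut in thresholds.items()
               if measurements.get(name, 0) >= cut)

person = {"systolic_bp": 150, "glycosylated_hgb": 5.2, "crp": 4.1}
print(cumulative_risk(person))  # 2 of 3 markers in the high-risk range
```

In the actual study nine parameters contribute, and markers such as HDL cholesterol count as high risk when *low*, so a real implementation needs per-marker directions rather than a single >= rule.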

  8. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.
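Restricted to vertex pairs, the detour quantity can be computed by brute force; this Floyd-Warshall sketch is far slower than the paper's algorithms and, unlike the paper's definition, ignores detours attained at interior points of edges:

```python
import math

# Brute-force sketch of the detour quantity, restricted to vertex pairs:
# detour(p, q) = d_G(p, q) / |pq|, maximized over all pairs. This O(n^3)
# Floyd-Warshall illustration is only a baseline, not the paper's method.

def max_vertex_detour(points, edges):
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for i, j in edges:                     # edge length = Euclidean distance
        w = math.dist(points[i], points[j])
        d[i][j] = d[j][i] = w
    for k in range(n):                     # all-pairs shortest paths
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    best = 1.0                             # detour is at least 1 along an edge
    for i in range(n):
        for j in range(i + 1, n):
            best = max(best, d[i][j] / math.dist(points[i], points[j]))
    return best

# A two-edge path bent into an L: graph distance 2, straight-line sqrt(2).
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
edges = [(0, 1), (1, 2)]
print(max_vertex_detour(points, edges))  # about 1.414, i.e. sqrt(2)
```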

  9. Cumulative effects of planned industrial development and climate change on marine ecosystems

    Directory of Open Access Journals (Sweden)

    Cathryn Clarke Murray

    2015-07-01

    Full Text Available With increasing human population, large scale climate changes, and the interaction of multiple stressors, understanding cumulative effects on marine ecosystems is increasingly important. Two major drivers of change in coastal and marine ecosystems are industrial developments with acute impacts on local ecosystems, and global climate change stressors with widespread impacts. We conducted a cumulative effects mapping analysis of the marine waters of British Columbia, Canada, under different scenarios: climate change and planned developments. At the coast-wide scale, climate change drove the largest change in cumulative effects with both widespread impacts and high vulnerability scores. Where the impacts of planned developments occur, planned industrial and pipeline activities had high cumulative effects, but the footprint of these effects was comparatively localized. Nearshore habitats were at greatest risk from planned industrial and pipeline activities; in particular, the impacts of planned pipelines on rocky intertidal habitats were predicted to cause the highest change in cumulative effects. This method of incorporating planned industrial development in cumulative effects mapping allows explicit comparison of different scenarios with the potential to be used in environmental impact assessments at various scales. Its use allows resource managers to consider cumulative effect hotspots when making decisions regarding industrial developments and avoid unacceptable cumulative effects. Management needs to consider both global and local stressors in managing marine ecosystems for the protection of biodiversity and the provisioning of ecosystem services.
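Cumulative-effects mapping of this kind typically scores each site by summing stressor intensity times a habitat-specific vulnerability weight; the stressors, weights and intensities below are illustrative, not the study's data:

```python
# Sketch of a standard cumulative-effects score: each site sums, over the
# stressors present, the stressor's intensity times a habitat-specific
# vulnerability weight. All stressor names, weights and intensities here
# are invented placeholders, not values from the British Columbia study.

VULNERABILITY = {                     # hypothetical weights for one habitat
    "shipping": 0.5,
    "pipeline": 1.0,
    "climate_change": 2.0,
}

def cumulative_score(intensities, weights=VULNERABILITY):
    """Sum of intensity * vulnerability over all stressors present."""
    return sum(intensities.get(s, 0.0) * w for s, w in weights.items())

current = {"shipping": 1.0, "climate_change": 0.75}
with_development = {"shipping": 1.0, "pipeline": 0.5, "climate_change": 0.75}
# Comparing scenarios isolates the planned development's contribution:
print(cumulative_score(current), cumulative_score(with_development))  # 2.0 2.5
```

Running the same scoring under a current and a planned-development scenario, as the abstract describes, lets the difference map show exactly where the new activities raise cumulative effects.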

  10. Cumulative effective dose associated with radiography and CT of adolescents with spinal injuries.

    Science.gov (United States)

    Lemburg, Stefan P; Peters, Soeren A; Roggenland, Daniela; Nicolas, Volkmar; Heyer, Christoph M

    2010-12-01

    The purpose of this study was to analyze the quantity and distribution of cumulative effective doses in diagnostic imaging of adolescents with spinal injuries. At a level 1 trauma center from July 2003 through June 2009, imaging procedures during initial evaluation and hospitalization and after discharge of all patients 10-20 years old with spinal fractures were retrospectively analyzed. The cumulative effective doses for all imaging studies were calculated, and the doses to patients with spinal injuries who had multiple traumatic injuries were compared with the doses to patients with spinal injuries but without multiple injuries. The significance level was set at 5%. Imaging studies of 72 patients (32 with multiple injuries; average age, 17.5 years) entailed a median cumulative effective dose of 18.89 mSv. Patients with multiple injuries had a significantly higher total cumulative effective dose (29.70 versus 10.86 mSv), driven largely by the higher cumulative effective dose they received during the initial evaluation (18.39 versus 2.83 mSv). Adolescents with spinal injuries receive a cumulative effective dose equal to that of adult trauma patients and nearly three times that of pediatric trauma patients. Areas of focus in lowering cumulative effective dose should be appropriate initial estimation of trauma severity and careful selection of CT scan parameters.

  11. Turning stumbling blocks into stepping stones in the analysis of cumulative impacts

    Science.gov (United States)

    Leslie M. Reid

    2004-01-01

    Federal and state legislation, such as the National Environmental Policy Act and the California Environmental Quality Act, require that responsible agency staff consider the cumulative impacts of proposed activities before permits are issued for certain kinds of public or private projects. The Council on Environmental Quality (CEQ 1997) defined a cumulative impact as...

  12. 14 CFR Section 18 - Objective Classification-Cumulative Effect of Changes in Accounting Principles

    Science.gov (United States)

    2010-01-01

    ... of Changes in Accounting Principles Section 18 Aeronautics and Space OFFICE OF THE... Objective Classification—Cumulative Effect of Changes in Accounting Principles. Record here the difference between the amount of retained earnings at...

  13. The Scarring Effects of Bankruptcy: Cumulative Disadvantage across Credit and Labor Markets

    Science.gov (United States)

    Maroto, Michelle

    2012-01-01

    As the recent economic crisis has demonstrated, inequality often spans credit and labor markets, supporting a system of cumulative disadvantage. Using data from the National Longitudinal Survey of Youth, this research draws on stigma, cumulative disadvantage and status characteristics theories to examine whether credit and labor markets intersect…

  14. Mapping cumulative environmental risks: examples from the EU NoMiracle project

    NARCIS (Netherlands)

    Pistocchi, A.; Groenwold, J.; Lahr, J.; Loos, M.; Mujica, M.; Ragas, A.M.J.; Rallo, R.; Sala, S.; Schlink, U.; Strebel, K.; Vighi, M.; Vizcaino, P.

    2011-01-01

    We present examples of cumulative chemical risk mapping methods developed within the NoMiracle project. The different examples illustrate the application of the concentration addition (CA) approach to pesticides at different scale, the integration in space of cumulative risks to individual organisms

  15. 78 FR 25440 - Request for Information and Citations on Methods for Cumulative Risk Assessment

    Science.gov (United States)

    2013-05-01

    ... Citations on Methods for Cumulative Risk Assessment AGENCY: Office of the Science Advisor, Environmental... influence exposures, dose-response or risk/hazard posed by environmental contaminant exposures, and methods... who wish to receive further information about submitting information on methods for cumulative risk...

  16. Radiologic imaging in cystic fibrosis: cumulative effective dose and changing trends over 2 decades.

    LENUS (Irish Health Repository)

    O'Connell, Oisin J

    2012-06-01

    With the increasing life expectancy for patients with cystic fibrosis (CF), and a known predisposition to certain cancers, cumulative radiation exposure from radiologic imaging is of increasing significance. This study explores the estimated cumulative effective radiation dose over a 17-year period from radiologic procedures and changing trends of imaging modalities over this period.

  17. 30 CFR 250.921 - How do I analyze my platform for cumulative fatigue?

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How do I analyze my platform for cumulative fatigue? 250.921 Section 250.921 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR... fatigue? (a) If you are required to analyze cumulative fatigue on your platform because of the results of...

  18. A Review of Non-Chemical Stressors and Their Importance in Cumulative Risk Assessment

    Science.gov (United States)

    Cumulative exposure/risk assessments need to include non-chemical stressors as well as human activities and chemical data. Multiple stressor research can offer information on the interactions between chemical and non-chemical stressors needed for cumulative risk assessment resea...

  19. Ten-Year Cumulative Author Index Volume 2001, 36(1) through 2010, 45(4)

    Science.gov (United States)

    Zucker, Stanley H.; Hassert, Silva

    2011-01-01

    This cumulative author index was developed as a service for the readership of Education and Training in Autism and Developmental Disabilities. It was prepared as a resource for scholars wishing to access the 391 articles published in volumes 36-45 of this journal. It also serves as a timely supplement to the 25-year (1966-1990) cumulative author…

  20. TREND: a program using cumulative sum methods to detect long-term trends in data

    International Nuclear Information System (INIS)

    Cranston, R.J.; Dunbar, R.M.; Jarvis, R.G.

    1976-01-01

    TREND is a computer program, in FORTRAN, to investigate data for long-term trends that are masked by short-term statistical fluctuations. To do this, it calculates and plots the cumulative sum of deviations from a chosen mean. As a further aid to diagnosis, the procedure can be repeated with a summation of the cumulative sum itself. (author)
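TREND's procedure — a cumulative sum of deviations from a chosen mean, optionally summed a second time — can be sketched in a few lines. The original program is FORTRAN; this is an illustrative re-implementation, not the program itself:

```python
def cusum(data, mean=None):
    """Cumulative sum of deviations from a chosen mean; a persistent
    change of slope flags a long-term trend masked by short-term
    statistical fluctuations."""
    if mean is None:
        mean = sum(data) / len(data)
    total, out = 0.0, []
    for x in data:
        total += x - mean
        out.append(total)
    return out

# A small level shift hidden halfway through the series: the CUSUM
# slope changes sign at the change point.
data = [10.0] * 5 + [10.5] * 5
s1 = cusum(data)
s2 = cusum(s1)   # summation of the cumulative sum itself, as a further diagnostic
```

Plotting `s1` rather than `data` turns a subtle level shift into a visible V-shaped kink at the change point.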

  1. Energy Profit Ratio Compared

    International Nuclear Information System (INIS)

    Amano, Osamu

    2007-01-01

We need more oil energy to take oil out of the ground. Limited resources make us consider other candidate energy sources instead of oil. Electricity will play an ever larger role, as in electric vehicles and air conditioners, so we should consider how electricity is generated. When judging which kind of electric power generation is best or most suitable, we should consider not only the power generation plant but the whole process from mining to power generation. EPR, the Energy Profit Ratio, is a good way to analyze which type is more efficient, and the input breakdown analysis shows where research and development effort should go. Electricity from light water nuclear power plants, hydro power plants and geothermal power plants are the better candidates from EPR analysis. Forecasting the world primary energy supply in 2050, it is said that demand will be double that of 2000 and that supply will not be able to satisfy it. We should save 30% of the demand, increase nuclear power plants 3.5-fold, and increase renewable energy such as hydropower plants 3-fold. When nuclear power plants increase 3.5-fold, the uranium peak will come and we will need to breed uranium, so I analyze the EPR of the FBR. Conclusions: A) the EPR of NPS in Japan is 17.4, the best of all. B) Many countries will introduce new nuclear power plants rapidly, perhaps 3.5 times more by 2050. C) The uranium peak will happen around 2050. (author)
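The EPR reduces to a simple ratio of lifetime energy output to the summed life-cycle energy inputs. A minimal sketch with hypothetical figures (not the paper's data) shows how the input breakdown points at where R&D effort pays off:

```python
# Hypothetical life-cycle energy inputs for one plant (PJ over its
# lifetime); the figures are illustrative, not the paper's data.
inputs = {
    "mining_and_enrichment": 30.0,
    "construction": 15.0,
    "operation_and_maintenance": 10.0,
    "decommissioning": 5.0,
}
lifetime_output = 1000.0   # PJ of electricity delivered over the lifetime

epr = lifetime_output / sum(inputs.values())   # Energy Profit Ratio
dominant = max(inputs, key=inputs.get)         # the stage to target with R&D
```

A higher EPR means less energy is consumed per unit delivered; the dominant input stage is where efficiency improvements move the ratio most.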

  2. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
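The maximum entropy principle the paper appeals to can be made concrete numerically: among all distributions on a finite set of values satisfying a mean constraint, the entropy maximiser has Gibbs form p_i ∝ exp(λ·x_i), and the multiplier λ can be found by bisection since the mean is monotone in λ. A minimal sketch (illustrative, not the paper's machinery):

```python
import math

def maxent_with_mean(xs, target_mean, tol=1e-10):
    """Maximum-entropy distribution on finite support xs with a fixed mean.

    The solution has the Gibbs form p_i ∝ exp(lam * x_i); the mean is
    strictly increasing in lam, so bisection recovers the multiplier."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

# With the mean constrained to the midpoint of the support, equivocation
# wins: the maximum-entropy answer is the uniform distribution.
p = maxent_with_mean([1, 2, 3], 2.0)
```

Tightening the mean constraint away from the midpoint tilts the distribution exponentially while still equivocating as much as the evidence allows.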

  3. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  4. Cumulative high doses of inhaled formoterol have less systemic effects in asthmatic children 6-11 years-old than cumulative high doses of inhaled terbutaline.

    Science.gov (United States)

    Kaae, Rikke; Agertoft, Lone; Pedersen, Sören; Nordvall, S Lennart; Pedroletti, Christophe; Bengtsson, Thomas; Johannes-Hellberg, Ingegerd; Rosenborg, Johan

    2004-10-01

    To evaluate high dose tolerability and relative systemic dose potency between inhaled clinically equipotent dose increments of formoterol and terbutaline in children. Twenty boys and girls (6-11 years-old) with asthma and normal ECGs were studied. Ten doses of formoterol (Oxis) 4.5 microg (F4.5) or terbutaline (Bricanyl) 500 microg (T500) were inhaled cumulatively via a dry powder inhaler (Turbuhaler) over 1 h (three patients) or 2.5 h (17 patients) and compared to a day of no treatment, in a randomised, double-blind (active treatments only), crossover trial. Blood pressure (BP), ECG, plasma potassium, glucose, lactate, and adverse events were monitored up to 10 h to assess tolerability and relative systemic dose potency. Formoterol and terbutaline had significant beta2-adrenergic effects on most outcomes. Apart from the effect on systolic BP, QRS duration and PR interval, the systemic effects were significantly more pronounced with terbutaline than with formoterol. Thus, mean minimum plasma potassium, was suppressed from 3.56 (95% confidence interval, CI: 3.48-3.65) mmol l(-1) on the day of no treatment to 2.98 (CI: 2.90-3.08) after 10 x F4.5 and 2.70 (CI: 2.61-2.78) mmol l(-1) after 10 x T500, and maximum Q-Tc (heart rate corrected Q-T interval [Bazett's formula]) was prolonged from 429 (CI: 422-435) ms on the day of no treatment, to 455 (CI: 448-462) ms after 10 x F4.5 and 470 (CI: 463-476) ms after 10 x T500. Estimates of relative dose potency indicated that F4.5 microg had the same systemic activity as the clinically less effective dose of 250 microg terbutaline. The duration of systemic effects differed marginally between treatments. Spontaneously reported adverse events (most frequently tremor) were fewer with formoterol (78% of the children) than with terbutaline (95%). A serious adverse event occurred after inhalation of 45 microg formoterol over the 1 h dosing time, that prompted the extension of dosing time to 2.5 h. Multiple inhalations over 2.5 h of

  5. Cumulative high doses of inhaled formoterol have less systemic effects in asthmatic children 6–11 years-old than cumulative high doses of inhaled terbutaline

    Science.gov (United States)

    Kaae, Rikke; Agertoft, Lone; Pedersen, Sören; Nordvall, S Lennart; Pedroletti, Christophe; Bengtsson, Thomas; Johannes-Hellberg, Ingegerd; Rosenborg, Johan

    2004-01-01

    Objectives To evaluate high dose tolerability and relative systemic dose potency between inhaled clinically equipotent dose increments of formoterol and terbutaline in children. Methods Twenty boys and girls (6–11 years-old) with asthma and normal ECGs were studied. Ten doses of formoterol (Oxis®) 4.5 µg (F4.5) or terbutaline (Bricanyl®) 500 µg (T500) were inhaled cumulatively via a dry powder inhaler (Turbuhaler®) over 1 h (three patients) or 2.5 h (17 patients) and compared to a day of no treatment, in a randomised, double-blind (active treatments only), crossover trial. Blood pressure (BP), ECG, plasma potassium, glucose, lactate, and adverse events were monitored up to 10 h to assess tolerability and relative systemic dose potency. Results Formoterol and terbutaline had significant β2-adrenergic effects on most outcomes. Apart from the effect on systolic BP, QRS duration and PR interval, the systemic effects were significantly more pronounced with terbutaline than with formoterol. Thus, mean minimum plasma potassium, was suppressed from 3.56 (95% confidence interval, CI: 3.48–3.65) mmol l−1 on the day of no treatment to 2.98 (CI: 2.90–3.08) after 10 × F4.5 and 2.70 (CI: 2.61–2.78) mmol l−1 after 10 × T500, and maximum Q-Tc (heart rate corrected Q-T interval [Bazett's formula]) was prolonged from 429 (CI: 422–435) ms on the day of no treatment, to 455 (CI: 448–462) ms after 10 × F4.5 and 470 (CI: 463–476) ms after 10 × T500. Estimates of relative dose potency indicated that F4.5 µg had the same systemic activity as the clinically less effective dose of 250 µg terbutaline. The duration of systemic effects differed marginally between treatments. Spontaneously reported adverse events (most frequently tremor) were fewer with formoterol (78% of the children) than with terbutaline (95%). A serious adverse event occurred after inhalation of 45 µg formoterol over the 1 h dosing time, that prompted the extension of dosing time to 2.5 h
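Bazett's formula used in both reports is QTc = QT/√RR. A small helper makes the correction explicit (the values below are illustrative, not the trial's raw data):

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Heart-rate-corrected QT interval (Bazett's formula):
    QTc = QT / sqrt(RR), with QT in milliseconds and the RR interval
    (time between beats) in seconds."""
    return qt_ms / math.sqrt(rr_s)

# At 60 beats/min the RR interval is 1 s and no correction is applied;
# at a faster heart rate the same measured QT corrects upward.
qtc_rest = qtc_bazett(400.0, 1.0)
qtc_fast = qtc_bazett(400.0, 0.6)
```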

  6. Clinopyroxenite dikes crosscutting banded peridotites just above the metamorphic sole in the Oman ophiolite: early cumulates from the primary V3 lava

    Science.gov (United States)

    Ishimaru, Satoko; Arai, Shoji; Tamura, Akihiro

    2013-04-01

The Oman ophiolite is well known for excellent exposures not only of the mantle section but also of the crustal section, including effusive rocks and the underlying metamorphic rocks. In the Oman ophiolite, three types of effusive rocks (V1, V2 and V3, from the lower sequences) are recognized: V1, MORB-like magma; V2, island-arc type lava; and V3, intra-plate lava (Godard et al., 2003 and references therein). V1 and V2 lavas are dominant (> 95%) as effusive rocks and have been observed in almost all blocks of the northern part of the Oman ophiolite (Godard et al., 2003), but V3 lava has been reported only from the Salahi area (Alabaster et al., 1982). It is clear that there was a time gap of lava eruption between V1-2 and V3, based on the presence of pelagic sediments in between (Godard et al., 2003). In addition, V3 lavas are fed by a series of doleritic dikes crosscutting the V2 lava (Alley unit) (Alabaster et al., 1982). We found clinopyroxenite (CPXITE) dikes crosscutting the deformation structure of basal peridotites just above the metamorphic sole in Wadi Ash Shiyah. The sole metamorphic rock is garnet amphibolite, which overlies the banded and deformed harzburgite and dunite. The CPXITE is composed of coarse clinopyroxene (CPX) with minor amounts of chlorite, garnet (hydrous/anhydrous grossular-andradite) with inclusions of titanite, and serpentine formed at a later low-temperature stage. The width of the CPXITE dikes is 2-5 cm (10 cm at maximum), and the dikes contain small blocks of wall harzburgite. Almost all the silicates are serpentinized in the harzburgite blocks except for some CPX. The Mg# (= Mg/(Mg + Fe) atomic ratio) of the CPX is almost constant (0.94-0.95) in the serpentinite blocks but varies within the dikes, highest at the contact with the block (0.94) and decreasing with distance from the contact to 0.81 (0.85 on average). The contents of Al2O3, Cr2O3, and TiO2 in the CPX of the dikes are 0.5-2.0, 0.2-0.6, and 0

  7. Maximum permissible concentration (MPC) values for spontaneously fissioning radionuclides

    International Nuclear Information System (INIS)

    Ford, M.R.; Snyder, W.S.; Dillman, L.T.; Watson, S.B.

    1976-01-01

The radiation hazards involved in handling certain of the transuranic nuclides that exhibit spontaneous fission as a mode of decay were reassessed using recent advances in dosimetry and metabolic modeling. Maximum permissible concentration (MPC) values in air and water for occupational exposure (168 hr/week) were calculated for 244Pu, 246Cm, 248Cm, 250Cf, 252Cf, 254Cf, 254mEs, 255Es, 254Fm, and 256Fm. The half-lives, branching ratios, and principal modes of decay of the parent-daughter members, down to a member that makes a negligible contribution to the dose, are given, and all daughters that make a significant contribution to the dose to body organs following inhalation or ingestion are included in the calculations. Dose commitments for body organs are also given

  8. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list, and some information asked from members of these lists (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
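For reference, the Gale-Shapley algorithm the abstract invokes as the model of a local, proposal-based procedure can be sketched as follows. This is the classic version for complete, strict preference lists — the baseline, not Király's 3/2-approximation for ties and incomplete lists:

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic proposer-optimal stable matching for complete, strict
    preference lists (the baseline the local 3/2-approximation builds on).

    men_prefs / women_prefs: dict name -> list of names, most preferred
    first.  Returns a dict mapping each man to his matched woman."""
    # Precompute each woman's ranking of the men for O(1) comparisons.
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # next index to propose to
    engaged = {}                              # woman -> current partner
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)            # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}
```

Each person acts only on their own list plus the rankings of those who propose to them — the locality property the abstract emphasises.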

  9. Factorial and cumulant moments in e+e- → hadrons at the Z0 resonance

    Energy Technology Data Exchange (ETDEWEB)

    The SLD Collaboration

    1995-06-01

The ratio of cumulant to factorial moments of the charged particle multiplicity distribution in hadronic Z0 decays has been measured in the SLD experiment at SLAC. The data were corrected for effects introduced by the detector. We find that this ratio, as a function of the moment rank q, decreases sharply to a negative minimum at q ≈ 5, followed by a sequence of quasi-oscillations. We show that the truncation of the tail of the multiplicity distribution due to finite statistics has only a small effect on this result. The observed features are in qualitative agreement with expectations from higher order perturbative QCD.
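The quantities involved can be computed directly from a multiplicity distribution: factorial moments F_q = ⟨n(n−1)···(n−q+1)⟩ and factorial cumulants K_q obtained via the standard recursion K_q = F_q − Σ_{m=1}^{q−1} C(q−1, m) K_{q−m} F_m. A sketch (the Poisson case is a correlation-free baseline check, not SLD data):

```python
from math import comb, exp

def factorial_moments(pn, qmax):
    """Unnormalised factorial moments F_q = <n(n-1)...(n-q+1)> of a
    multiplicity distribution given as a list pn with pn[n] = P(n)."""
    F = [1.0]
    for q in range(1, qmax + 1):
        total = 0.0
        for n, p in enumerate(pn):
            term = 1.0
            for k in range(q):
                term *= n - k     # falling factorial; zero when n < q
            total += p * term
        F.append(total)
    return F

def factorial_cumulants(F):
    """Factorial cumulants K_q from the moments via the recursion
    K_q = F_q - sum_{m=1}^{q-1} C(q-1, m) K_{q-m} F_m."""
    K = [0.0] * len(F)
    for q in range(1, len(F)):
        K[q] = F[q] - sum(comb(q - 1, m) * K[q - m] * F[m]
                          for m in range(1, q))
    return K

# A Poisson multiplicity distribution (mean 5, truncated at n = 80) has
# K_q = 0 for q >= 2, so the cumulant-to-factorial ratio H_q = K_q / F_q
# vanishes; the oscillations SLD measures signal genuine multi-particle
# correlations beyond this uncorrelated baseline.
pn, term = [], exp(-5.0)
for n in range(81):
    pn.append(term)
    term *= 5.0 / (n + 1)
F = factorial_moments(pn, 6)
K = factorial_cumulants(F)
H = [k / f for k, f in zip(K[1:], F[1:])]   # H[0] = H_1, H[1] = H_2, ...
```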

  10. Error Analysis on the Estimation of Cumulative Infiltration in Soil Using Green and AMPT Model

    Directory of Open Access Journals (Sweden)

    Muhamad Askari

    2006-08-01

The Green and Ampt infiltration model is still useful for describing the infiltration process because of the clear physical basis of the model and the existence of model parameter values for a wide range of soils. The objective of this study was to analyze the error in the estimation of cumulative infiltration in soil using the Green and Ampt model and to design a laboratory experiment for measuring cumulative infiltration. The parameters of the model were determined from soil physical properties obtained in the laboratory experiment. The Newton-Raphson method was used to estimate the wetting front during calculation, using Visual Basic for Applications (VBA) in MS Word. The results showed that  contributed the highest error in the estimation of cumulative infiltration, followed by K, H0, H1, and t respectively. They also showed that the calculated cumulative infiltration is always lower than both the measured cumulative infiltration and the volumetric soil water content.
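The Newton-Raphson step the abstract describes solves the implicit Green-Ampt relation K·t = F − ψΔθ·ln(1 + F/(ψΔθ)) for the cumulative infiltration F. A Python sketch of the same iteration (the study used VBA; the parameter values in the test are illustrative, not the study's soils):

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-10, max_iter=50):
    """Cumulative infiltration F(t) [cm] from the implicit Green-Ampt
    relation  K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)),
    solved for F by Newton-Raphson.

    t      : time [h]
    K      : saturated hydraulic conductivity [cm/h]
    psi    : wetting-front suction head [cm]
    dtheta : moisture deficit (porosity minus initial water content) [-]"""
    pd = psi * dtheta
    F = max(K * t, 1e-9)            # positive start: g'(0) = 0
    for _ in range(max_iter):
        g = F - pd * math.log(1.0 + F / pd) - K * t   # residual
        dg = F / (F + pd)                             # g'(F)
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F
```

Because g(F) is increasing and convex for F > 0, starting from F = K·t the iteration overshoots once and then converges monotonically to the root.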

  11. Managing regional cumulative effects of oil sands development in Alberta, Canada

    International Nuclear Information System (INIS)

    Spaling, H.; Zwier, J.

    2000-01-01

    This paper demonstrates an approach to regional cumulative effects management using the case of oil sands development in Alberta, Canada. The 17 existing, approved, or planned projects, all concentrated in a relatively small region, pose significant challenges for conducting and reviewing cumulative effects assessment (CEA) on a project-by-project basis. In response, stakeholders have initiated a regional cumulative effects management system that is among the first such initiatives anywhere. Advantages of this system include (1) more efficient gathering and sharing of information, including a common regional database, (2) setting acceptable regional environmental thresholds for all projects, (3) collaborative assessment of similar cumulative effects from related projects, (4) co-ordinated regulatory review and approval process for overlapping CEAs, and (5) institutional empowerment from a Regional Sustainable Development Strategy administered by a public authority. This case provides a model for integrating project-based CEA with regional management of cumulative effects. (author)

  12. Cumulative second-harmonic generation of Lamb waves propagating in a two-layered solid plate

    International Nuclear Information System (INIS)

    Xiang Yanxun; Deng Mingxi

    2008-01-01

    The physical process of cumulative second-harmonic generation of Lamb waves propagating in a two-layered solid plate is presented by using the second-order perturbation and the technique of nonlinear reflection of acoustic waves at an interface. In general, the cumulative second-harmonic generation of a dispersive guided wave propagation does not occur. However, the present paper shows that the second-harmonic of Lamb wave propagation arising from the nonlinear interaction of the partial bulk acoustic waves and the restriction of the three boundaries of the solid plates does have a cumulative growth effect if some conditions are satisfied. Through boundary condition and initial condition of excitation, the analytical expression of cumulative second-harmonic of Lamb waves propagation is determined. Numerical results show the cumulative effect of Lamb waves on second-harmonic field patterns. (classical areas of phenomenology)

  13. The Maximum Flux of Star-Forming Galaxies

    Science.gov (United States)

    Crocker, Roland M.; Krumholz, Mark R.; Thompson, Todd A.; Clutterbuck, Julie

    2018-04-01

The importance of radiation pressure feedback in galaxy formation has been extensively debated over the last decade. The regime of greatest uncertainty is in the most actively star-forming galaxies, where large dust columns can potentially produce a dust-reprocessed infrared radiation field with enough pressure to drive turbulence or eject material. Here we derive the conditions under which a self-gravitating, mixed gas-star disc can remain hydrostatic despite trapped radiation pressure. Consistently taking into account the self-gravity of the medium, the star- and dust-to-gas ratios, and the effects of turbulent motions not driven by radiation, we show that galaxies can achieve a maximum Eddington-limited star formation rate per unit area Σ̇*,crit ~ 10³ M⊙ pc⁻² Myr⁻¹, corresponding to a critical flux F*,crit ~ 10¹³ L⊙ kpc⁻², similar to previous estimates; higher fluxes eject mass in bulk, halting further star formation. Conversely, we show that in galaxies below this limit, our one-dimensional models imply simple vertical hydrostatic equilibrium and that radiation pressure is ineffective at driving turbulence or ejecting matter. Because the vast majority of star-forming galaxies lie below the maximum limit for typical dust-to-gas ratios, we conclude that infrared radiation pressure is likely unimportant for all but the most extreme systems on galaxy-wide scales. Thus, while radiation pressure does not explain the Kennicutt-Schmidt relation, it does impose an upper truncation on it. Our predicted truncation is in good agreement with the highest observed gas and star formation rate surface densities found both locally and at high redshift.

  14. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.

  15. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit for the maximum resolution achievable with a high resolution PET scanner, as well as for the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus they cannot be overcome with any particular reconstruction method. These effects prevent the spatial high frequency components of the imaged structures to be recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, like the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data as a limitation factor to yield high-resolution images in tomographs with small crystal sizes is outlined. These results have implications regarding how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners

  16. Synergistic effect of cumulative corticosteroid dose and immunosuppressants on avascular necrosis in patients with systemic lupus erythematosus.

    Science.gov (United States)

    Kwon, H H; Bang, S Y; Won, S; Park, Y; Yi, J H; Joo, Y B; Lee, H S; Bae, S C

    2018-01-01

    Objectives Avascular necrosis (AVN) is one of the most common causes of organ damage in patients with systemic lupus erythematosus (SLE) and often causes serious physical disability. The aims of this study were to investigate clinical risk factors associated with symptomatic AVN and to analyze their synergistic effects in a large SLE cohort in Korea. Methods Patients with SLE were enrolled and followed from 1998 to 2014 in the Hanyang BAE Lupus cohort, and damage was measured annually according to the Systemic Lupus International Collaborating Clinics/American College of Rheumatology Damage Index (SDI). AVN was confirmed by imaging study if patients had symptoms. To determine risk factors for AVN, clinical, laboratory and therapeutic variables were analyzed by logistic regression. Relative excess risk due to interaction (RERI), attributable proportion (AP), and synergy index (S) were calculated to measure interactions between significant variables. Results Among 1219 SLE patients, symptomatic AVN was the most common type of musculoskeletal damage (10.8%, n = 132). SLE patients with AVN showed an earlier onset age, demonstrated AVN more commonly in conjunction with certain other clinical manifestations such as renal and neuropsychiatric disorders, and received significantly higher total cumulative corticosteroid dose and immunosuppressive agents than did patients without AVN. However, in multivariable analysis, only two variables including use of a cumulative corticosteroid dose greater than 20 g (odds ratio (OR) 3.62, p = 0.015) and use of immunosuppressants including cyclophosphamide or mycophenolate mofetil (OR 4.51, p AVN. Patients with cumulative corticosteroid dose > 20 g and immunosuppressant use had a 15.44-fold increased risk for AVN, compared with patients without these risk factors ( p AVN in our Korean lupus cohort. Conclusions An individual risk assessment for AVN development should be made prior to and during treatment for SLE

  17. Pu-239 and Pu-240 inventories and Pu-240/ Pu-239 atom ratios in the water column off Sanriku, Japan.

    Science.gov (United States)

    Yamada, Masatoshi; Zheng, Jian; Aono, Tatsuo

    2013-04-01

    A magnitude 9.0 earthquake and subsequent tsunami occurred in the Pacific Ocean off northern Honshu, Japan, on 11 March 2011 which caused severe damage to the Fukushima Dai-ichi Nuclear Power Plant. This accident has resulted in a substantial release of radioactive materials to the atmosphere and ocean, and has caused extensive contamination of the environment. However, no information is available on the amounts of radionuclides such as Pu isotopes released into the ocean at this time. Investigating the background baseline concentration and atom ratio of Pu isotopes in seawater is important for assessment of the possible contamination in the marine environment. Pu-239 (half-life: 24,100 years), Pu-240 (half-life: 6,560 years) and Pu-241 (half-life: 14.325 years) mainly have been released into the environment as the result of atmospheric nuclear weapons testing. The atom ratio of Pu-240/Pu-239 is a powerful fingerprint to identify the sources of Pu in the ocean. The Pu-239 and Pu-240 inventories and Pu-240/Pu-239 atom ratios in seawater samples collected in the western North Pacific off Sanriku before the accident at Fukushima Dai-ichi Nuclear Power Plant will provide useful background baseline data for understanding the process controlling Pu transport and for distinguishing additional Pu sources. Seawater samples were collected with acoustically triggered quadruple PVC sampling bottles during the KH-98-3 cruise of the R/V Hakuho-Maru. The Pu-240/Pu-239 atom ratios were measured with a double-focusing SF-ICP-MS, which was equipped with a guard electrode to eliminate secondary discharge in the plasma and to enhance overall sensitivity. The Pu-239 and Pu-240 concentrations were 2.07 and 1.67 mBq/m3 in the surface water, respectively, and increased with depth; a subsurface maximum was identified at 750 m depth, and the concentrations decreased with depth, then increased at the bottom layer. The total Pu-239+240 inventory in the entire water column (depth interval 0

  18. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Unlike previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
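The optimum the abstract describes — a leaf water potential balancing a stronger driving force against cavitation-limited conductivity — can be illustrated numerically with a sigmoidal vulnerability curve. All parameter values below are hypothetical, not from the database the authors describe:

```python
def vulnerability(psi, kmax=10.0, psi50=-2.0, a=3.0):
    """Sigmoidal xylem vulnerability curve (hypothetical parameters):
    conductivity falls as the water potential psi (MPa, negative) drops,
    reaching kmax/2 at psi50."""
    return kmax / (1.0 + (psi / psi50) ** a)

def max_transpiration(psi_soil=-0.5, span=8.0, n=10000):
    """Grid-search the leaf water potential maximising the potential flux
    E = k(psi_leaf) * (psi_soil - psi_leaf): a more negative psi_leaf
    strengthens the driving force but costs conductivity to cavitation."""
    best_e, best_psi = 0.0, psi_soil
    for i in range(1, n + 1):
        psi_leaf = psi_soil - i * (span / n)
        e = vulnerability(psi_leaf) * (psi_soil - psi_leaf)
        if e > best_e:
            best_e, best_psi = e, psi_leaf
    return best_e, best_psi

e_max, psi_opt = max_transpiration()   # the negative-feedback optimum
```

With these illustrative parameters the flux peaks at an interior leaf water potential: pulling harder than psi_opt loses more to cavitation than the extra driving force gains.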

  19. Cumulative Effect of Obesogenic Behaviours on Adiposity in Spanish Children and Adolescents

    Directory of Open Access Journals (Sweden)

    Helmut Schröder

    2017-12-01

    Full Text Available Objective: Little is known about the cumulative effect of obesogenic behaviours on childhood obesity risk. We determined the cumulative effect on BMI z-score, waist-to-height ratio (WHtR), overweight and abdominal obesity of four lifestyle behaviours that have been linked to obesity. Methods: In this cross-sectional analysis, data were obtained from the EnKid study, a representative sample of Spanish youth. The study included 1,614 boys and girls aged 5-18 years. Weight, height and waist circumference were measured. Physical activity (PA), screen time, breakfast consumption and meal frequency were self-reported on structured questionnaires, and obesogenic behaviours were defined from these measures. The BMI z-score was computed using age- and sex-specific reference values from the World Health Organization (WHO). Overweight including obesity was defined as a BMI > 1 SD from the mean of the WHO reference population. Abdominal obesity was defined as a WHtR ≥ 0.5. Results: High screen time was the most prominent obesogenic behaviour (49.7%), followed by low physical activity (22.4%), low meal frequency (14.4%), and skipping breakfast (12.5%). Although 33% of participants were free of all 4 obesogenic behaviours, 1, 2, and 3 or 4 behaviours were reported by 44.5%, 19.3%, and 5.0%, respectively. BMI z-score and WHtR were positively associated (p < 0.001) with increasing numbers of concurrent obesogenic behaviours. The odds of presenting with more than 2 obesogenic behaviours were significantly higher in children who were overweight (OR 2.68; 95% CI 1.50-4.80) or had abdominal obesity (OR 2.12; 95% CI 1.28-3.52). High maternal and paternal education was inversely associated (p = 0.004 and p < 0.001, respectively) with increasing presence of obesogenic behaviours. Surrogate markers of adiposity increased with the number of concurrently present obesogenic behaviours. The opposite was true for high maternal and paternal education.
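The classification used in the study can be sketched as follows, using the WHtR ≥ 0.5 cut-off stated in the abstract; the behaviour flags are assumed here to be pre-computed booleans, and the waist/height values in the usage are hypothetical:

```python
def waist_to_height_ratio(waist_cm, height_cm):
    """WHtR: waist circumference divided by height, same units."""
    return waist_cm / height_cm

def has_abdominal_obesity(waist_cm, height_cm):
    # Abdominal obesity defined in the study as WHtR >= 0.5
    return waist_to_height_ratio(waist_cm, height_cm) >= 0.5

def count_obesogenic_behaviours(high_screen_time, low_physical_activity,
                                low_meal_frequency, skips_breakfast):
    """Number of concurrent obesogenic behaviours (0-4)."""
    return sum([high_screen_time, low_physical_activity,
                low_meal_frequency, skips_breakfast])
```

For example, a child with waist 80 cm and height 158 cm has WHtR ≈ 0.51 and would be classified as having abdominal obesity.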

  20. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.

  1. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  2. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  3. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  4. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  5. Preliminary application of maximum likelihood method in HL-2A Thomson scattering system

    International Nuclear Information System (INIS)

    Yao Ke; Huang Yuan; Feng Zhen; Liu Chunhua; Li Enping; Nie Lin

    2010-01-01

    A maximum likelihood method for processing the data of the HL-2A Thomson scattering system is presented. Using mathematical statistics, this method maximizes the likelihood that the theoretical data match the observed data, so that a more accurate result can be obtained. It has been shown to be applicable in comparison with the ratios method, and some of the drawbacks of the ratios method do not exist in this new one. (authors)

  6. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)] exp{-[(1/lambda)+(1/theta)]t} for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
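A minimal numerical sketch of the estimator above, assuming (as the formula for A(infinity) implies) that lambda and theta are the exponential means (mean time to failure and mean time to repair), whose maximum likelihood estimates are the sample means; the data values are illustrative:

```python
import math

def mle_availability(failure_times, repair_times):
    """ML estimates of the exponential means (lambda = MTTF, theta = MTTR)
    and the plug-in estimate of instantaneous availability
    A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)] exp{-[(1/lambda)+(1/theta)] t}."""
    lam = sum(failure_times) / len(failure_times)    # MLE of mean time to failure
    theta = sum(repair_times) / len(repair_times)    # MLE of mean time to repair
    steady = lam / (lam + theta)                     # estimate of A(infinity)

    def A(t):
        return steady + (theta / (lam + theta)) * math.exp(-((1.0/lam) + (1.0/theta)) * t)

    return lam, theta, steady, A
```

At t = 0 the plant is operational with probability 1, and A(t) decays monotonically toward the steady-state availability lambda/(lambda+theta).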

  7. Cumulative total effective whole-body radiation dose in critically ill patients.

    Science.gov (United States)

    Rohner, Deborah J; Bennett, Suzanne; Samaratunga, Chandrasiri; Jewell, Elizabeth S; Smith, Jeffrey P; Gaskill-Shipley, Mary; Lisco, Steven J

    2013-11-01

    Uncertainty exists about a safe dose limit to minimize radiation-induced cancer. Maximum occupational exposure is 20 mSv/y averaged over 5 years with no more than 50 mSv in any single year. Radiation exposure to the general population is less, but the average dose in the United States has doubled in the past 30 years, largely from medical radiation exposure. We hypothesized that patients in a mixed-use surgical ICU (SICU) approach or exceed this limit and that trauma patients were more likely to exceed 50 mSv because of frequent diagnostic imaging. Patients admitted into 15 predesignated SICU beds in a level I trauma center during a 30-day consecutive period were prospectively observed. Effective dose was determined using Huda's method for all radiography, CT imaging, and fluoroscopic examinations. Univariate and multivariable linear regressions were used to analyze the relationships between observed values and outcomes. Five of 74 patients (6.8%) exceeded exposures of 50 mSv. Univariate analysis showed trauma designation, length of stay, number of CT scans, fluoroscopy minutes, and number of general radiographs were all associated with increased doses, leading to exceeding occupational exposure limits. In a multivariable analysis, only the number of CT scans and fluoroscopy minutes remained significantly associated with increased whole-body radiation dose. Radiation levels frequently exceeded occupational exposure standards. CT imaging contributed the most exposure. Health-care providers must practice efficient stewardship of radiologic imaging in all critically ill and injured patients. Diagnostic benefit must always be weighed against the risk of cumulative radiation dose.
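The bookkeeping behind such a study can be sketched as a running sum of per-examination effective doses checked against the 50 mSv single-year occupational limit; the exam names and dose values below are illustrative magnitudes, not data from the study or Huda's method:

```python
def cumulative_dose_mSv(exams):
    """Sum per-examination effective doses (mSv) for one ICU stay.
    `exams` is a list of (description, effective_dose_mSv) pairs."""
    return sum(dose for _, dose in exams)

def exceeds_annual_occupational_limit(exams, limit_mSv=50.0):
    """Flag a cumulative dose above the 50 mSv single-year occupational cap."""
    return cumulative_dose_mSv(exams) > limit_mSv

# Hypothetical stay: many radiographs, several CT scans, some fluoroscopy.
stay = ([("chest radiograph", 0.1)] * 20
        + [("abdominal CT", 8.0)] * 4
        + [("fluoroscopy, 10 min", 12.0)] * 2)
```

Here the CT scans and fluoroscopy dominate the total, mirroring the study's finding that those modalities drive cumulative dose.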

  8. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen by solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
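The maximum-current-searching idea can be sketched as a perturb-and-observe loop on the converter duty factor; this is a simplified illustration, not the paper's PI/PWM implementation, and `read_current`/`set_duty` are hypothetical hardware-interface callbacks:

```python
def mppt_max_current(read_current, set_duty, d0=0.5, step=0.01, iters=100):
    """Climb toward the duty factor that maximizes SPE-side output
    current, which (per the paper's analysis) coincides with the PV
    maximum power point. Reverses the perturbation when current drops."""
    d = d0
    set_duty(d)
    prev = read_current()
    direction = 1
    for _ in range(iters):
        d = min(max(d + direction * step, 0.0), 1.0)  # clamp duty to [0, 1]
        set_duty(d)
        cur = read_current()
        if cur < prev:               # overshot the peak: reverse perturbation
            direction = -direction
        prev = cur
    return d
```

Against a simulated converter whose current peaks at some duty factor, the loop settles into a small oscillation around that peak, the usual behaviour of perturb-and-observe trackers.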

  9. Are Pressure Time Integral and Cumulative Plantar Stress Related to First Metatarsophalangeal Joint Pain? Results From a Community-Based Study.

    Science.gov (United States)

    Rao, Smita; Douglas Gross, K; Niu, Jingbo; Nevitt, Michael C; Lewis, Cora E; Torner, James C; Hietpas, Jean; Felson, David; Hillstrom, Howard J

    2016-09-01

    To examine the relationship between plantar stress over a step, cumulative plantar stress over a day, and first metatarsophalangeal (MTP) joint pain among older adults. Plantar stress and first MTP pain were assessed within the Multicenter Osteoarthritis Study. All included participants were asked if they had pain, aching, or stiffness at the first MTP joint on most days for the past 30 days. Pressure time integral (PTI) was quantified as participants walked on a pedobarograph, and mean steps per day were obtained using an accelerometer. Cumulative plantar stress was calculated as the product of regional PTI and mean steps per day. Quintiles of hallucal and second metatarsal PTI and cumulative plantar stress were generated. The relationship between predictors and the odds ratio of first MTP pain was assessed using a logistic regression model. Feet in the quintile with the lowest hallux PTI had 2.14 times increased odds of first MTP pain (95% confidence interval [95% CI] 1.42-3.25), and feet in the quintile with the lowest second metatarsal PTI likewise had increased odds of first MTP pain (95% CI 1.01-2.23, P = 0.042). Cumulative plantar stress was unassociated with first MTP pain. Lower PTI was modestly associated with increased prevalence of frequent first MTP pain at both the hallux and second metatarsal. Lower plantar loading may indicate the presence of an antalgic gait strategy and may reflect an attempt at pain avoidance. The lack of association with cumulative plantar stress may suggest that patients do not limit their walking as a pain-avoidance mechanism. © 2016, American College of Rheumatology.

  10. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second order integral operation of the original sliding mode control input signal. The benefits of the second order integral in this model include control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator output power. The simulation results, obtained using MATLAB/m-file software, have shown the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
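The first step of the strategy, holding the tip speed ratio at its optimum so the power coefficient is maximized, can be sketched with the standard wind-turbine relations; the parameter values used in the example are hypothetical:

```python
import math

def optimal_rotor_speed(wind_speed, tsr_opt, rotor_radius):
    """Rotor speed (rad/s) that holds the tip speed ratio
    lambda = omega * R / v at its optimal value."""
    return tsr_opt * wind_speed / rotor_radius

def max_mechanical_power(wind_speed, cp_max, rotor_radius, air_density=1.225):
    """Maximum extractable power P = 0.5 * rho * A * Cp_max * v^3,
    attained when the turbine operates at the optimal tip speed ratio."""
    area = math.pi * rotor_radius ** 2
    return 0.5 * air_density * area * cp_max * wind_speed ** 3
```

The cubic dependence on wind speed is why the controller must retune the rotor speed whenever the wind changes: doubling the wind speed multiplies the available power by eight.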

  11. Intermediate Results Of The Program On Realization Of High-Power Soft X-ray Radiation Source Powered From Magneto-Cumulative Generators

    International Nuclear Information System (INIS)

    Selemir, V.D.; Demidov, V.A.; Ermolovich, V.F.; Spirov, G.M.; Repin, P.B.; Pikulin, I.V.; Volkov, A.A.; Orlov, A.P.; Boriskin, A.S.; Tatsenko, O.M.; Markevtsev, I.M.; Moiseenko, A.N.; Kazakov, S.A.; Selyavsky, V.T.; Shapovalov, E.V.; Giterman, B.P.; Vlasov, Yu.V.; Dydykin, P.S.; Ryaslov, E.A.; Kotelnikov, D.V.

    2006-01-01

    In the paper we discuss experiments on powering wire liner systems from helical and disk magneto-cumulative generators with currents from 2-3 MA up to 20 MA at current rise times from 0.3 μs to 1 μs, respectively. At current levels up to 4 MA the maximum yield of soft x-ray radiation was more than 100 kJ at a plasma pinch temperature of 55 eV. At currents up to 20 MA the expected yield of soft x-ray radiation exceeds 1 MJ

  12. Assessing environmental impacts on stream water quality: the use of cumulative flux and cumulative flux difference approaches to deforestation of the Hafren Forest, mid-Wales

    Directory of Open Access Journals (Sweden)

    C. Neal

    2002-01-01

    Full Text Available A method for examining the impacts of disturbance on stream water quality based on paired catchment "control" and "response" water quality time series is described in relation to diagrams of cumulative flux and cumulative flux difference. The paper describes the equations used and illustrates the patterns expected for idealised flux changes, followed by an application to stream water quality data for a spruce forested catchment, the Hore, subjected to clear fell. The water quality determinands examined are sodium, chloride, nitrate, calcium and acid neutralisation capacity. The anticipated effects of felling are shown in relation to reduction in mist capture and nitrate release with felling as well as to the influence of weathering and cation exchange mechanisms, but in a much clearer way than observed previously using other approaches. Keywords: Plynlimon, stream, Hore, acid neutralisation capacity, calcium, chloride, nitrate, sodium, cumulative flux, flux
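The cumulative flux and cumulative flux difference diagrams can be sketched numerically as follows; a simplifying assumption here (not from the paper) is that the control and response series share a common flow record:

```python
def cumulative_flux(concentrations, flows, dt=1.0):
    """Running sum of instantaneous flux (concentration x flow) over
    equally spaced time steps of length dt."""
    total, out = 0.0, []
    for c, q in zip(concentrations, flows):
        total += c * q * dt
        out.append(total)
    return out

def cumulative_flux_difference(control_conc, response_conc, flows, dt=1.0):
    """Paired-catchment diagnostic: divergence of the response catchment's
    cumulative flux from the control's indicates a disturbance effect
    (e.g., nitrate release after felling)."""
    cf_control = cumulative_flux(control_conc, flows, dt)
    cf_response = cumulative_flux(response_conc, flows, dt)
    return [r - c for c, r in zip(cf_control, cf_response)]
```

A flat difference curve indicates the two catchments behave alike; a rising curve after the felling date is the signature the paper's diagrams are designed to reveal.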

  13. Elaboration of a concept for the cumulative environmental exposure assessment of biocides

    Energy Technology Data Exchange (ETDEWEB)

    Gross, Rita; Bunke, Dirk; Moch, Katja [Oeko-Institut e.V. - Institut fuer Angewandte Oekologie e.V., Freiburg im Breisgau (Germany); Gartiser, Stefan [Hydrotox GmbH, Freiburg im Breisgau (Germany)

    2011-12-15

    Article 10(1) of the EU Biocidal Products Directive 98/8/EC (BPD) requires that for the inclusion of an active substance in Annex I, Annex IA or IB, cumulation effects from the use of biocidal products containing the same active substance shall be taken into account, where relevant. The study proves the feasibility of a technical realisation of Article 10(1) of the BPD and elaborates a first concept for the cumulative environmental exposure assessment of biocides. Existing requirements concerning cumulative assessments in other regulatory frameworks have been evaluated and their applicability for biocides has been examined. Technical terms and definitions used in this context were documented with the aim to harmonise terminology with other frameworks and to set up a precise definition within the BPD. Furthermore, application conditions of biocidal products have been analysed to find out for which of them cumulative exposure assessments may be relevant. Different parameters were identified which might serve as indicators for the relevance of cumulative exposure assessments. These indicators were then integrated in a flow chart by means of which the relevance of cumulative exposure assessments can be checked. Finally, proposals for the technical performance of cumulative exposure assessments within the Review Programme have been elaborated with the aim to bring the results of the project into the upcoming development and harmonization processes on EU level. (orig.)

  14. CUMULATE ROCKS ASSOCIATED WITH CARBONATE ASSIMILATION, HORTAVÆR COMPLEX, NORTH-CENTRAL NORWAY

    Science.gov (United States)

    Barnes, C. G.; Prestvik, T.; Li, Y.

    2009-12-01

    The Hortavær igneous complex intruded high-grade metamorphic rocks of the Caledonian Helgeland Nappe Complex at ca. 466 Ma. The complex is an unusual mafic-silicic layered intrusion (MASLI) because the principal felsic rock type is syenite and because the syenite formed in situ rather than by deep-seated partial melting of crustal rocks. Magma differentiation in the complex was by assimilation, primarily of calc-silicate rocks and melts with contributions from marble and semi-pelites, plus fractional crystallization. The effect of assimilation of calcite-rich rocks was to enhance stability of fassaitic clinopyroxene at the expense of olivine, which resulted in alkali-rich residual melts and lowering of silica activity. This combination of MASLI-style emplacement and carbonate assimilation produced three types of cumulate rocks: (1) Syenitic cumulates formed by liquid-crystal separation. As sheets of mafic magma were loaded on crystal-rich syenitic magma, residual liquid was expelled, penetrating the overlying mafic sheets in flame structures, and leaving a cumulate syenite. (2) Reaction cumulates. Carbonate assimilation, illustrated by a simple assimilation reaction: olivine + calcite + melt = clinopyroxene + CO2 resulted in cpx-rich cumulates such as clinopyroxenite, gabbro, and mela-monzodiorite, many of which contain igneous calcite. (3) Magmatic skarns. Calc-silicate host rocks underwent partial melting during assimilation, yielding a Ca-rich melt as the principal assimilated material and permitting extensive reaction with surrounding magma to form Kspar + cpx + garnet-rich ‘cumulate’ rocks. Cumulate types (2) and (3) do not reflect traditional views of cumulate rocks but instead result from a series of melt-present discontinuous (peritectic) reactions and partial melting of calc-silicate xenoliths. In the Hortavær complex, such cumulates are evident because of the distinctive peritectic cumulate assemblages. It is unclear whether assimilation of

  15. Calculation of the maximum expected dose for radiophysics technicians at a cobalt machine

    International Nuclear Information System (INIS)

    Avila Avila, Rafael; Perez Velasquez, Reytel; Gonzalez Lapez, Nadia

    2009-01-01

    The daily operations carried out by the medical radiophysics technicians of the Radiation Oncology Department of the V. I. Lenin General Teaching Hospital in the city of Holguin during a working week (Monday to Friday) are taken as the basis for calculating the maximum expected dose (MDE). Starting from the exponential decay law governing the source activity, corrections to the dose accumulated over the weekly period are proposed, leading to a formula that accounts for the dose accumulated during working days and for the absence of dose accumulation on rest days (Saturday and Sunday). The correction factor is estimated from a convergent power series expansion truncated at the n-th term, corresponding to the week for which the dose is to be calculated. The ambient dose equivalent rate at a given time is adopted as the initial condition, which allows the MDE to be estimated at moments after or before it. For the calculations, the use of an Excel spreadsheet is proposed, which allows simple and accessible processing of the formula obtained. (author)
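A minimal sketch of weekday-only dose accumulation from a decaying source, assuming a cobalt-60 source (half-life about 1925 days) and a hypothetical day-zero dose rate; this illustrates the idea of the correction, not the paper's exact formula:

```python
import math

CO60_HALF_LIFE_DAYS = 1925.0  # cobalt-60 half-life, roughly 5.27 years

def weekly_dose(dose_rate_day0, workdays=5, half_life=CO60_HALF_LIFE_DAYS):
    """Accumulate the daily dose over Monday-Friday from an exponentially
    decaying source; rest days (Saturday, Sunday) contribute nothing.
    `dose_rate_day0` is the dose received on day 0 at the initial activity."""
    decay = math.log(2.0) / half_life  # decay constant (1/day)
    return sum(dose_rate_day0 * math.exp(-decay * d) for d in range(workdays))
```

Because the Co-60 half-life is long compared to a week, the weekly total is only slightly below five times the day-zero dose, which is exactly the kind of small correction the truncated series captures.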

  16. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J

    2007-01-01

    : the selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D...

  17. Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast

    NARCIS (Netherlands)

    Goseling, J.; Matsumoto, R.; Uyematsu, T.; Weber, J.H.

    2010-01-01

    We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio between the minimum energy required by routing and that required by network coding.

  18. Lower bounds on the maximum energy benefit of network coding for wireless multiple unicast

    NARCIS (Netherlands)

    Goseling, Jasper; Matsumoto, Ryutaroh; Uyematsu, Tomohiko; Weber, Jos H.

    2010-01-01

    We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio between the minimum energy required by routing and that required by network coding.

  19. On the design of experimental separation processes for maximum accuracy in the estimation of their parameters

    International Nuclear Information System (INIS)

    Volkman, Y.

    1980-07-01

    The optimal design of experimental separation processes for maximum accuracy in the estimation of process parameters is discussed. The sensitivity factor correlates the inaccuracy of the analytical methods with the inaccuracy of the estimation of the enrichment ratio. It is minimized according to the design parameters of the experiment and the characteristics of the analytical method

  20. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR).

  1. The Implementation of Cumulative Learning Theory in Calculating Triangular Prism and Tube Volumes

    Science.gov (United States)

    Muklis, M.; Abidin, C.; Pamungkas, M. D.; Masriyah

    2018-01-01

    This study aims at describing the application of cumulative learning theory in calculating the volume of a triangular prism and a tube, as well as revealing the students' responses toward the learning. The research method used was descriptive qualitative, with elementary school students as the subjects of the research. Data were obtained through observation, field notes, questionnaires, tests, and interviews. The application of cumulative learning theory elicited positive student responses to the lessons, and students' learning outcomes were predominantly above average. This showed that cumulative learning could be used as a reference for classroom implementation so as to improve student achievement.
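The two volume formulas taught in the lesson can be written directly; a minimal sketch:

```python
import math

def triangular_prism_volume(base, height, length):
    """V = (1/2 * base * height) * length: triangular cross-section
    area times the prism's length."""
    return 0.5 * base * height * length

def tube_volume(radius, height):
    """Tube (cylinder): V = pi * r^2 * h."""
    return math.pi * radius ** 2 * height
```

For example, a prism with a 6-by-4 triangular cross-section and length 10 has volume 120, while a tube of radius 7 and height 10 has volume 490π.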

  2. Structure functions and particle production in the cumulative region: two different exponentials

    International Nuclear Information System (INIS)

    Braun, M.; Vechernin, V.

    1997-01-01

    In the framework of the recently proposed QCD-based parton model for cumulative phenomena in interactions with nuclei, two mechanisms of particle production, the direct and the spectator mechanism, are analyzed. It is shown that due to final-state interactions the leading terms of the direct mechanism contribution cancel, and the spectator mechanism is the dominant one. It leads to a smaller slope of the cumulative particle production rates compared to the slope of the nuclear structure function in the cumulative region x ≥ 1, in agreement with recent experimental data

  3. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound of B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  4. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  5. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms for unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms support only two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using the Alternative Optimization method. This algorithm is also extended for multi-class classification and its performance is evaluated. The results show that the proposed algorithm gives acceptable results for hyperspectral data clustering.

  6. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  7. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.

  8. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. Experiments have shown that this method provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
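The underlying cross-correlation step can be sketched as follows. This baseline uses a plain frequency-domain cross-correlation without the paper's Maximum Likelihood window; that window would be applied as a weighting on the cross-spectrum before the inverse transform. The signals and delay below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delay = 4096, 37
source = rng.normal(size=n)                       # broadband leak-like noise
a = source + 0.1 * rng.normal(size=n)             # sensor A
b = np.roll(source, delay) + 0.1 * rng.normal(size=n)  # sensor B, delayed

# Circular cross-correlation via FFT; an ML-style frequency-domain window
# would multiply `cross` here before the inverse transform.
cross = np.fft.fft(a).conj() * np.fft.fft(b)
corr = np.fft.ifft(cross).real
estimated_delay = int(np.argmax(corr))
print(estimated_delay)  # -> 37
```

Given the delay in samples and the elastic wave speed, the leak position follows from the time-arrival-difference geometry between the two sensors.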

  9. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
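For context, ancestral state inference under MP on a bifurcating tree is classically done with Fitch's algorithm. The sketch below is illustrative only (the paper analyzes when MP returns an unambiguous root state, not this implementation); the tree encoding is invented for the example.

```python
# Fitch's parsimony algorithm, bottom-up pass: a leaf is a state string,
# an internal node is a (left, right) tuple.
def fitch(tree):
    """Return the set of most-parsimonious candidate states at the root."""
    if isinstance(tree, str):
        return {tree}
    left, right = fitch(tree[0]), fitch(tree[1])
    # Intersection if non-empty (no extra change needed), else union
    # (one additional state change on this edge).
    return left & right if left & right else left | right

# Four taxa, three of which have state 'a' at this site:
tree = (('a', 'a'), ('a', 'b'))
print(fitch(tree))  # -> {'a'}
```

Here three of four leaves carry state 'a', and MP unambiguously returns 'a' at the root, which is the kind of threshold behavior the Charleston-Steel conjecture quantifies.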

  10. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  11. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  12. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  13. Method for calculating individual equivalent doses and cumulative dose of population in the vicinity of nuclear power plant site

    International Nuclear Information System (INIS)

    Namestek, L.; Khorvat, D.; Shvets, J.; Kunz, Eh.

    1976-01-01

    A method for calculating the doses from external and internal irradiation of persons in the vicinity of a nuclear power plant under normal operating conditions and in accident situations is described. The main difference between this method and those used previously is the use of a new anthropomorphic representation of the human body together with all its organs. The anthropomorphic model of the human body and its organs is defined as a set of simple solids, with coordinates, sizes, masses, densities and compositions corresponding to the real organs. The use of the Monte Carlo method is the second difference. The results of calculations with the suggested model can be used to determine: a critical group of inhabitants under conditions of normal plant operation; the groups of inhabitants most exposed in the case of a possible accident; the critical sector with the maximum collective dose in the case of an accident; the critical radioisotope making the greatest contribution to the individual equivalent dose; the critical exposure pathways contributing most to individual equivalent doses; and cumulative collective doses for the whole region, or for a chosen part of it, permitting estimation of the population dose. Further development of the method involves elaborating the separate units of the calculation program, critical review and selection of input data of a physical, physiological and ecological character, and improvement of the calculation program for specific events [ru
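One of the listed outputs, the critical sector with the maximum collective dose, can be illustrated with a toy calculation. All numbers below are invented for the example and have nothing to do with the paper's model; collective dose per sector is simply population times mean individual equivalent dose.

```python
import numpy as np

# Hypothetical per-sector data around a site (invented for illustration).
population = np.array([1200, 450, 3000, 800])    # persons per sector
mean_dose = np.array([2e-5, 8e-5, 1e-5, 5e-5])   # mean individual dose, Sv

collective = population * mean_dose              # person-Sv per sector
critical_sector = int(np.argmax(collective))     # sector with max collective dose
print(critical_sector, collective[critical_sector])
```

Note that the critical sector need not be the most populous one: a small population with a high mean dose can dominate the collective dose.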

  14. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available. Abstract. Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
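In maximum-likelihood frameworks of this kind, the log-likelihood gradient is a difference between expectations under the data and under the model, and the model-side expectation is what MCMC estimates. The minimal sketch below (not the paper's inverse potential) estimates a model expectation by Metropolis sampling on a two-spin toy system where the exact answer, tanh(J), is known.

```python
import math
import random

# Energy E(s) = -J * s1 * s2 with spins in {-1, +1}; the exact model
# expectation <s1*s2> at unit temperature is tanh(J).
random.seed(2)
J = 1.0
s = [1, 1]
total, n_sweeps = 0.0, 20000
for _ in range(n_sweeps):
    for i in (0, 1):
        dE = 2 * J * s[0] * s[1]            # energy change from flipping spin i
        if dE <= 0 or random.random() < math.exp(-dE):
            s[i] = -s[i]                     # Metropolis accept
    total += s[0] * s[1]                     # accumulate the observable

estimate = total / n_sweeps
print(estimate, math.tanh(J))                # estimate ~ 0.76
```

In a real application the same sampler would run over sequences under the current potential, and the estimated expectations would feed the gradient-descent update of the potential's parameters.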

  15. In vitro fermentation characteristics of diets with different forage/concentrate ratios: comparison of rumen and faecal inocula.

    Science.gov (United States)

    Zicarelli, Fabio; Calabrò, Serena; Cutrignelli, Monica I; Infascelli, Federico; Tudisco, Raffaella; Bovera, Fulvia; Piccolo, Vincenzo

    2011-05-01

    The aim of this trial was to evaluate the replacement of rumen fluid with faeces as inoculum in studying the in vitro fermentation characteristics of diets for ruminants using the in vitro gas production technique. Six iso-protein diets with different forage/concentrate ratios were incubated with rumen fluid (RI) or faeces (FI) collected from sheep. Most of the fermentation parameters were influenced by diet and inoculum (P < 0.01). With both inocula, organic matter degradability (dOM), cumulative gas production (OMCV) and maximum fermentation rate (R(max)) increased as the amount of concentrate in the diet increased. R(max) was lower with FI vs RI (P < 0.01); dOM was higher with FI vs RI and the diet × inoculum interaction was significant. As expected, with both inocula, R(max) increased as the neutral detergent fibre content of the diet decreased. Significant correlations were obtained using both inocula between OMCV/dOM and gas/volatile fatty acid (VFA), while the correlation VFA/dOM was significant only with FI. The microbial biomass yield calculated by stoichiometric analysis for all diets was higher with FI vs RI. With FI the organic matter used for microbial growth showed an overall decreasing trend as the amount of concentrate in the diet increased. The results indicate that both faeces and rumen fluid from sheep have the potential to be used as inoculum for the in vitro gas production technique. Copyright © 2011 Society of Chemical Industry.
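Parameters like cumulative gas production and the maximum fermentation rate R(max) are typically derived from a fitted gas-production curve. The sketch below uses a generic sigmoidal model, G(t) = A / (1 + (B/t)^C), with invented parameter values; it is not the authors' fitted model, only an illustration of how R(max) falls out of the curve as the maximum of dG/dt.

```python
import numpy as np

# Generic sigmoidal gas-production model (invented parameters):
A, B, C = 250.0, 12.0, 2.0           # asymptote (ml/g OM), half-time (h), shape
t = np.linspace(0.1, 96, 2000)       # incubation time, h
G = A / (1 + (B / t) ** C)           # cumulative gas at time t

rate = np.gradient(G, t)             # fermentation rate dG/dt, ml/(g OM * h)
R_max = rate.max()                   # maximum fermentation rate
t_at_Rmax = t[rate.argmax()]         # time at which R_max occurs
print(round(G[-1], 1), round(R_max, 2), round(t_at_Rmax, 1))
```

For this model with C = 2, the rate peaks at t = B/√3, before the half-time B, which matches the typical early-incubation rate maximum seen with concentrate-rich diets.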

  16. Excess mortality in treated and untreated hyperthyroidism is related to cumulative periods of low serum TSH

    DEFF Research Database (Denmark)

    Lillevang-Johansen, Mads; Abrahamsen, Bo; Jørgensen, Henrik Løvendahl

    2017-01-01

    Introduction and Aim: Cumulative time-dependent excess mortality in hyperthyroid patients has been suggested. However, the effect of anti-thyroid treatment on mortality, especially in subclinical hyperthyroidism remains unclarified. We investigated the association between hyperthyroidism and mort...

  17. 76 FR 69726 - Pyrethrins/Pyrethroid Cumulative Risk Assessment; Notice of Availability

    Science.gov (United States)

    2011-11-09

    ... exposure to multiple chemicals that have a common mechanism of toxicity when making regulatory decisions... stakeholders including environmental, human health, farm worker, and agricultural advocates; the chemical... Cumulative Risk Assessment; Notice of Availability AGENCY: Environmental Protection Agency (EPA). ACTION...

  18. Evaluating Cumulative Ecosystem Response to Restoration Projects in the Columbia River Estuary, Annual Report 2007

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Gary E.; Diefenderfer, Heida L.; Borde, Amy B.; Dawley, Earl M.; Ebberts, Blaine D.; Putman, Douglas A.; Roegner, G. C.; Russell, Micah; Skalski, John R.; Thom, Ronald M.; Vavrinec, John

    2008-10-01

    The goal of this multi-year study (2004-2010) is to develop a methodology to evaluate the cumulative effects of multiple habitat restoration projects intended to benefit ecosystems supporting juvenile salmonids in the lower Columbia River and estuary. Literature review in 2004 revealed no existing methods for such an evaluation and suggested that cumulative effects could be additive or synergistic. Field research in 2005, 2006, and 2007 involved intensive, comparative studies paired by habitat type (tidal swamp vs. marsh), trajectory (restoration vs. reference site), and restoration action (tide gate vs. culvert vs. dike breach). The field work established two kinds of monitoring indicators for eventual cumulative effects analysis: core and higher-order indicators. Management implications of limitations and applications of site-specific effectiveness monitoring and cumulative effects analysis were identified.

  19. Cumulative cisplatin dose in concurrent chemoradiotherapy for head and neck cancer : A systematic review

    NARCIS (Netherlands)

    Strojan, Primoz; Vermorken, Jan B.; Beitler, Jonathan J.; Saba, Nabil F.; Haigentz, Missak; Bossi, Paolo; Worden, Francis P.; Langendijk, Johannes A.; Eisbruch, Avraham; Mendenhall, William M.; Lee, Anne W. M.; Harrison, Louis B.; Bradford, Carol R.; Smee, Robert; Silver, Carl E.; Rinaldo, Alessandra; Ferlito, Alfio

    Background. The optimal cumulative dose and timing of cisplatin administration in various concurrent chemoradiotherapy protocols for nonmetastatic head and neck squamous cell carcinoma (HNSCC) has not been determined. Methods. The absolute survival benefit at 5 years of concurrent chemoradiotherapy

  20. Some Additional Remarks on the Cumulant Expansion for Linear Stochastic Differential Equations

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.

    1984-01-01

    We summarize our previous results on cumulant expansions for linear stochastic differential equations with correlated multiplicative and additive noise. The application of the general formulas to equations with statistically independent multiplicative and additive noise is reconsidered in detail,
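For context, a second-order cumulant expansion of this type has the schematic form below, written for dx/dt = [A_0 + α A_1(t)] x + f(t) in generic van Kampen-style notation (not necessarily the paper's conventions):

```latex
\frac{d\langle x\rangle}{dt}
= \Big[A_0 + \alpha\,\langle A_1(t)\rangle
   + \alpha^2 \int_0^{\infty} d\tau\,
     \big\langle\big\langle A_1(t)\, e^{A_0\tau} A_1(t-\tau)\big\rangle\big\rangle\, e^{-A_0\tau}\Big]\langle x\rangle
 + \langle f(t)\rangle
 + \alpha \int_0^{\infty} d\tau\,
   \big\langle\big\langle A_1(t)\, e^{A_0\tau} f(t-\tau)\big\rangle\big\rangle .
```

When the multiplicative noise A_1(t) and the additive noise f(t) are statistically independent, the final cross-cumulant term vanishes, which is precisely the special case the abstract says is reconsidered in detail.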