Maximum embryo absorbed dose from intravenous urography: interhospital variations
Damilakis, J.; Perisinakis, K. [University of Crete (Greece). Dept. of Medical Physics]; Koukourakis, M. [University of Crete (Greece). Dept. of Radiology]; Gourtsoyiannis, N. [University Hospital of Iraklion, Crete (Greece). Dept. of Radiotherapy]
1997-12-01
The purpose of this study was to determine the maximum embryo dose during intravenous urography (IVU) examinations when a pregnant woman is inadvertently irradiated, and to investigate the variation in doses across institutions. Doses at average embryo depth from IVU examinations were measured in four institutions using a Rando phantom and thermoluminescent crystals. In order to estimate the maximum range of embryo doses, radiologists were asked to carry out the examinations with the same technique as in female patients with acute ureteral obstruction. The doses estimated at embryo depth for the institutions participating in this study ranged from 5.77 to 35.2 mGy. The considerable interhospital dose variation can be explained by the different equipment and techniques used. A simple method of estimating embryo dose from pelvic radiographs, reported previously, was found to be applicable to IVU examinations as well. The absorbed dose at 6 cm, the average embryo depth, was found to be significantly less than 50 mGy. (Author).
Maximum likelihood estimation for cytogenetic dose-response curves
Frome, E.L.; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous-exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
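The acute-exposure special case of this model (expected yield αd + βd², Poisson counts) can be sketched as a maximum likelihood fit. The dataset, parameter names, and starting values below are synthetic illustrations, not the paper's data or procedure:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic acute-exposure data (illustrative): doses in Gy, cells scored,
# and dicentrics observed per dose group.
dose = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
cells = np.array([2000, 1500, 1000, 800, 500])
dicentrics = np.array([50, 120, 280, 480, 520])

def neg_log_lik(theta):
    """Poisson negative log-likelihood for the linear-quadratic yield
    alpha*d + beta*d^2 (the acute-exposure special case of the model).
    Parameters are log-transformed to keep the expected yield positive."""
    alpha, beta = np.exp(theta)
    mu = cells * (alpha * dose + beta * dose ** 2)  # expected counts
    # The log(y!) term is dropped since it does not depend on the parameters.
    return float(np.sum(mu - dicentrics * np.log(mu)))

res = minimize(neg_log_lik, x0=np.log([0.01, 0.05]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
```

With counts generated exactly from α = 0.02 and β = 0.06, the fit recovers those values; real dicentric data would add sampling noise and, for split-dose or continuous exposure, the g(t, τ) factor.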
Report on MPACT Deliverable M3FT-16LA040106035 (High Dose Evaluation of Improved PDT Detector Pod)
Menlove, Howard Olsen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzlova, Daniela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-18
This report provides the results of the initial high gamma dose tests for the boron-10 plate detector that was fabricated by PDT, Inc. under contract to LANL. The specifications for the detector were developed using MCNP code simulations and prior experimental tests at LANL. The goal of the development was to provide high neutron detection efficiency together with gamma-ray resistance at the very high gamma dose levels that are characteristic of the electrochemical fuel processing activity.
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of action of the chemicals, such as interference with the endocrine system. Achieving these aims requires criteria that provide a basis to interpret study findings so as to separate these specific toxicities and modes of action not only from acute lethality per se but also from the severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit specific adverse effects, but that such doses also present the potential for non-specific 'systemic toxicity'. Mammalian toxicologists have therefore developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of an MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g., some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms.
Norris, David C
2017-01-01
Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm³ using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTD_i. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to
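A minimal sketch of the titration idea, assuming a toy log-linear dose-nadir patient model (`BASELINE`, `observed_nadir`, and the sensitivity values are invented for illustration; the paper uses a published docetaxel PK/PD model):

```python
import math

BASELINE = 5000.0     # cells/mm^3 before treatment (assumed)
TARGET_NADIR = 500.0  # cells/mm^3, the dose-limiting toxicity target

def observed_nadir(dose, sensitivity):
    """Toy patient model (assumption): nadir falls log-linearly with dose."""
    return BASELINE * math.exp(-sensitivity * dose)

def titrate(sensitivity, dose=50.0, cycles=3):
    """Newton-Raphson-style titration on the log scale: each cycle estimates
    the patient's slope from the observed nadir and steps to the dose that
    would hit TARGET_NADIR under the linearized model."""
    for _ in range(cycles):
        nadir = observed_nadir(dose, sensitivity)
        slope = (math.log(nadir) - math.log(BASELINE)) / dose
        dose = (math.log(TARGET_NADIR) - math.log(BASELINE)) / slope
    return dose

# Inter-individual variability: different sensitivities imply very
# different individualized optimal doses (MTD_i).
doses = {s: titrate(s) for s in (0.01, 0.02, 0.04)}
```

Because the toy model is exactly log-linear, the titration converges in one step here; noisy observations or a nonlinear patient model would produce the slower convergence and occasional overshooting described in the abstract.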
Kim, Leonard, E-mail: kimlh@umdnj.edu [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States); Narra, Venkat; Yue, Ning [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States)
2013-07-01
Recent studies have reported potentially clinically meaningful dose differences when heterogeneity correction is used in breast balloon brachytherapy. In this study, we report on the relationship between heterogeneity-corrected and -uncorrected doses for 2 commonly used plan evaluation metrics: maximum point dose to skin surface and maximum point dose to ribs. Maximum point doses to skin surface and ribs were calculated using TG-43 and Varian Acuros for 20 patients treated with breast balloon brachytherapy. The results were plotted against each other and fit with a zero-intercept line. Max skin dose (Acuros) = max skin dose (TG-43) × 0.930 (R² = 0.995). The average magnitude of difference from this relationship was 1.1% (max 2.8%). Max rib dose (Acuros) = max rib dose (TG-43) × 0.955 (R² = 0.9995). The average magnitude of difference from this relationship was 0.7% (max 1.6%). Heterogeneity-corrected maximum point doses to the skin surface and ribs were proportional to TG-43-calculated doses. The average deviation from proportionality was 1%. The proportional relationship suggests that a metric other than maximum point dose may be needed to obtain a clinical advantage from heterogeneity correction. Alternatively, if maximum point dose continues to be used in recommended limits while incorporating heterogeneity correction, institutions without this capability may be able to accurately estimate these doses by use of a scaling factor.
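The zero-intercept fit described here has a closed form. The dose pairs below are synthetic stand-ins for the study's 20 patients:

```python
import numpy as np

# Synthetic (TG-43, Acuros) max skin dose pairs in cGy, generated around a
# slope of 0.93 with small scatter -- illustrative, not the study's data.
tg43 = np.array([300.0, 410.0, 520.0, 640.0, 750.0])
acuros = 0.93 * tg43 + np.array([2.0, -3.0, 1.5, -2.5, 1.0])

# Least squares slope for y = k*x with no intercept: k = sum(x*y)/sum(x*x)
k = float(np.sum(tg43 * acuros) / np.sum(tg43 * tg43))

# R^2 for the no-intercept model (uncentered total sum of squares)
ss_res = float(np.sum((acuros - k * tg43) ** 2))
ss_tot = float(np.sum(acuros ** 2))
r2 = 1.0 - ss_res / ss_tot
```

A near-unity R² with a stable slope is what licenses the abstract's suggestion that a non-Acuros institution could rescale TG-43 point doses by a single factor.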
Crom, Izan Le; Pecher, Arthur; Parmeggiani, Stefano
Deliverable 29 describes a newly proposed methodology for evaluating the power performance of wave (and tidal) energy converters. This methodology has been applied within the Wavetrain2 project to evaluate the production of two wave energy conversion plants: the Pico OWC Plant and the Wave Dragon...
DELIVERANCE IN "ROBINSON CRUSOE"
湯浅, 恭子; ユアサ, キョウコ; Kyoko, Yuasa
2003-01-01
This paper first studies the concept of deliverance in "Robinson Crusoe," written by the 18th century English novelist, Daniel Defoe, in comparison with three Hebrew figures in the Bible and then refers to the author's two writing styles to examine the value of the novel for Japanese readers whose society is not based on a monotheistic god.
Chozas, Julia Fernandez
This deliverable presents a selection of case studies on socio-economic aspects of wave energy, developed by Wavetrain2 partners SPOK and WavEC, that represent key areas of research during the WT2 period. More information and a few other case studies can also be found in the previous deliverables D...... Denmark (the Danish TSO) and has the interest and collaboration of the wave device developers Pelamis, Wave Dragon and Wavestar. The second case study assesses the macro-economic impact of the introduction of a 100 MW farm in Portugal. An Input-Output model based on Leontief theory was developed...... to the case study of a 100 MW farm of Pelamis devices installed on the coast of Portugal. A third case study refers to the application of the WavEC Techno-Economic Model for technology and project feasibility assessment to one particular wave energy technology. Due to confidentiality issues the results......
Chozas, Julia Fernandez
Denmark (the Danish TSO) and has the interest and collaboration of the wave device developers Pelamis, Wave Dragon and Wavestar. The second case study assesses the macro-economic impact of the introduction of a 100 MW farm in Portugal. An Input-Output model based on Leontief theory was developed...... (HMRC) and Hans C. Sorensen (SPOK). It included 3 main parts: a first introduction to microeconomics at project level, a second practical session on the use of the economic model RETScreen and a final session introducing macro- and socio-economic aspects.......This deliverable presents a selection of case studies on socio-economic aspects of wave energy, developed by Wavetrain2 partners SPOK and WavEC, that represent key areas of research during the WT2 period. More information and a few other case studies can also be found in the previous deliverables D...
王雪丽 (Wang Xueli); 陶剑 (Tao Jian); 史宁中 (Shi Ningzhong)
2005-01-01
The primary goal of a phase I clinical trial is to find the maximum tolerable dose of a treatment. In this paper, we propose a new stepwise method based on confidence bounds and information incorporation to determine the maximum tolerable dose among given dose levels. On the one hand, in order to avoid the occurrence of severe or even fatal toxicity and to reduce the number of experimental subjects, the new method starts from the lowest dose level and then proceeds in a stepwise fashion. On the other hand, in order to improve the accuracy of the recommendation, the final recommendation of the maximum tolerable dose is made by incorporating the information from an additional experimental cohort at the same dose level. Furthermore, empirical simulation results show that the new method has real advantages in comparison with the modified continual reassessment method.
The maximum single dose of resistant maltodextrin that does not cause diarrhea in humans.
Kishimoto, Yuka; Kanahori, Sumiko; Sakano, Katsuhisa; Ebihara, Shukuko
2013-01-01
The objective of the present study was to determine the maximum dose of resistant maltodextrin (Fibersol-2, a non-viscous water-soluble dietary fiber) that does not induce transitory diarrhea. Ten healthy adult subjects (5 men and 5 women) ingested Fibersol-2 at increasing dose levels of 0.7, 0.8, 0.9, 1.0, and 1.1 g/kg body weight (bw). Each administration was separated from the previous dose by an interval of 1 wk. The highest dose level that did not cause diarrhea in any subject was regarded as the maximum non-effective level for a single dose. The results showed that no subject of either sex experienced diarrhea at dose levels of 0.7, 0.8, 0.9, or 1.0 g/kg bw. At the highest dose level of 1.1 g/kg bw, no female subject experienced diarrhea, whereas 1 male subject developed diarrhea with muddy stools 2 h after ingestion of the test substance. Consequently, the maximum non-effective level for a single dose of the resistant maltodextrin Fibersol-2 is 1.0 g/kg bw for men and >1.1 g/kg bw for women. Gastrointestinal symptoms were gurgling sounds in 4 subjects (7 events) and flatus in 5 subjects (9 events), although no association with dose level was observed. These symptoms were mild and transient and resolved without treatment.
McFadden, Lisa G; Bartels, Michael J; Rick, David L; Price, Paul S; Fontaine, Donald D; Saghir, Shakil A
2012-07-01
Several statistical approaches were evaluated to identify an optimum method for determining a point of nonlinearity (PONL) in toxicokinetic data. (1) A second-order least squares regression model was fit iteratively, starting with data from all doses; while the second-order term remained significant, the highest remaining dose was omitted and the model refit, the last omitted dose providing an estimate of the PONL. (2) A linear least squares regression model was fit iteratively, starting with data from all doses except the highest. The mean response for the omitted dose was compared to the 95% prediction interval; if the omitted dose falls outside the interval, it is an estimate of the PONL. (3) Slopes of least squares linear regression lines for sections of contiguous doses were compared. Nonlinearity was suggested when slopes of compared sections differed. A total of 33 dose-response datasets were evaluated. For these toxicokinetic data, the best statistical approach was the least squares regression analysis with a second-order term. Changing the α level for the second-order term and weighting the second-order analysis by the inverse of feed consumption were also considered. This technique has been shown to give reproducible identification of nonlinearities in TK datasets.
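A generic reconstruction of the second-order-term approach, assuming an ordinary (unweighted) t-test on the quadratic coefficient; the dataset, the α level, and the stopping-rule details below are illustrative, not the paper's:

```python
import numpy as np
from scipy import stats

def quadratic_term_significant(dose, response, alpha=0.05):
    """OLS fit of response ~ b0 + b1*d + b2*d^2 with a t-test on b2."""
    d = np.asarray(dose, float)
    y = np.asarray(response, float)
    X = np.column_stack([np.ones_like(d), d, d ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    n, p = X.shape
    sigma2 = float(resid @ resid) / (n - p)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stat = coef[2] / np.sqrt(cov[2, 2])
    p_val = 2.0 * stats.t.sf(abs(t_stat), df=n - p)
    return p_val < alpha

def point_of_nonlinearity(dose, response, alpha=0.05):
    """Drop the highest dose while the second-order term is significant;
    the last dose dropped estimates the PONL (None if never significant)."""
    d = np.asarray(dose, float)
    y = np.asarray(response, float)
    ponl = None
    while len(np.unique(d)) > 3 and quadratic_term_significant(d, y, alpha):
        ponl = float(d.max())
        keep = d < ponl
        d, y = d[keep], y[keep]
    return ponl

# Synthetic TK data: 3 replicates per dose, linear up to 16, saturating at 32.
doses = np.repeat([1.0, 2.0, 4.0, 8.0, 16.0, 32.0], 3)
resp = np.concatenate([[d - 0.5, d, d + 0.5] for d in [1, 2, 4, 8, 16]]
                      + [[21.5, 22.0, 22.5]])
```

On this dataset the quadratic term is strongly significant until the saturating top dose is dropped, so the estimated PONL is 32.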
NONE
2000-07-01
The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from the annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.
The disappearance of the Pfotzer-Regener maximum in dose equivalent measurements in the stratosphere
Hands, A. D. P.; Ryden, K. A.; Mertens, C. J.
2016-10-01
The NASA Radiation Dosimetry Experiment (RaD-X) successfully deployed four radiation detectors on a high-altitude balloon for a period of approximately 20 h. One of these detectors was the RaySure in-flight monitor, which is a solid-state instrument designed to measure ionizing dose rates to aircrew and passengers. Data from RaySure on RaD-X show absorbed dose rates rising steadily as a function of altitude up to a peak at approximately 60,000 feet, known as the Pfotzer-Regener maximum. Above this altitude absorbed dose rates level off before showing a small decline as the RaD-X balloon approaches its maximum altitude of around 125,000 feet. The picture for biological dose equivalent, however, is very different. At high altitudes the fraction of dose from highly ionizing particles increases significantly. Dose from these particles causes a disproportionate amount of biological damage compared to dose from more lightly ionizing particles, and this is reflected in the quality factors used to calculate the dose equivalent quantity. By calculating dose equivalent from RaySure data, using coefficients derived from previous calibrations, we show that there is no peak in the dose equivalent rate at the Pfotzer-Regener maximum. Instead, the dose equivalent rate keeps increasing with altitude as the influence of dose from primary cosmic rays becomes increasingly important. This result has implications for high altitude aviation, space tourism and, due to its thinner atmosphere, the surface radiation environment on Mars.
A fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.
Li, Haisen S; Romeijn, H Edwin; Dempsey, James F
2006-09-01
We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by the discrete dose sampling under the 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a near-monoenergetic (width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the proton particles travel in the same direction before entering the water medium; the various scattering prior to entering the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is the superposition of infinitesimal pencil beams, the result for the maximum acceptable grid size obtained with the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth-dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
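The sampling-theorem argument can be illustrated in one dimension with a Gaussian lateral profile. The 2% criterion is the paper's; the σ value, the grid spacings, and the sinc-reconstruction details below are illustrative assumptions:

```python
import numpy as np

def max_sampling_error(sigma, h, span=10.0):
    """Peak error (as a fraction of the maximum dose) when a Gaussian
    lateral profile of standard deviation sigma is sampled on a grid of
    spacing h and reconstructed by Whittaker-Shannon (sinc) interpolation.
    A toy 1D analogue of the paper's Fourier analysis."""
    n = int(span * sigma / h)
    centers = np.arange(-n, n + 1) * h              # sample positions
    samples = np.exp(-centers ** 2 / (2.0 * sigma ** 2))
    x_fine = np.linspace(-3.0 * sigma, 3.0 * sigma, 301)
    recon = np.array([np.sum(samples * np.sinc((x - centers) / h))
                      for x in x_fine])
    truth = np.exp(-x_fine ** 2 / (2.0 * sigma ** 2))
    return float(np.max(np.abs(recon - truth)))     # peak dose is 1 here

# For sigma = 5 mm: a one-sigma grid keeps the sampling error well under
# the 2% criterion, while a four-sigma grid badly aliases the profile.
err_fine = max_sampling_error(5.0, 5.0)
err_coarse = max_sampling_error(5.0, 20.0)
```

The Gaussian's spectrum decays so fast that sampling at roughly one standard deviation already pushes the aliased spectral content, and hence the reconstruction error, well below the 2% limit.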
Menting, Stef P; Dekker, Paul M; Limpens, Jacqueline; Hooft, Lotty; Spuls, Phyllis I
2016-01-01
There is a range of methotrexate dosing regimens for psoriasis. This review summarizes the evidence for test-dose, start-dose, dosing scheme, dose adjustments, maximum dose and use of folic acid. A literature search for randomized controlled trials and guidelines was performed. Twenty-three randomized controlled trials (29 treatment groups) and 10 guidelines were included. Two treatment groups used a test-dose, 5 guidelines recommend it. The methotrexate start-dose in randomized controlled trials varied from 5 to 25 mg/week, most commonly being either 7.5 mg or 15 mg. Guidelines vary from 5 to 15 mg/week. Methotrexate was administered as a single dose or in a Weinstein schedule in 15 and 11 treatment-groups, respectively; both recommended equally in guidelines. A fixed dose (n = 18), predefined dose (n = 3), or dose adjusted on clinical improvement (n = 8) was used, the last also being recommended in guidelines. Ten treatment groups used folic acid; in 2 it was allowed, in 14 not mentioned, and in 3 no folic acid was used. Most guidelines recommend the use of folic acid. Authors' suggestions for methotrexate dosing are given.
Dowdy, John C; Czako, Eugene A; Stepp, Michael E; Schlitt, Steven C; Bender, Gregory R; Khan, Lateef U; Shinneman, Kenneth D; Karos, Manuel G; Shepherd, James G; Sayre, Robert M
2011-09-01
The authors compared calculations of sunlamp maximum exposure times following the current USFDA Guidance Policy on the Maximum Timer Interval and Exposure Schedule with USFDA/CDRH proposals revising these to equivalent erythemal exposures of the ISO/CIE Standard Erythema Dose (SED). In 2003, USFDA/CDRH proposed replacing their unique CDRH/Lytle erythema action spectrum with the ISO/CIE erythema action spectrum and revising the sunlamp maximum exposure timer to 600 J m⁻² ISO/CIE effective dose, presented as being biologically equivalent. Preliminary analysis failed to confirm said equivalence, indicating instead ∼38% increased exposure when applying these proposed revisions. To confirm and refine this finding, a collaboration of tanning bed and UV lamp manufacturers compiled 89 UV spectra representing a broad sampling of U.S. indoor tanning equipment. The USFDA maximum recommended exposure time (Te) per current sunlamp guidance and the CIE erythemal effectiveness per ISO/CIE standard were calculated. The CIE effective dose delivered per Te averaged 456 J(CIE) m⁻² (SD = 0.17), or ∼4.5 SED. The authors found that CDRH's proposed 600 J(CIE) m⁻² recommended maximum sunlamp exposure exceeds the current Te erythemal dose by ∼33%. The current USFDA 0.75 MED initial exposure was ∼0.9 SED, consistent with the 1.0 SED initial dose in existing international sunlamp standards. As no sunlamps analyzed exceeded 5 SED, a revised maximum exposure of 500 J(CIE) m⁻² (∼80% of CDRH's proposal) should be compatible with existing tanning equipment. A tanning acclimatization schedule is proposed beginning at 1 SED thrice-weekly, increasing uniformly stepwise over 4 wk to a 5 SED maximum exposure, in conjunction with a tan maintenance schedule of twice-weekly 5 SED sessions, as biologically equivalent to current USFDA sunlamp policy.
SU-E-T-578: On Definition of Minimum and Maximum Dose for Target Volume
Gong, Y; Yu, J; Xiao, Y [Thomas Jefferson University Hospital, Philadelphia, PA (United States)
2015-06-15
Purpose: This study aims to investigate the impact of different minimum and maximum dose definitions in radiotherapy treatment plan quality evaluation criteria by using tumor control probability (TCP) models. Methods: Dosimetric criteria used in the RTOG 1308 protocol are used in the investigation. RTOG 1308 is a phase III randomized trial comparing overall survival after photon versus proton chemoradiotherapy for inoperable stage II-IIIB NSCLC. The prescription dose for the planning target volume (PTV) is 70 Gy. The maximum dose (Dmax) should not exceed 84 Gy and the minimum dose (Dmin) should not go below 59.5 Gy in order for the plan to be “per protocol” (satisfactory). A mathematical model that simulates the characteristics of the PTV dose volume histogram (DVH) curve with normalized volume is built. The Dmax and Dmin are noted as percentage volumes Dη% and D(100-δ)%, with η and δ ranging from 0 to 3.5. The model includes three straight-line sections and goes through four points: D95% = 70 Gy, Dη% = 84 Gy, D(100-δ)% = 59.5 Gy, and D100% = 0 Gy. For each set of η and δ, the TCP value is calculated using the inhomogeneously irradiated tumor logistic model with D50 = 74.5 Gy and γ50 = 3.52. Results: TCP varies within 0.9% for η and δ values between 0 and 1. With η and δ varying between 0 and 2, the TCP change was up to 2.4%. With η and δ variations from 0 to 3.5, a maximum TCP difference of 8.3% is seen. Conclusion: When the volumes used to define the maximum and minimum dose varied by more than 2%, significant TCP variations were seen. It is recommended that less than 2% volume be used in the definition of Dmax or Dmin for target dosimetric evaluation criteria. This project was supported by NIH grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.
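One common logistic form of the voxel-level TCP model, using the D50 and γ50 values quoted in the abstract; the exact parameterization used in the study is not stated, so this is an assumed sketch:

```python
import numpy as np

D50, GAMMA50 = 74.5, 3.52  # Gy and normalized slope, from the abstract

def voxel_tcp(d):
    """Logit dose-response for uniform irradiation: 0.5 at D50, with
    normalized slope gamma50 (one common parameterization; assumed)."""
    return 1.0 / (1.0 + (D50 / np.asarray(d, float)) ** (4.0 * GAMMA50))

def tcp(doses, volumes):
    """TCP for an inhomogeneously irradiated tumor as the volume-weighted
    product of voxel-level control probabilities."""
    v = np.asarray(volumes, float)
    v = v / v.sum()
    return float(np.prod(voxel_tcp(doses) ** v))

# A PTV receiving 70 Gy uniformly, versus the same PTV with a 5% cold
# region at the protocol minimum of 59.5 Gy: the cold spot lowers TCP.
tcp_uniform = tcp([70.0], [1.0])
tcp_cold = tcp([70.0, 59.5], [0.95, 0.05])
```

Because D50 sits above the 70 Gy prescription, even small cold volumes near Dmin pull TCP down noticeably, which is why the choice of η and δ in the Dmax/Dmin definitions matters.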
Zhang, Liangcai; Yuan, Ying
2016-11-30
Drug combination therapy has become the mainstream approach to cancer treatment. One fundamental feature that makes combination trials different from single-agent trials is the existence of the maximum tolerated dose (MTD) contour, that is, multiple MTDs. As a result, unlike single-agent phase I trials, which aim to find a single MTD, it is often of interest to find the MTD contour for combination trials. We propose a new dose-finding design, the waterfall design, to find the MTD contour for drug combination trials. Taking the divide-and-conquer strategy, the waterfall design divides the task of finding the MTD contour into a sequence of one-dimensional dose-finding processes, known as subtrials. The subtrials are conducted sequentially in a certain order, such that the results of each subtrial will be used to inform the design of subsequent subtrials. Such information borrowing allows the waterfall design to explore the two-dimensional dose space efficiently using a limited sample size and decreases the chance of overdosing and underdosing patients. To accommodate the consideration that doses on the MTD contour may have very different efficacy or synergistic effects because of drug-drug interaction, we further extend our approach to a phase I/II design with the goal of finding the MTD with the highest efficacy. Simulation studies show that the waterfall design is safer and has higher probability of identifying the true MTD contour than some existing designs. The R package "BOIN" to implement the waterfall design is freely available from CRAN. Copyright © 2016 John Wiley & Sons, Ltd.
Neutron spectrum and dose-equivalent in shuttle flights during solar maximum
Keith, J.E.; Badhwar, G.D.; Lindstrom, D.J. (National Aeronautics and Space Administration, Houston, TX (United States). Lyndon B. Johnson Space Center)
1992-01-01
This paper presents unambiguous measurements of the spectrum of neutrons found in spacecraft during spaceflight. The neutron spectrum was measured from thermal energies to about 10 MeV using a completely passive system of metal foils as neutron detectors. These foils were exposed to the neutron flux bare, covered by thermal neutron absorbers (Gd) and inside moderators (Bonner spheres). This set of detectors was flown on three U.S. Space Shuttle flights, STS-28, STS-36 and STS-31, during the solar maximum. We show that the measurements of the radioactivity of these foils lead to a differential neutron energy spectrum in all three flights that can be represented by a power law, J(E) ≈ E^(-0.765) neutrons cm⁻² day⁻¹ MeV⁻¹. We also show that the measurements are even better represented by a linear combination of the terrestrial neutron albedo and a spectrum of neutrons locally produced in aluminium by protons, computed by a previous author. We use both approximations to the neutron spectrum to produce a worst case and a most probable case for the neutron spectra and the resulting dose-equivalents, computed using ICRP-51 neutron fluence-to-dose conversion tables. We compare these to the skin dose-equivalents due to charged particles during the same flights. (author).
[Self-deliverance or infanticide?].
Rittner, Christian; Gehb, Iris; Püschel, Klaus
2008-01-01
Self-deliverance by a woman in labour is nowadays a very rare event. The authors report the case of a 24-year-old primipara and a newborn of 49 cm length and 2484 g body weight with a complex pattern of injuries on the head, neck, shoulder and back who had breathed for at least 15 to 30 minutes after birth and died from massive craniocerebral trauma and lesions in the oral and cervical region. As one of the experts considered it possible that the skull fractures were exclusively due to the self-deliverance, the woman was acquitted of the charge of manslaughter. This hypothesis is critically discussed on the basis of the presumable course of the delivery and the literature.
SU-E-T-597: Influence of Smoothing Parameters on Dynamic IMRT Plan Quality and Deliverability.
Manigandan, D; Sharma, S; Gandhi, A; Subramani, V; Sharma, D; Kumar, P; Julka, P; Rath, G
2012-06-01
To study the impact of different smoothing parameters on IMRT plan quality and deliverability. Methods: Five previously treated patients of carcinoma cervix were chosen. Planning target volume (PTV) and organs at risk (OAR), i.e. bladder and rectum, were contoured. In each case, five different dynamic IMRT plans with a 6 MV photon beam were created in the Eclipse TPS for a Varian 2300C/D linear accelerator. During optimization, dose volume constraints and priorities were kept constant and smoothing parameters were varied as follows: 10/5, 40/30 (TPS default value), 80/60, 100/80 and 200/150 in the x/y direction. Total dose was 5040 cGy in 28 fractions, prescribed at the 95% isodose. Plan quality was analyzed by means of the coverage index (CI = PTV covered by prescription dose/PTV), OAR mean doses and total monitor units (MUs) required to deliver a plan. In each case, deliverability of the treatment plans was verified with an I'matriXX ion-chamber array and compared with the TPS dose-plane using a gamma index of 3% dose difference and 3 mm distance-to-agreement criteria. The CI values were 0.9435±0.032, 0.9418±0.034, 0.9380±0.041, 0.9330±0.047 and 0.8681±0.072 for 10/5, 40/30, 80/60, 100/80 and 200/150 in the x/y direction. The PTV maximum dose decreases with increasing smoothing parameters, and the values were 5724.38±106.08, 5723.30±131.60, 5708.44±116.74, 5697.92±116.82 and 5587.50±189.50 cGy. The bladder mean doses were 4027.46±630.40, 3821.62±420.62, 3819.58±427.08, 3813.42±435.02 and 3814.78±438.0 cGy. Rectum mean doses were 3839.88±466.02, 3835.52±473.18, 3837.52±472.88, 3839.10±471.20 and 3918.94±469.76 cGy. Similarly, total MUs were 1588±205, 1573±214, 1513±274, 1456±335 and 1219±68. The gamma pass rate increases with increasing smoothing parameters, and the values were 99.16±0.21%, 99.07±0.19%, 99.24±0.28%, 99.29±0.29% and 99.75±0.15%. When smoothing parameters decreased below the TPS default value, plan quality increases, but deliverability decreases. If smoothing parameters
Benjamin T. Cooper, MD; Xiaochun Li, PhD; Samuel M. Shin, MD; Aram S. Modrek, BS; Howard C. Hsu, MD; J.K. DeWyngaert, PhD; Gabor Jozsef, PhD; Stella C. Lymberis, MD; Judith D. Goldberg, ScD; Silvia C. Formenti, MD
2016-01-01
Purpose: Maximum dose to the left anterior descending artery (LADmax) is an important physical constraint to reduce the risk of cardiovascular toxicity. We generated a simple algorithm to guide the positioning of the tangent fields to reliably maintain LADmax
Comparison of measured and estimated maximum skin doses during CT fluoroscopy lung biopsies
Zanca, F., E-mail: Federica.Zanca@med.kuleuven.be [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium and Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven (Belgium); Jacobs, A. [Department of Radiology, Leuven University Center of Medical Physics in Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); Crijns, W. [Department of Radiotherapy, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium); De Wever, W. [Imaging and Pathology Department, UZ Leuven, Herestraat 49, Box 7003 3000 Leuven, Belgium and Department of Radiology, UZ Leuven, Herestraat 49, 3000 Leuven (Belgium)
2014-07-15
Purpose: To measure patient-specific maximum skin dose (MSD) associated with CT fluoroscopy (CTF) lung biopsies and to compare measured MSD with the MSD estimated from phantom measurements, as well as with the CTDIvol of patient examinations. Methods: Data from 50 patients with lung lesions who underwent a CT fluoroscopy-guided biopsy were collected. The CT protocol consisted of a low-kilovoltage (80 kV) protocol used in combination with an algorithm for dose reduction to the radiology staff during the interventional procedure, HandCare (HC). MSD was assessed during each intervention using Gafchromic EBT2 films positioned on the patient's skin. Lesion size, position, total fluoroscopy time, and patient effective diameter were registered for each patient. Dose rates were also estimated at the surface of a normal-size anthropomorphic thorax phantom using a 10 cm pencil ionization chamber placed every 30° over a full rotation, with and without HC. Measured MSD was compared with MSD values estimated from the phantom measurements and with the cumulative CTDIvol of the procedure. Results: The median measured MSD was 141 mGy (range 38–410 mGy) while the median cumulative CTDIvol was 72 mGy (range 24–262 mGy). The ratio between the MSD estimated from phantom measurements and the measured MSD was 0.87 (range 0.12–4.1) on average. In 72% of cases the estimated MSD underestimated the measured MSD, while in 28% of the cases it overestimated it. The same trend was observed for the ratio of cumulative CTDIvol and measured MSD. No trend was observed as a function of patient size. Conclusions: On average, estimated MSD from dose rate measurements on phantom as well as from CTDIvol of patient examinations underestimates the measured value of MSD. This can be attributed to deviations of the patient's body habitus from the standard phantom size and to patient positioning in the gantry during the procedure.
[Maximum single dose of colloidal silver negatively affects erythropoiesis in vitro].
Tishevskaya, N V; Zakharov, Y M; Bolotov, A A; Arkhipenko, Yu V; Sazontova, T G
2015-01-01
Erythroblastic islets (EI) of rat bone marrow were cultured for 24 h in the presence of silver nanoparticles (1.07·10^-4 mg/ml; 1.07·10^-3 mg/ml; and 1.07·10^-2 mg/ml). The colloidal silver at 1.07·10^-3 mg/ml concentration inhibited the formation of new EI by disrupting contacts of bone marrow macrophages with CFU-E (erythropoiesis de novo) by 65.3% (p Colloidal silver nanoparticles suppressed the reconstruction of erythropoiesis and inhibited the formation of new EI by disrupting contacts of CFU-E and central macrophages with a mature erythroid "crown" (erythropoiesis de repeto). The colloidal silver concentration of 1.07·10^-3 mg/ml in the culture medium also reduced the number of self-reconstructing EI by 67.5% (p colloidal silver reduced this value by 93.7% (p Silver nanoparticles retarded maturation of erythroid cells at the stage of oxyphilic normoblast denucleation: 1.07·10^-3 mg/ml colloidal silver increased the number of mature EI by 53% (p colloidal silver in concentration equivalent to the maximum single dose is related to the effect of silver nanoparticles rather than the glycerol present in the colloidal suspension.
Final measurement. Deliverable D5.3
Holtzer, A.C.G.; Giessen, A.M. van der; Djurica, M.; Gruber, G.; Krengel, M.; Kokkinos, P.; Varvarigos, M.; Prusa, J.; Schulting, H.W.; Holzmann-Kaiser, U.; Schmoll, C.; Hatzakis, I.; Silva, F.M. da; Reymund, A.; Strebler, R.; Moreno, J.J.R.; Munoz, C.G.; Gheorghe, G.; Nikolopoulos, V.; Mavridis, T.; Bektas, O.; Yuce, E.; Volk, M.; Sterle, J.; Skarmeta, A.
2015-01-01
This deliverable D5.3 presents the GEN6 Final Measurement. It describes the outputs, outcomes and impact of the GEN6 project, based on other project deliverables, inputs from the pilot leaders of the active GEN6 pilots, and the individual consortium partners. The final measurement aims to show the amount
QUEST2: System architecture deliverable set
Braaten, F.D.
1995-02-27
This document contains the system architecture and related documents which were developed during the Preliminary Analysis/System Architecture phase of the Quality, Environmental, Safety Tracking System redesign (QUEST2) project. Each discrete document in this deliverable set applies to an analytic effort supporting the architectural model of QUEST2. The P+ methodology cites a list of P+ documents normally included in a "typical" system architecture. Some of these were deferred to the release development phase of the project. The documents included in this deliverable set represent the system architecture itself. Related to that architecture are some decision support documents which provided needed information for management reviews that occurred during April. Consequently, the deliverables in this set were logically grouped and provided to support customer requirements. The remaining System Architecture Phase deliverables will be provided as a "Supporting Documents" deliverable set for the first release.
Deng, Xingli; Yang, Zhiyong; Liu, Ruen; Yi, Meiying; Lei, Deqiang; Wang, Zhi; Zhao, Hongyang
2013-01-01
The safety of gamma knife radiosurgery should be considered when treating pituitary adenomas. The aim was to determine the maximum tolerated dose of radiation delivered by gamma knife radiosurgery to the optic nerves. An animal model of prolonged balloon compression of the optic chiasm and parasellar region was developed to mimic the optic nerve compression caused by pituitary adenomas. Twenty cats underwent surgery to place a balloon for the compression effect, and 20 cats in a sham operation group underwent the same microsurgery without balloon compression. The effects of gamma knife irradiation at 10-13 Gy on normal (sham operation group) and compressed (optic nerve compression group) optic nerves were investigated by pattern visual evoked potential examination and histopathology. Gamma knife radiosurgery at 10 Gy had almost no effect. At 11 Gy, P100 latency was significantly prolonged and P100 amplitude significantly decreased in compressed optic nerves, but there was little change in normal optic nerves. Doses of 11 Gy and higher induced significant electrophysiological variations and degeneration of the myelin sheath and axons in both normal and compressed optic nerves. Compressed optic nerves are more sensitive to gamma knife radiosurgery than normal optic nerves. The minimum dose of gamma knife radiosurgery that causes radiation injury in normal optic nerves is 12 Gy; in compressed optic nerves it is 11 Gy. Copyright © 2013 S. Karger AG, Basel.
Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie
2012-09-01
Estimation methods for nonlinear mixed-effects modelling have improved considerably over the last decades. Nowadays, several algorithms implemented in different software packages are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid Emax model, with varying sigmoidicity and residual error models. One hundred simulated datasets were generated for each scenario. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, from altered values. Results were examined through the relative root mean squared error (RRMSE) of the estimates. With true initial conditions, a full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving modellers material to identify suitable approaches based on an accuracy-versus-runtime trade-off.
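The sigmoid Emax dose-response model underlying these comparisons can, in its simplest non-mixed form, be fitted by maximum likelihood directly; for i.i.d. Gaussian residuals this reduces to least squares. A sketch on simulated data (doses, true parameters, and starting values are all invented for illustration, and no inter-individual random effects are modelled):

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50, h):
    """Sigmoid Emax: E0 + Emax * d^h / (ED50^h + d^h)."""
    return e0 + emax * dose ** h / (ed50 ** h + dose ** h)

# simulate 100 observations spread over four dose levels
# (a toy stand-in for the rich design)
rng = np.random.default_rng(0)
dose = np.repeat([5.0, 10.0, 50.0, 200.0], 25)
y = emax_model(dose, 5.0, 30.0, 40.0, 2.0) + rng.normal(0.0, 1.0, dose.size)

# least squares == maximum likelihood under i.i.d. Gaussian residual error;
# bounds keep ED50 and the Hill coefficient positive during the search
popt, _ = curve_fit(emax_model, dose, y, p0=[0.0, 20.0, 30.0, 1.0],
                    bounds=([-10.0, 0.0, 0.1, 0.1], [50.0, 100.0, 500.0, 10.0]))
e0_hat, emax_hat, ed50_hat, h_hat = popt
```

With a rich design and moderate noise the four fixed effects are recovered closely; the mixed-effects machinery the paper compares is needed once inter-individual variability enters the model.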
Fondevila, Damián; Arbiser, Silvio; Sansogne, Rosana; Brunetto, Mónica; Dosoretz, Bernardo
2008-05-01
Primary barrier determinations for the shielding of medical radiation therapy facilities are generally made assuming normal beam incidence on the barrier, since this is geometrically the most unfavorable condition for that shielding barrier whenever the occupation line is allowed to run along the barrier. However, when the occupation line (for example, the wall of an adjacent building) runs perpendicular to the barrier (especially a roof barrier), two opposing factors come into play: increasing obliquity angle with respect to the barrier increases the attenuation, while the distance to the calculation point decreases, hence increasing the dose. As a result, there exists an angle (alpha(max)) at which the equivalent dose reaches a maximum, constituting the most unfavorable geometric condition for that shielding barrier. Based on the usual NCRP Report No. 151 model, this article presents a simple formula for obtaining alpha(max), which is a function of the thickness of the barrier (t(E)) and the equilibrium tenth-value layer (TVL(e)) of the shielding material for the nominal energy of the beam. It can be seen that alpha(max) increases for increasing TVL(e) (hence, beam energy) and decreases for increasing t(E), with a range of variation from 13 to 40 deg for concrete barrier thicknesses in the range of 50-300 cm and most commercially available teletherapy machines. This parameter has not been calculated in the existing literature for radiotherapy facility design and has practical applications, such as calculating the required unoccupied roof shielding for the protection of a nearby building located in the plane of primary beam rotation.
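The paper's closed-form expression for alpha(max) is not reproduced in the abstract, but the stated trade-off (slant-path attenuation through t(E)/cos(alpha) versus shrinking distance along an occupation line perpendicular to the barrier) can be explored numerically. The dose model below is a reconstruction from that description, not the article's formula, and the TVL value used later is an assumed one:

```python
import numpy as np

def alpha_max_deg(t_e_cm, tvl_cm, n=100000):
    """Angle from the barrier normal that maximizes transmitted dose along
    an occupation line perpendicular to the barrier.

    Assumed model: H(a) proportional to 10**(-t_e/(tvl*cos a)) * sin(a)**2,
    i.e. oblique transmission through slant thickness t_e/cos(a), and
    inverse-square distance r = D/sin(a) at fixed horizontal offset D
    (D drops out of the maximization)."""
    a = np.linspace(1e-4, np.pi / 2 - 1e-4, n)
    h = 10.0 ** (-t_e_cm / (tvl_cm * np.cos(a))) * np.sin(a) ** 2
    return float(np.degrees(a[np.argmax(h)]))
```

With an assumed TVL(e) of 37 cm for concrete, this toy model reproduces the abstract's qualitative behavior: alpha(max) decreases as t(E) grows, staying within the quoted 13-40 deg band for thicknesses of 50-300 cm.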
QUEST2: Release 1: Project plan deliverable set
Braaten, F.D.
1995-02-10
This Project Management Plan combines the project management deliverables from the P+ methodology which are applicable to Release 1 of the QUEST2 work. This consolidation reflects discussions with WHC QA regarding an appropriate method for ensuring that P+ deliverables fulfill the intent of WHC-CM-3-10 and QR-19.
Soldat, J.K.; Price, K.R.; Rickard, W.H.
1990-10-01
The purpose of this report is to summarize the assumptions, dose factors, consumption rates, and methodology used to evaluate potential radiation doses to persons who may eat contaminated wildlife or contaminated plants collected from the Hanford Site. This report includes a description of the number and variety of wildlife and edible plants on the Hanford Site, methods for estimating the quantities of these items consumed and for converting intake of radionuclides to radiation doses, and example calculations of radiation doses from consumption of plants and wildlife. Edible plants on the publicly accessible margins of the shoreline of the Hanford Site and wildlife that move offsite are potential sources of contaminated food for the general public. Calculations of potential radiation doses from consumption of agricultural plants and farm animal products are made routinely and reported annually for those produced offsite, using information about concentrations of radionuclides, consumption rates, and factors for converting radionuclide intake into dose. Dose calculations for onsite plants and wildlife are made intermittently when appropriate samples become available for analysis or when special studies are conducted. Consumption rates are inferred from the normal intake rates of similar food types raised offsite and from the edible weight of the onsite product that is actually available for harvest. 19 refs., 4 tabs.
EMPOWER. Deliverable no 7.1 Dissemination Plan
Meeuwissen, M.; Fioreze, T.; Thomas, T.; Pickerden, C.; Hof, T.
2015-01-01
This deliverable describes the general plan regarding all dissemination activities of the consortium partners and work packages during the project. This links directly to D7.4 where the realization of different dissemination activities will be reported.
Barbosa, Joana; Faria, Juliana; Leal, Sandra; Afonso, Luís Pedro; Lobo, João; Queirós, Odília; Moreira, Roxana; Carvalho, Félix; Dinis-Oliveira, Ricardo Jorge
2017-08-15
Tramadol and tapentadol are two atypical synthetic opioid analgesics, with monoamine reuptake inhibition properties. Mainly aimed at the treatment of moderate to severe pain, these drugs are extensively prescribed for multiple clinical applications. Along with the increase in their use, there has been an increment in their abuse, and consequently in the reported number of adverse reactions and intoxications. However, little is known about their mechanisms of toxicity. In this study, we have analyzed the in vivo toxicological effects in liver and kidney resulting from an acute exposure of a rodent animal model to both opioids. Male Wistar rats were intraperitoneally administered with 10, 25 and 50mg/kg tramadol and tapentadol, corresponding to a low, effective analgesic dose, an intermediate dose and the maximum recommended daily dose, respectively, for 24h. Toxicological effects were assessed in terms of oxidative stress, biochemical and metabolic parameters and histopathology, using serum and urine samples, liver and kidney homogenates and tissue specimens. The acute exposure to tapentadol caused a dose-dependent increase in protein oxidation in liver and kidney. Additionally, exposure to both opioids led to hepatic commitment, as shown by increased serum lipid levels, decreased urea concentration, increased alanine aminotransferase and decreased butyrylcholinesterase activities. It also led to renal impairment, as reflected by proteinuria and decreased glomerular filtration rate. Histopathological findings included sinusoidal dilatation, microsteatosis, vacuolization, cell infiltrates and cell degeneration, indicating metabolic changes, inflammation and cell damage. In conclusion, a single effective analgesic dose or the maximum recommended daily dose of both opioids leads to hepatotoxicity and nephrotoxicity, with tapentadol inducing comparatively more toxicity. Whether these effects reflect risks during the therapeutic use or human overdoses requires focused
Heshmat R
2008-04-01
Background and purpose of the study: In many cases of diabetic foot ulcer (DFU) management, wound healing is incomplete, and wound closure and epithelial junctional integrity are rarely achieved. Our aim was to evaluate the maximum tolerated dose (MTD) and dose-limiting toxicity (DLT) of Semelil (ANGIPARS™), a new herbal compound for wound treatment, in a Phase I clinical trial. Methods: In this open-label study, six male diabetic patients with a mean age of 57±7.6 years were treated with escalating intravenous doses of Semelil, starting at 2 cc/day and rising to 13.5 cc/day, for 28 days. Patients were assessed with a full physical exam; variables analyzed included age, history and duration of diabetes, blood pressure, body temperature, weight, characteristics of the DFU, Na, K, liver function tests, complete blood count and differential (CBC & diff), serum amylase, HbA1c, PT, PTT, proteinuria and hematuria, and side effects were recorded. All measurements were taken at the beginning of treatment and at the end of weeks 2 and 4. We also evaluated Semelil's side effects at the end of weeks 4 and 8 after ending therapy. Results and major conclusions: Up to a dose of 10 cc/day, the foot ulcers improved dramatically, and we did not observe any clinical or laboratory side effects at this or lower dose levels. With a daily dose of 13.5 cc of Semelil we observed phlebitis at the infusion site, which was the only side effect. We therefore determined the MTD of Semelil to be 10 cc/day, the only DLT being phlebitis of the injection vein. The recommended dose of Semelil for I.V. administration in Phase II studies was 4 cc/day.
Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E
2014-01-01
Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on a five serum-PK sampling in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements, while improving the accuracy of results. Retrospective analysis of previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, Bayesian estimation based on the MAP approach of individual PK parameters was accomplished to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different than the MAP approach and averaged 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows an accurate individualized dosing of CY, with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.
Deliverable 1.2.1 Market Analysis and Business Plan
Peterson, Carrie Beth
2009-01-01
The main objective of this deliverable is to provide a short overview of the 4 communities involved in the pilots, the envisaged type of solutions and architectures to be deployed (chapter 2), and a market analysis at regional level (chapter 3) with related business cases. The market analysis will provide an overview of market requirements, current status and opportunities for the pilot service that will be provided in the context of ISISEMD. This will be realised by performing detailed studies on various sources. Proposals for business modelling...
Aoki, Masahiko; Sato, Mariko; Hirose, Katsumi; Akimoto, Hiroyoshi; Kawaguchi, Hideo; Hatayama, Yoshiomi; Ono, Shuichi; Takai, Yoshihiro
2015-04-22
Radiation-induced rib fracture after stereotactic body radiotherapy (SBRT) for lung cancer has been reported recently. However, the incidence of radiation-induced rib fracture after SBRT using moderate fraction sizes, with a long-term follow-up time, has not been clarified. We examined the incidence and risk factors of radiation-induced rib fracture after SBRT using moderate fraction sizes in patients with peripherally located lung tumors. During 2003-2008, 41 patients with 42 lung tumors were treated with SBRT to 54-56 Gy in 9-7 fractions. The endpoint of the study was radiation-induced rib fracture detected by CT scan after the treatment. All ribs for which the irradiated dose was more than 80% of the prescribed dose were selected and contoured to build dose-volume histograms (DVHs). Comparisons of several factors obtained from the DVHs and of the probabilities of rib fracture calculated by the Kaplan-Meier method were performed. Median follow-up time was 68 months. Among 75 contoured ribs, 23 rib fractures were observed in 34% of the patients during 16-48 months after SBRT; however, no patient complained of chest wall pain. The 4-year probabilities of rib fracture for a maximum rib dose (Dmax) of more than and less than 54 Gy were 47.7% and 12.9% (p = 0.0184), and for fraction sizes of 6, 7 and 8 Gy were 19.5%, 31.2% and 55.7% (p = 0.0458), respectively. Other factors, such as D2cc, mean rib dose, V10-55, age, sex, and planning target volume, were not significantly different. The doses and fractionations used in this study resulted in no clinically significant rib fractures for this population, but higher Dmax and dose per fraction resulted in an increase in asymptomatic grade 1 rib fractures.
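The fracture probabilities above are Kaplan-Meier estimates over follow-up with censoring. A minimal product-limit implementation (the follow-up times below are invented toy data, not the study's):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the fracture-free curve.
    times: follow-up in months; events: 1 = fracture observed, 0 = censored.
    Returns (time, S(time)) steps at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk, surv, curve, i = len(data), 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]
        d = sum(tied)                     # fractures at time t
        if d:
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= len(tied)              # events and censorings leave the risk set
        i += len(tied)
    return curve

# cumulative fracture probability by time t is 1 - S(t)
```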
Koshiishi, H.; Kimoto, Y.; Matsumoto, H.; Goka, T.
The Tsubasa satellite, developed by the Japan Aerospace Exploration Agency, was launched in Feb 2002 into Geo-stationary Transfer Orbit (GTO; perigee 500 km, apogee 36000 km) and operated well until Sep 2003. The objective of this satellite was to verify the function of commercial parts and new technologies of bus-system components in space. Thus the on-board experiments were conducted in the more severe radiation environment of GTO rather than in Geo-stationary Earth Orbit (GEO) or Low Earth Orbit (LEO). The Space Environment Data Acquisition equipment (SEDA) on board the Tsubasa satellite had the Single-event Upset Monitor (SUM) and the DOSimeter (DOS) to evaluate influences on electronic devices caused by the radiation environment, which was also measured by the particle detectors of the SEDA: the Standard DOse Monitor (SDOM) for measurements of light particles and the Heavy Ion Telescope (HIT) for measurements of heavy ions. The SUM monitored single-event upsets and single-event latch-ups that occurred in the test sample of two 64-Mbit DRAMs. The DOS measured accumulated radiation dose at fifty-six locations in the body of the Tsubasa satellite. Using the data obtained by these instruments, single-event and total-dose effects in GTO during the solar-activity maximum period, especially their rapid changes due to solar flares and CMEs in the region from L = 1.1 through L = 11, are discussed in this paper.
Fei, Tan; Yang, Lian-juan; Mo, Xiao-hui; Wang, Xiu-li; Jun, Gu
2014-11-01
Low-dose metronomic (LDM) chemotherapy with cytotoxic agents, aimed at disrupting tumor endothelial cells, is an alternative method to maximum tolerated dose chemotherapy targeting proliferating tumor cells in clinical practice. However, even in the LDM schedule, cytotoxic agents still exhibit serious side effects due to non-distribution and high accumulated doses in the body. Nanocarriers can maximize the efficacy of the encapsulated drug by adjusting the pharmacokinetics and bio-distribution pattern, and minimize excessive toxic side effects. In the present study, we prepared polyethylene glycol (PEG)-coated stealth nanoparticles containing paclitaxel (PTX-NP) in order to evaluate their accumulation in tumor and their anti-tumor activity following LDM administration. PTX-NPs were prepared by a modified emulsification/solvent diffusion method with methoxy PEG-poly(lactide). The in vitro viability, migration, and tube formation of primary human umbilical vein endothelial cells, in addition to thrombospondin-1 positive expression and microvessel density in vivo, confirmed the anti-angiogenic activity of PTX-NP. The cellular uptake and retention study, in addition to pharmacokinetics in Sprague-Dawley rats demonstrated sustained circulation of PTX-NP. The in vivo tumor accumulation of PTX-NP was monitored using the Xenogen IVIS 200 non-invasive optical imaging system. The anti-tumor activity of LDM PTX-NP was studied in B16 melanoma cancer-bearing mice in vivo. In conclusion, PTX-NP improved tumor accumulation and anti-tumor efficacy following LDM administration.
王雪丽; 陶剑; 史宁中
2005-01-01
The primary goal of a phase I clinical trial is to find the maximum tolerable dose of a treatment. In this paper, we propose a new stepwise method based on confidence bounds and information incorporation to determine the maximum tolerable dose among given dose levels. On the one hand, to avoid the occurrence of severe or even fatal toxicity and to reduce the number of experimental subjects, the new method starts from the lowest dose level and then proceeds in a stepwise fashion. On the other hand, to improve the accuracy of the recommendation, the final recommendation of the maximum tolerable dose incorporates the information from an additional experimental cohort at the same dose level. Furthermore, empirical simulation results show that the new method has real advantages in comparison with the modified continual reassessment method.
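The abstract names the two ingredients, a confidence bound guarding escalation from the lowest level and information incorporation via an extra cohort at the same level, without giving the exact rule. The sketch below is therefore only a generic illustration of that shape (the Wilson lower bound, cohort size 3, and the 0.33 target rate are assumptions, not the authors' method):

```python
import math
import random

def wilson_lower(x, n, z=1.645):
    """One-sided ~95% Wilson lower confidence bound on a toxicity rate."""
    p = x / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, (centre - margin) / (1 + z * z / n))

def stepwise_mtd(true_tox, target=0.33, cohort=3, seed=1):
    """Escalate from the lowest level while toxicity is not confidently
    above target; before stopping, incorporate one extra cohort at the
    same level. Returns the recommended level (-1: none tolerable)."""
    rng = random.Random(seed)
    level, x, n, confirmed = 0, 0, 0, False
    while True:
        x += sum(rng.random() < true_tox[level] for _ in range(cohort))
        n += cohort
        if wilson_lower(x, n) > target:      # confidently too toxic here
            if not confirmed:
                confirmed = True             # add one confirmation cohort
                continue
            return level - 1
        if level == len(true_tox) - 1:
            return level                     # highest level tolerated
        level, x, n, confirmed = level + 1, 0, 0, False
```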
Li, Ruochen; Englehardt, James D; Li, Xiaoguang
2012-02-01
Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided.
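The two-stage structure, a posterior-mode search first and then an MCMC chain started at that mode in place of a burn-in period, can be illustrated on a deliberately simple one-parameter Gaussian model (an off-the-shelf optimizer stands in for the GMCMC stage; the model is a toy, not the emergent DRF):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
y = rng.normal(2.0, 1.0, 50)   # toy observations; stand-in for dose-response data

def neg_log_post(mu):
    """Unnormalized negative log posterior: unit-variance Gaussian
    likelihood with a weak N(0, 10^2) prior on the mean."""
    return 0.5 * np.sum((y - mu) ** 2) + 0.5 * (mu / 10.0) ** 2

# Stage 1: posterior mode estimate via gradient-based optimization
mode = float(minimize(lambda v: neg_log_post(v[0]), x0=[0.0]).x[0])

# Stage 2: Metropolis chain initialized at the mode -- no burn-in discarded
def metropolis(start, steps=4000, scale=0.3):
    cur, cur_nlp, chain = start, neg_log_post(start), []
    for _ in range(steps):
        prop = cur + rng.normal(0.0, scale)
        prop_nlp = neg_log_post(prop)
        if np.log(rng.random()) < cur_nlp - prop_nlp:   # MH accept rule
            cur, cur_nlp = prop, prop_nlp
        chain.append(cur)
    return np.array(chain)

chain = metropolis(mode)
```

Because the chain starts at the mode, early samples are already drawn from the high-probability region, which is the burn-in saving the two-stage approach exploits.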
Check-up Measurement (update). Deliverable D5.22
Holtzer, A.C.G.; Giessen, A.M.D. van der; Djurica, M.
2015-01-01
This deliverable D5.22 presents the GEN6 check-up measurement. It describes the most prominent outcomes of the GEN6 project up to this point in time. The check-up measurement helps to focus the monitoring towards the most relevant achievements of the project, such that an efficient and well-targeted
SU-E-T-492: Influence of Clipping PTV in Build-Up Region On IMRT Plan Quality and Deliverability
Sharma, S; Manigandan, D; Sahai, P; Biswas, A; Subramani, V; Chander, S; Julkha, P; Rath, G [Fortis Hospital, Mohali, Punjab (India)
2015-06-15
Purpose: To study the influence of clipping the PTV from the body contour on plan quality and deliverability in the build-up region for a superficial target. Methods: Five previously treated patients with post-operative carcinoma of the parotid were re-planned for IMRT (6MV X-rays, sliding window technique, five fields and 60Gy/30 fractions) using the Eclipse treatment planning system (TPS), keeping dose volume constraints and all other parameters constant; only the PTV was clipped from the body contour by 0mm, 1mm, 2mm and 3mm, respectively. The planned fluence was transferred to a previously scanned solid water phantom by placing the I'matriXX array at 0.5cm depth (2mm slab + 3mm inherent). Fluence was delivered by a Varian CL2300C/D linac at 99.5cm source-to-detector distance. Measured fluence was compared with the TPS dose plane using 2D gamma evaluation with 3%/3mm DTA criteria. The total MU (monitor units) required to deliver a plan was also noted. For plan quality, PTV maximum dose, minimum dose, coverage index (CI = PTV volume covered by prescription dose/PTV) and heterogeneity index (HI = D5/D95) were analyzed using dose volume histograms (DVH). Results: The results of the gamma analysis between I'matriXX and TPS were 97.63±1.79%, 97.48±0.99%, 98.08±0.89% and 98.01±0.78% at 0.5cm build-up depth for 0, 1, 2 and 3mm PTV clipping, respectively. The I'matriXX measured dose was higher compared to TPS. Total MUs required for delivering a plan were 552±61, 503±47, 436±24 and 407±22. Maximum dose to PTV was 6635.80±62.01cGy, 6635.80±40.60cGy, 6608.43±51.07cGy and 6564.20±28.51cGy. Similarly, minimum dose to PTV was 3306.23±458.56cGy, 3546.57±721.01cGy, 4591.43±298.81cGy and 4861.90±412.40cGy. CI was 0.9347±0.020, 0.9398±0.021, 0.9448±0.022 and 0.9481±0.021. Similarly, HI was 1.089±0.015, 1.084±0.014, 1.078±0.009 and 1.074±0.008 for 0, 1, 2 and 3mm PTV clipping, respectively. Conclusion: Gamma function analysis resulted in almost similar results. However, I'matriXX was overestimating the dose
Francisco Maximino Fernandes
2010-04-01
The experiment was carried out in a protected environment (greenhouse) at the Faculty of Engineering, UNESP, Ilha Solteira-SP, with the objective of evaluating sources (limestone and calcium silicate slag) and doses (0.0, 0.5, 1.0, 1.5 and 2.0 times the recommended dose) of corrective on the bromatologic composition, tillering and dry matter production of mombaça grass (Panicum maximum Jacq.). The experimental design was completely randomized, with four replications. The number of tillers, the production of dry matter, and the contents of crude protein, neutral detergent fiber (NDF) and acid detergent fiber (ADF) were evaluated. The correctives influenced tillering in almost all of the countings. Limestone provided a larger production of dry matter at doses of 1.5 and 2.0 times the recommended dose. The bromatologic composition of the forage was not influenced by the correctives and doses.
Nakata, Manabu; Okada, Takashi; Komai, Yoshinori; Nohara, Hiroki [Kyoto Univ. (Japan). Hospital]
1996-08-01
Modern linear accelerators have four independent jaws and multileaf collimators (MLC) of 1 cm leaf width at the isocenter. Asymmetric fields defined by such independent jaws and irregular multileaf-collimated fields can be used to match adjacent fields or to spare the spinal cord in external photon beam radiotherapy. We have developed a new approximate algorithm for depth dose calculations at the collimator rotation axis. The program is based on Clarkson's principle and uses a more accurate modification of Day's method for asymmetric fields. Using this method, tissue-maximum ratios (TMR) and field factors of ten kinds of asymmetric fields and ten different irregular multileaf-collimated fields were calculated and compared with the measured data for 6 MV and 15 MV photon beams. The dose accuracy with the general A/Pe method was about 3%; with the new modified Day's method, however, accuracy was within 1.7% for TMR and 1.2% for field factors. The calculated TMR and field factors were found to be in good agreement with measurements for both the 6 MV and 15 MV photon beams. (author)
Francescon, Paolo; Beddar, Sam; Satariano, Ninfa; Das, Indra J.
2014-01-01
Purpose: To evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of the correction factor k_{Qclin,Qmsr}^{fclin,fmsr} for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of k_{Qclin,Qmsr}^{fclin,fmsr} enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and the PDDs and the OARs measured with the Exradin W1 plastic scintillator detector (PSD), and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD measurements for fields greater than those produced using a 10-mm collimator. However, with the detector stem parallel to the beam axis, the microchambers could be used for TMR measurements for all
Deliverable 1.2.1 Market Analysis and Business Plan
Peterson, Carrie Beth
2009-01-01
The main objective of this deliverable is to provide a short overview of the 4 communities involved in the pilots and the envisaged type of solutions and architectures to be deployed (chapter 2), and a market analysis at regional level (chapter 3) with related business cases. The market analysis will provide an overview of market requirements, current status and opportunities for the pilot service that will be provided in the context of ISISEMD. This will be realised by performing detailed studies on various sources. Proposals for business modelling and business cases (chapter 4) will rely on the concept of value chains. Value chains typically consist of several providers, which together produce a complex product or service.
Women who win with words: Deliverance via persuasive communication
R.G. Branch
2003-08-01
Full Text Available The Wise Woman of Abel Beth Maacah quells a rebellion (2 Sam. 20). Abigail, a beautiful and intelligent woman, rescues her household (1 Sam. 25). And the older sister of Moses, by tradition Miriam, saves her baby brother’s life (Ex. 2). These two women and a girl represent political saviours who facilitate the deliverance of a city, a community, and an individual via persuasive words. As winners with words, these orators contribute dynamically to the biblical text by providing an alternative way of deliverance, one enabling it to come through a means other than the sword. Via perceptive persuasion, they guide those with whom they interact toward choosing life and the common good. This article takes a cross-disciplinary approach to the biblical text by looking at the persuasive communication techniques these two women and a girl employ so successfully.
Skill Gap Analysis for Improved Skills and Quality Deliverables
Mallikarjun Koripadu
2014-10-01
Full Text Available With growing pressure to identify skilled resources in the Clinical Data Management (CDM) world of clinical research organizations, most CDM organizations are planning to improve skills within the organization in order to provide quality deliverables. In the changing CDM landscape, the ability to build, manage and leverage the skills of clinical data managers is critical, as is the need within CDM to proactively identify, analyze and address skill gaps for all the roles involved. In addition to domain skills, the evolving role of a clinical data manager demands diverse skill sets such as project management, six sigma, analytical, decision-making and communication skills. This article proposes a methodology of skill gap analysis (SGA) management as one potential solution to the big skill challenge that CDM is gearing up for. This would in turn strengthen CDM capability, scalability and consistency across geographies, along with improved productivity and quality of deliverables.
UMBRELLA Deliverable D 1.3 "Final Report"
Becker, Raik; Benavent Rodríguez, Belén; Bootsman, Rob; Eickmann, Jonas; Engl, Wulf Albrecht; Gilsdorf, Peter; Jansen, Michel; Krahl, Simon; Krause, Thilo; Moormann, Andreas; Morales-España, Germán; Paeschke, Helmut; Ramirez Elizondo, Laura; Roald, Line; Rogge, Michael
2015-01-01
This deliverable summarizes the content of the FP7 project UMBRELLA. It describes the objectives and the results achieved by the end of the four project years. UMBRELLA is also cooperating with the related FP7 project iTesla: some public workshops are performed jointly, test environments are harmonized, and a common proposal on the potential development of future TSO rules has been put forward.
Summary of ST-MA deliverables for LHC
Cennini, E; Ninin, P; Nunes, R; Scibile, L; CERN. Geneva. ST Division
2003-01-01
The ST/MA group is responsible for the monitoring of the CERN Technical Infrastructure as well as the design, installation and maintenance of personnel protection systems such as access control systems, fire and gas leak detection, safety alarm monitoring systems and radiation monitoring systems (in collaboration with TIS). This paper provides an overview of the main projects and services managed in the group and outlines the scope, the organisation and the planning of the main deliverables for LHC.
Gas Deliverability Model with Different Vertical Wells Properties
L. Mucharam
2003-11-01
Full Text Available We present here a gas deliverability computational model for a single reservoir with multiple wells. The questions of how long gas delivery can be sustained and how to estimate the plateau time are discussed. To answer these questions, a coupled method consisting of the material balance method and a gas flow equation is developed, assuming no water influx in the reservoir. Given the rate and the minimum pressure of gas at the processing plant, the gas pressure at the wellhead and at the bottom hole can be obtained, from which the gas deliverability can be estimated. In this paper we obtain a computational method that gives direct computation of the pressure drop from the processing plant to the wells, taking into account different well behavior. The AOF technique is used for obtaining the gas rate in each well. Further, the Tian & Adewumi correlation is applied for the pressure-drop model along vertical and horizontal pipes, and the Runge-Kutta method is chosen to compute the wellhead and bottomhole pressures in each well, which are then used to estimate the plateau times. We obtain a direct computational scheme of gas deliverability from reservoir to processing plant for a single reservoir with multi-well properties. Computational results give different profiles (i.e. gas rate, plateau and production times, etc.) for each well. Further, by selecting a proper flow-rate reduction, the flow distribution after the plateau time needed to sustain delivery is computed for each well.
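The coupled material-balance/deliverability idea above can be sketched, for a single well, as follows. This is a minimal illustration, not the paper's full model: it assumes a simple backpressure (AOF-style) deliverability equation q = C(pR^2 - pwf^2)^n and a linearized material balance with constant z-factor, and all numerical values (C, n, gas in place, pressures) are hypothetical.

```python
def plateau_time(G, p_i, pwf=50.0, C=5e-4, n=0.8,
                 q_plateau=2.0, dt=30.0):
    """Step the reservoir material balance forward in time (days) until the
    well can no longer deliver the contract (plateau) rate q_plateau.

    G   : original gas in place (illustrative volume units)
    p_i : initial reservoir pressure (bar)

    Deliverability: q = C * (pR^2 - pwf^2)^n   (backpressure equation)
    Material balance (z ~ constant): pR = p_i * (1 - Gp / G)
    """
    Gp, t = 0.0, 0.0
    while True:
        pR = p_i * (1.0 - Gp / G)                 # material-balance pressure
        q_max = C * max(pR**2 - pwf**2, 0.0)**n   # deliverable rate now
        if q_max < q_plateau:
            return t                  # plateau can no longer be sustained
        Gp += q_plateau * dt          # produce at the plateau rate for dt days
        t += dt
```

The real model additionally resolves the wellhead-to-plant pressure drop (Tian & Adewumi correlation, Runge-Kutta integration) before applying the deliverability equation; this sketch collapses that to a fixed bottomhole pressure.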
Vives, Marta; Ginestà, Mireia M; Gracova, Kristina; Graupera, Mariona; Casanovas, Oriol; Capellà, Gabriel; Serrano, Teresa; Laquente, Berta; Viñals, Francesc
2013-11-15
In this article, the effectiveness of a multi-targeted chemo-switch (C-S) schedule that combines metronomic chemotherapy (MET) after treatment with the maximum tolerated dose (MTD) is reported. This schedule was tested with gemcitabine in two distinct human pancreatic adenocarcinoma orthotopic models and with cyclophosphamide in an orthotopic ovarian cancer model. In both models, the C-S schedule had the most favourable effect, achieving at least 80% tumour growth inhibition without increased toxicity. Moreover, in the pancreatic cancer model, although peritoneal metastases were observed in control and MTD groups, no dissemination was observed in the MET and C-S groups. C-S treatment caused a decrease in angiogenesis, and its effect on tumour growth was similar to that produced by the MTD followed by anti-angiogenic DC101 treatment. C-S treatment combined an increase in thrombospondin-1 expression with a decrease in the number of CD133+ cancer cells and triple-positive CD133+/CD44+/CD24+ cancer stem cells (CSCs). These findings confirm that the C-S schedule is a challenging clinical strategy with demonstrable inhibitory effects on tumour dissemination, angiogenesis and CSCs.
Deliverable D2.1 - State of the Art in Tools for Creativity
Grube, Per-Pascal; Schmid, Klaus; Dolog, Peter;
2008-01-01
This deliverable provides a survey of creativity techniques that have been developed over time. It provides a reference model for characterizing them as a basis for selecting techniques that are particularly appropriate for the idSpace project. The deliverable also surveys existing creativity tools and characterizes them in terms of underlying techniques. Finally, the deliverable describes requirements for the idSpace environment and discusses to what extent it can benefit from existing tools.
Load Composition Model Workflow (BPA TIP-371 Deliverable 1A)
Chassin, David P.; Cezar, Gustavo V.; /SLAC
2017-07-17
This project is funded under Bonneville Power Administration (BPA) Strategic Partnership Project (SPP) 17-005 between BPA and SLAC National Accelerator Laboratory. The project is a BPA Technology Improvement Project (TIP) that builds on and validates the Composite Load Model developed by the Western Electricity Coordinating Council's (WECC) Load Modeling Task Force (LMTF). The composite load model is used by the WECC Modeling and Validation Work Group to study the stability and security of the western electricity interconnection. The work includes development of load composition data sets, collection of load disturbance data, and model development and validation. This work supports reliable and economic operation of the power system. This report was produced for Deliverable 1A of the BPA TIP-371 project entitled "TIP 371: Advancing the Load Composition Model". The deliverable documents the proposed workflow for the Composite Load Model, which provides the basis for the instrumentation, data acquisition, analysis and data dissemination activities addressed by later phases of the project.
Deliverable D3 - Low- and Medium-beta linac
A. Facco, A. Balabin, R. Paparella, D. Zenere, INFN-Laboratori Nazionali di Legnaro, Padova, Italy; D. Berkovits, J. Rodnizki, SOREQ, Yavne, Israel; J. L. Biarrotte, S. Bousson, A. Ponton, G. Olry, IPN Orsay, France; R. Duperrier, D. Uriot, CEA/Saclay, France; V. Zvyagintsev, TRIUMF, Vancouver, Canada.
The present document describes the Low- and Medium-beta section of the EURISOL DS Driver Accelerator. This section consists of a superconducting linac, based on Half-Wave (HWR) and SPOKE type resonators, preceded by a short, normal-conducting MEBT (Medium Energy Beam Transport) section that performs input beam matching. The scope of this linac is to bring the beams of H-, D+ and 3He++ produced by the Ion Injector (Deliverable D-5) to the energy and beam parameters required for injection in the superconducting High-beta linac (Deliverable D4-High beta linac). The present beam dynamics design reaches the goal of accelerating the required high current beams to the design energy (about 100 MeV/A, depending on the ion species), with minimum emittance growth and with low losses, using realistic and cost-effective, although innovative, technological solutions. The Low- and Medium-beta linac layout is described, together with the fundamental parameters and characteristics of its components and the system performance.
Project deliverables - a waste of time or a chance for knowledge transfer and dissemination?
Walter, Sylvia
2016-04-01
Deliverables are a common tool to measure a distinct output of a project. They should be meaningful in terms of the project's objectives and normally take the form of, e.g., a written report or document, a developed tool or software, or an organized training or conference. They can be scientific or technical. The number of deliverables must be reasonable and commensurate with the project and its content. Deliverables as contractual obligations are often time-consuming and often seen as a waste of "research" time, as one more administrative task without any use. However, deliverables are needed to verify the progress of a project and to convince the sponsor that the project is going in the right direction and the money is well invested. The presentation will deal with the question of how to use a deliverable in a way that is profitable for the project, and what the possibilities of use are.
Preliminary Collider Baseline Parameters: Deliverable D1.1
AUTHOR|(SzGeCERN)430609
2015-01-01
This deliverable provides a preliminary specification of the layout and target operation parameters for the FCC-hh hadron collider concept. They serve as a starting point for the studies in all work packages. The goal of the FCC hadron collider is to provide proton-proton collisions at a centre-of-mass energy of 100 TeV. The machine is compatible with ion beam operation. Assuming a nominal dipole field of 16 T, such a machine is based on a perimeter of 100 km. The machine is designed to accommodate two main proton experiments that are operated simultaneously. The machine delivers a peak luminosity of 5-30 x 10^34 cm^-2 s^-1. The layout allows for two additional special-purpose experiments.
Deliverability on the interstate natural gas pipeline system
NONE
1998-05-01
Deliverability on the Interstate Natural Gas Pipeline System examines the capability of the national pipeline grid to transport natural gas to various US markets. The report quantifies the capacity levels and utilization rates of major interstate pipeline companies in 1996 and the changes since 1990, as well as changes in markets and end-use consumption patterns. It also discusses the effects of proposed capacity expansions on capacity levels. The report consists of five chapters, several appendices, and a glossary. Chapter 1 discusses some of the operational and regulatory features of the US interstate pipeline system and how they affect overall system design, system utilization, and capacity expansions. Chapter 2 looks at how the exploration, development, and production of natural gas within North America is linked to the national pipeline grid. Chapter 3 examines the capability of the interstate natural gas pipeline network to link production areas to market areas, on the basis of capacity and usage levels along 10 corridors. The chapter also examines capacity expansions that have occurred since 1990 along each corridor and the potential impact of proposed new capacity. Chapter 4 discusses the last step in the transportation chain, that is, deliverability to the ultimate end user. Flow patterns into and out of each market region are discussed, as well as the movement of natural gas between States in each region. Chapter 5 examines how shippers reserve interstate pipeline capacity in the current transportation marketplace and how pipeline companies are handling the secondary market for short-term unused capacity. Four appendices provide supporting data and additional detail on the methodology used to estimate capacity. 32 figs., 15 tabs.
Weir, Matthew R; Hollenberg, Norman K; Zappe, Dion H;
2010-01-01
The blood pressure (BP)-lowering response to renin-angiotensin-aldosterone system blockade in hypertensive African-Americans is typically less than in whites. To determine whether higher than conventional doses of renin-angiotensin-aldosterone system blockade can improve BP reduction in African...
Kareva, Irina; Waxman, David J; Lakka Klement, Giannoula
2015-03-28
The administration of chemotherapy at reduced doses given at regular, frequent time intervals, termed 'metronomic' chemotherapy, presents an alternative to standard maximal tolerated dose (MTD) chemotherapy. The primary target of metronomic chemotherapy was originally identified as endothelial cells supporting the tumor vasculature, and not the tumor cells themselves, consistent with the emerging concept of cancer as a systemic disease involving both tumor cells and their microenvironment. While anti-angiogenesis is an important mechanism of action of metronomic chemotherapy, other mechanisms, including activation of anti-tumor immunity and a decrease in acquired therapeutic resistance, have also been identified. Here we present evidence supporting a mechanistic explanation for the improved activity of cancer chemotherapy when administered on a metronomic, rather than an MTD schedule and discuss the implications of these findings for further translation into the clinic.
Possibilities for a valorisation of geomorphologic research deliverables
Geilhausen, M.; Götz, J.; Otto, J.-C.; Schrott, L.
2009-04-01
Many geomorphological studies focus largely on fundamental research questions, although there are many applied fields such as landslide hazard assessment or the Water Framework Directive. As fundamental research is a common good, its outcomes should be more "open" and accessible to the public. This means that scientists have to find new ways of presenting their results and outcomes besides publishing in scientific journals. This paper shows possibilities for a valorisation of geomorphologic research deliverables using print as well as digital media. Geotrails explain remarkable and exciting landscape features using information boards and are becoming more and more popular and important for tourism in many parts of the world. With the growing interest in environmental change and outdoor activities, print media like field guides reach an increasing number of people. Field guides and Geotrails can be coupled in order to raise awareness of geomorphological landforms and to deliver more specific information on a site beyond the information given on the boards in the field. As field guides are designed for the general public, they can be used for educational purposes as well. Today, this information can also be found on the internet, offering virtual trips through landscapes using dynamic maps. Here, server-side GIS technologies (WebGIS) using standardised interfaces provide new possibilities to show geomorphic data to the public and to share them with the scientific community. Furthermore, data formats like XML or KML are powerful tools for data exchange and can be used in interactive data viewers like Google Earth. We will present the Geotrail "Geomorphologischer Lehrpfad am Fuße der Zugspitze. Das Reintal - Eine Wanderung durch Raum und Zeit" (Bavarian Alps, Germany). Additionally, three geomorphologic WebGIS applications (Geomorphologic map Turtmanntal, Permafrost map of Austria, Geomorphologic maps of Germany) will exemplify how geomorphologic information and
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Tølle, Martin; Zwegers, Arian; Vesterager, Johan
2003-01-01
This document, D412/D43 Virtual Enterprise Reference Architecture and Methodology (VERAM), is a result of the merging of the two main deliverables of work package 4 of the IMS GLOBEMEN project: D41 Reference Architecture (deliverable D412 for the European GLOBEMEN project) and D43 VME Guidelines. IMS GLOBEMEN is an inter-regional project aiming to develop methods, tools and architectures to support inter-enterprise operations in one-of-a-kind industries, in different lifecycle phases. The emphasis in this deliverable is to define an architectural framework, which will be a body of knowledge that supports future practical work in the area of global engineering and manufacturing in enterprise networks, supported by methodologies/guidelines. This deliverable describes the architectural framework VERAM, including a description and elaboration of its elements.
FY2004 Progress Summary and FY2005 Program Plan Statement of Work and Deliverables
Meier, W; Bibeau, C
2006-01-23
FY2004 progress summary and FY2005 program plan statement of work and deliverables for development of high average power diode-pumped solid state lasers, and complementary technologies for applications in energy and defense.
STUDY OF LYSINE AND ALANINE DELIVERANCE THROUGH POLYPYRROLE MEMBRANE
Adhitasari Suratman
2010-06-01
Full Text Available Electropolymerization of pyrrole and the use of polypyrrole membranes for lysine and alanine deliverance have been studied by the cyclic voltammetry technique. The polypyrrole membrane was prepared by electropolymerization of pyrrole in a water-based solvent containing sodium perchlorate as supporting electrolyte. Electropolymerization was carried out within a potential range of 0-1100 mV vs the Ag/AgCl reference electrode and at a scanning rate of 100 mV/s. In this study, lysine and alanine were used as molecules that can easily be loaded onto and released from a polypyrrole membrane. The presence of lysine or alanine during the electropolymerization process reduced the rate of electropolymerization of polypyrrole. In the transfer of lysine or alanine into the polypyrrole membrane, the interaction between polypyrrole and lysine or alanine was shown by the curve of the E½ oxidation potential versus -log C; the E½ oxidation potential shifted to more positive values with increasing concentration of lysine or alanine. In addition, the voltammetric responses of lysine and alanine transferred into the polypyrrole membrane were found to be Nernstian. The results indicate that polypyrrole could be used as a sensor for lysine and alanine. Keywords: Electropolymerization, polypyrrole membrane, voltammetry technique
QUEST2: Release 1, SA/Release 1 supporting documents deliverable set
Braaten, F.D.
1995-02-27
This document contains deliverables which reflect the last of the System Architecture phase analysis for the Quality, Environmental, Safety Tracking System redesign (QUEST2) project. These deliverables are focused on the final insights required to start functional design of the first QUEST2 release. They include the data definitions, conversion rules, standards for design and user interface, performance criteria, and rules to be followed during the prototyping activity described in the Project Management Plan.
Deliverable D2.3 - System Architecture and Technical Specifications: MITIGATE
Polemi, Nineta; Papastergiou, Spyros; Karantjias, Athanassios; Drosos, Nikos; Gouvas, Panagiotis; Duzha, Armend; Bianchi, Manuel; Campana, Flavio; Erler, Timo; Schauer, Stefan; Mouratidis, Haris; Pavlidis, Michalis; Rekleitis, Evangelos
2016-01-01
This deliverable reflects the outcomes of task T2.5 and provides details regarding the high-level architecture of the MITIGATE risk management system. More specifically, it provides information regarding the various components that comprise the high-level architecture and elaborates on the interdependencies between them. For each component, a specific analysis is performed regarding the offered functionalities and the required component interactions. MITIGATE envisages t...
Ellebæk, Mark Bremholm; Qvist, Niels; Schrøder, Henrik Daa; Rasmussen, Lars
2016-06-01
Introduction The treatment of esophageal atresia (OA) is challenging. The main goal is to achieve primary anastomosis. We have previously demonstrated in a pig model that intramural injection of botulinum toxin type A (BTX-A) resulted in significant elongation of the esophagus during tensioning until the bursting point. The objectives of the present study were to investigate the influence of different amounts of intramural BTX-A on the stretch-tension characteristics and histological changes of the esophagus in piglets. Materials and Methods A total of 52 piglets were randomized to four groups receiving 2, 4, or 8 units/kg of BTX-A or isotonic saline (placebo). After 1 hour of rest, the esophagus was harvested and subjected to a stretch-tension test and histological examination to assess changes in the density of presynaptic vesicles in the nerve cells. Results Overall, 9 of the 52 animals were excluded from analysis due to problems with the stretch-tension test or death from anesthesia. The maximum loads were higher in the BTX-A groups (2 units/kg: +2.1 N; 4 units/kg: +1.3 N; 8 units/kg: +1.9 N) than in the placebo group (p = 0.046). There were no significant differences in percentage elongation or histology. Conclusions This study demonstrated that injection of 2 units/kg BTX-A into a nonanastomosed esophageal wall resulted in a modest increase in the maximum load achieved before bursting; this may be due to the muscle-relaxant effect of BTX-A. BTX-A injection produced no significant effects on elongation or esophageal histology. The clinical usefulness of BTX-A in treatment of OA is still unclear. Georg Thieme Verlag KG Stuttgart · New York.
Gómez, Susana G; Bueren, Juan A; Faircloth, Glynn; Albella, Beatriz
2003-01-01
Acute cytotoxic exposure causes decreases in bone marrow progenitors that precede the neutrophil nadir. Experiments in animal models reveal a relationship between the reduction in granulocyte-macrophage progenitors (CFU-GM) and the decrease in absolute neutrophil count [Toxicol. Pathol. 21 (1993) 241]. Recently, the prevalidation of a model for predicting acute neutropenia by the CFU-GM assay has been reported [Toxicol. In Vitro 15 (2001) 729]. The model was based on prediction of the human MTD by adjusting the animal-derived MTD for the differential sensitivity between CFU-GM from animal species and humans. In this study, this model has been applied to a new antitumoral drug, Yondelis (Ecteinascidin; ET-743). Preclinical studies showed that hematotoxicity was the main side effect in mice, with an MTD of 600 microg/m2 [Drugs Future 21 (1996) 1155]. The sensitivity of myeloid progenitors was higher in mice than in humans, with IC90 values of 0.69+/-0.22 nM and 1.31+/-0.21 nM for murine and human CFU-GMs, respectively. This study predicts a human MTD of 1145 microg/m2. The reported human MTD of ET-743 given as a 24-h continuous infusion every 3 weeks is 1800 microg/m2 [J. Clin. Oncol. 19 (2001) 1256]. Since our predicted MTD is within fourfold of the actual MTD (the interspecies variation in tolerated dose due to differences in clearance rates, metabolism pathways and infusion rates), the result confirms the utility of the prediction model.
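The MTD prediction described above reduces to scaling the mouse MTD by the interspecies CFU-GM sensitivity ratio. Using the rounded IC90 values quoted in the abstract (the study itself presumably used unrounded values, which is why it reports 1145 rather than ~1139 microg/m2):

```python
# Worked sketch of the CFU-GM-based human MTD prediction: the human MTD is
# estimated by scaling the mouse MTD by the ratio of the species' in vitro
# myeloid-progenitor IC90 values (a lower IC90 means higher sensitivity).
mtd_mouse = 600.0    # microg/m2, mouse MTD
ic90_mouse = 0.69    # nM, murine CFU-GM
ic90_human = 1.31    # nM, human CFU-GM

mtd_human_pred = mtd_mouse * ic90_human / ic90_mouse
# ~1139 microg/m2 with these rounded IC90s; the study reports 1145 microg/m2,
# within fourfold of the actual clinical MTD of 1800 microg/m2.
```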
Orlandi, M; Botti, A; Sghedoni, R; Cagni, E; Ciammella, P; Iotti, C; Iori, M
2016-12-01
Glioblastoma Multiforme (GBM) is the most common malignant brain tumor and frequently recurs in the same location after radiotherapy. Intensive treatment targeting the localized lesion is required to improve GBM outcome, but dose escalation using conventional methods is limited by healthy-tissue tolerance. Helical Tomotherapy (HT) Dose Painting (DP) treatments were simulated to safely deliver high doses in the recurrent regions. Apparent Diffusion Coefficient (ADC) data from five recurrent GBM cases were retrospectively considered for planning. Hypo-fractionated (25-50 Gy, 5 fractions) voxel-based prescriptions were suitably converted to personalized structure-based dose maps to create DP plans with a commercial Treatment Planning System. Optimized plans were generated and analyzed in terms of plan conformity to the dose prescription (Q0.90-1.10), tolerance of the healthy tissues (DMAX), and dosimetric accuracy of the deliverable plans (γ-index). Only three of the five cases could receive a safe retreatment without violating the maximum critical-organ dose constraints. The conformity of the simulated plans was between 40.9% and 79.9% (Q0.90-1.10), their delivery time was in the range of 38.3-63.6 min, and the dosimetry showed γ-index pass rates of 82.4-92.4%. This study proved the ability of our method to simulate personalized, deliverable and dosimetrically accurate dose-painting-by-numbers (DPBN) plans. HT hypo-fractionated treatments guided by ADC maps can be realized and applied to deliver high doses in the GBM recurrent regions, although there are some critical issues related to low Q0.90-1.10 values, to exceeding healthy-tissue dose constraints for some patients, and to long delivery times. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
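The γ-index used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a point passes if some nearby point of the evaluated distribution agrees within both tolerances. As a rough illustration only, a simplified 1-D global gamma analysis might be sketched as follows (real evaluations are 2-D/3-D and search at sub-grid resolution; the function name and defaults are invented for the example):

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing, dd=0.03, dta=3.0):
    """Simplified 1-D global gamma analysis (3%/3 mm by default).

    dose_eval, dose_ref : doses sampled on a common grid with `spacing` (mm)
    dd  : dose criterion as a fraction of the reference maximum (global norm)
    dta : distance-to-agreement criterion (mm)

    gamma(i) = min over j of sqrt(((x_j - x_i)/dta)^2
                                  + ((D_eval_j - D_ref_i)/dd_abs)^2);
    a reference point passes when gamma <= 1.
    """
    dose_eval = np.asarray(dose_eval, float)
    dose_ref = np.asarray(dose_ref, float)
    dd_abs = dd * dose_ref.max()                 # absolute dose tolerance
    x = np.arange(dose_eval.size) * spacing      # grid positions in mm
    gammas = np.empty(dose_ref.size)
    for i, (xi, Di) in enumerate(zip(x, dose_ref)):
        g2 = ((x - xi) / dta) ** 2 + ((dose_eval - Di) / dd_abs) ** 2
        gammas[i] = np.sqrt(g2.min())            # best agreement for point i
    return 100.0 * np.mean(gammas <= 1.0)        # pass rate in percent
```

Identical distributions give a 100% pass rate, while a uniform dose error larger than the dose tolerance fails everywhere, since the distance term cannot compensate for a spatially constant dose difference.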
Adaptive interpretation of gas well deliverability tests with generating data of the IPR curve
Sergeev, V. L.; Phuong, Nguyen T. H.; Krainov, A. I.
2017-01-01
The paper considers topical issues of improving accuracy of estimated parameters given by data obtained from gas well deliverability tests, decreasing test time, and reducing gas emissions into the atmosphere. The aim of the research is to develop the method of adaptive interpretation of gas well deliverability tests with a resulting IPR curve and using a technique of generating data, which allows taking into account additional a priori information, improving accuracy of determining formation pressure and flow coefficients, reducing test time. The present research is based on the previous theoretical and practical findings in the spheres of gas well deliverability tests, systems analysis, system identification, function optimization and linear algebra. To test the method, the authors used the field data of deliverability tests of two wells, run in the Urengoy gas and condensate field, Tyumen Oblast. The authors suggest the method of adaptive interpretation of gas well deliverability tests with the resulting IPR curve and the possibility of generating data of bottomhole pressure and a flow rate at different test stages. The suggested method allows defining the estimates of the formation pressure and flow coefficients, optimal in terms of preassigned measures of quality, and setting the adequate number of test stages in the course of well testing. The case study of IPR curve data processing has indicated that adaptive interpretation provides more accurate estimates on the formation pressure and flow coefficients, as well as reduces the number of test stages.
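As background to the interpretation step, conventional processing of a multi-stage deliverability test fits the binomial (pressure-squared) IPR equation pR² − pwf² = A·q + B·q² to the stage data; the adaptive method of the paper additionally refines such estimates stage by stage using prior information. A minimal sketch of the conventional least-squares step on synthetic data (coefficients and units are invented for illustration, not taken from the Urengoy field data):

```python
import numpy as np

# Binomial deliverability (IPR) equation: pR^2 - pwf^2 = A*q + B*q^2.
# Synthetic four-stage test with hypothetical coefficients and units.
A_true, B_true, pR = 2.0, 0.05, 250.0
q = np.array([50.0, 100.0, 150.0, 200.0])   # stage flow rates
pwf = np.sqrt(pR**2 - (A_true * q + B_true * q**2))  # bottomhole pressures

# Least-squares fit of the flow coefficients A and B from (q, pwf) pairs
X = np.column_stack([q, q**2])
y = pR**2 - pwf**2
A_fit, B_fit = np.linalg.lstsq(X, y, rcond=None)[0]
```

On noise-free synthetic data the fit recovers the coefficients exactly; with real stage measurements the residuals give a handle on how many test stages are actually needed.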
Landscape-scale learning: from lectures to professional deliverables
Follain, S.; Devaux, N.; Colin, F.
2009-04-01
Earth Science engineers (Master degree) need to be trained in multidisciplinary approaches but also to learn how to combine theoretical and practical knowledge. Nevertheless, we notice it is not always easy to combine theoretical and practical issues in the same lecture. In order to build bridges between these instructions we propose to students a new teaching unit: "Sustainability Diagnosis". Its originalities are i) to be coupled to another (theoretical) teaching unit dealing with landscape-scale learning, ii) to be performed in project mode, and iii) to provide deliverables ordered by professional users, e.g. farmers and catchment managers. The landscape-scale learning is a classical learning period with lectures provided by specialists in various disciplines, e.g. Soil Science, Hydrology and Agronomy, which focus on a common spatial scale, the landscape. It explicitly develops knowledge on energy and matter transfers between landscape components and explains potential effects of human-induced disturbances on the evolution of both landscape and fluxes. The deliverables for the farmer (the chosen professional user) concern issues of crop-system sustainability. This requires a diagnosis, on the one hand, of soil use and management potentialities and, on the other hand, of environmental externalities (soil and water conservation) induced by the cropping system. The communication will present the work done by 14 students during this new two-week teaching unit (Sustainability Diagnosis). This first attempt appraised a one-square-kilometre area located in the Saint-Chinian vineyard region (South of France). This production area with guarantee of origin (AOC) has productivity constraints linked to landscape properties which directly impact farmer decisions. At the same time it has been shown that the vineyard crop system induces water pollution by pesticides and increases soil degradation; in a sustainability perspective, these environmental impacts need to be reduced. The learning period was
The establishment of a new deliverability equation considering threshold pressure gradient
Li Lezhong; Li Xiangfang; He Dongbo; Xu Hanbing
2009-01-01
The flow mechanism of a low-permeability gas reservoir differs from that of a conventional gas reservoir; especially for reservoirs with higher irreducible water saturation, a threshold pressure gradient exists. At present, in all deliverability equations the additional pressure drop caused by the threshold pressure gradient is treated as a constant, but this approach introduces large errors in practical application. Based on the non-Darcy steady flow equation, the definite integral for the additional pressure drop is solved in this paper, showing that the additional pressure drop is not a constant but depends on production data. A new deliverability equation is derived, together with the corresponding processing method for modified isochronal test data. The new deliverability equation proved practical in onsite application.
Computational Design of Metal-Organic Frameworks with High Methane Deliverable Capacity
Bao, Yi; Martin, Richard; Simon, Cory; Haranczyk, Maciej; Smit, Berend; Deem, Michael; Deem Team; Haranczyk Team; Smit Team
Metal-organic frameworks (MOFs) are a rapidly emerging class of nanoporous materials with largely tunable chemistry and diverse applications in gas storage, gas purification, catalysis, etc. Intensive efforts have been made over the past decades to develop new MOFs with desirable properties, both experimentally and computationally. To guide experimental synthesis with limited throughput, we develop a computational methodology to explore MOFs with high methane deliverable capacity. This de novo design procedure applies known chemical reactions, considers synthesizability and geometric requirements of organic linkers, and efficiently evolves a population of MOFs with desirable properties. We identify about 500 MOFs with higher deliverable capacity than MOF-5 in 10 networks. We also investigate the relationship between deliverable capacity and internal surface area of MOFs. This methodology can be extended to MOFs with multiple types of linkers and multiple SBUs. DE-FG02-12ER16362.
In Silico Discovery of High Deliverable Capacity Metal-Organic Frameworks
Bao, Yi; Martin, Richard; Simon, Cory; Haranczyk, Maciej; Smit, Berend; Deem, Michael; Michael W. Deem Team; Maciej Haranczyk Team; Berend Smit Team
2015-03-01
Metal organic frameworks (MOFs) are actively being explored as potential adsorbed natural gas storage materials for small vehicles. Experimental exploration of potential materials is limited by the throughput of synthetic chemistry. We here describe a computational methodology to complement and guide these experimental efforts. The method uses known chemical transformations in silico to identify MOFs with high methane deliverable capacity. The procedure explicitly considers synthesizability with geometric requirements on organic linkers. We efficiently search the composition and conformation space of organic linkers for nine MOF networks, finding 48 materials with higher predicted deliverable capacity (at 65 bar storage, 5.8 bar depletion, and 298 K) than MOF-5 in four of the nine networks. The best material has a predicted deliverable capacity 8% higher than that of MOF-5. US Department of Energy.
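The deliverable capacity screened for here is simply the methane uptake at the 65 bar storage condition minus the residual uptake at the 5.8 bar depletion condition. A toy illustration with a single-site Langmuir isotherm (the parameter values and units are invented for illustration and are not taken from the screened MOFs):

```python
# Deliverable capacity from a hypothetical Langmuir methane isotherm:
# uptake at 65 bar storage minus residual uptake at 5.8 bar depletion, 298 K.
def langmuir(p_bar, q_max=220.0, K=0.03):
    """Uptake (illustrative units, e.g. cm3 STP/cm3) at pressure p_bar."""
    return q_max * K * p_bar / (1.0 + K * p_bar)

deliverable = langmuir(65.0) - langmuir(5.8)  # gas actually usable per cycle
```

The depletion term is why a steeper isotherm is not automatically better: material that binds methane too strongly at low pressure raises the 5.8 bar residual and lowers the deliverable capacity.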
Ludwig, Timothy D.; Geller, E. Scott
1991-01-01
A practical intervention program, targeting the safety belt use of pizza deliverers at two stores, significantly increased the use of both safety belts (143% above baseline) and turn signals (25% above baseline). Control subjects (i.e., pizza deliverers at a third no-intervention store and patrons driving to the pizza stores) showed no changes in belt or turn signal use over the course of the 7-month study. The intervention program was staggered across the two pizza stores and consisted of a group me...
Capabilities and costs for ancillary services provision by wind power plants. Deliverable D 3.1
Faiella, Mariano; Hennig, Tobias; Cutululis, Nicolaos Antonio
This report is the deliverable of the third work package of the REserviceS project and describes the technical options and related costs for the provision of ancillary services specifically from wind energy technologies. It is focused on the set of ancillary services defined in the previous work package 2, shown in table 1 below. The information from this deliverable will be used as input to the case studies in subsequent work packages, which are expected to provide additional insights to the actual provision of ancillary services in transmission and distribution networks.
Pedro Henrique de Cerqueira Luz
2000-08-01
In a degraded pasture of Tobiatã grass (Panicum maximum Jacq. cv. Tobiatã) in Pirassununga - SP, an experiment was carried out to observe the effects of levels and types of limestone, with or without incorporation, on tillering, ground cover and pasture productivity during six cuts from 1996 to 1997. Dry matter yield did not respond to the levels and types of limestone; however, incorporation of the limestone by harrowing was effective, and across the cuts there was an increase of dry matter production in the summer and a reduction in the winter. The forage grass occupied 72.8% of the ground cover, with a trend toward smaller areas of bare ground in the treatments with calcined limestone.
Moneta, D; Geroni, C; Valota, O; Grossi, P; de Jonge, M J A; Brughera, M; Colajori, E; Ghielmini, M; Sessa, C
2003-03-01
A haematotoxicity model was proposed by Parchment in 1998 to predict the maximum-tolerated dose (MTD) in humans of myelosuppressive antitumour agents by combining data from in vitro clonogenic assays on haematopoietic progenitors with in vivo systemic exposure data in animals. A prospective validation of this model in humans was performed with PNU-159548, a novel agent showing selective dose-limiting myelosuppression in animals. PNU-159548 and its main metabolite, PNU-169884, were tested in vitro on murine, canine and human colony forming units-granulocyte macrophage (CFU-GM) and in vivo on mice and dogs. The IC90 ratios (ICx = concentration inhibiting x% of colony growth) for CFU-GM and drug plasma protein binding were used to adjust the target area under the plasma concentration versus time curve (AUC) and predict the human MTD. The predicted MTD was compared with values achieved in phase I studies. Canine CFU-GM were 6-fold more sensitive, and human CFU-GM 1.7-fold less sensitive, than murine CFU-GM; the interspecies CFU-GM IC90 ratio can predict the human MTD with good quantitative accuracy. Inhibition of common haemopoietic progenitors by PNU-159548 induced neutropenia/thrombocytopenia in animals and thrombocytopenia in patients, probably due to the higher sensitivity to the compound observed in human colony forming units-megakaryocyte (CFU-MK).
Deliverable 1.2.7: Cross-cultural benefit segmentation of consumers
Reinders, M.J.; Onwezen, M.C.; Sijtsema, S.J.; Zimmermann, K.L.; Berg, van den I.; Jasiulewicz, A.; Guardia, M.D.; Guerrero, L.
2010-01-01
The present report, deliverable D.1.2.7, gives a final view of the work done in ISAFruit Work Package (WP) 1.2. Average European fruit consumption is below the recommended level and, moreover, the consumption level is still decreasing in Europe. A large survey was carried out in four European countries
Twisk, D.; Hoekstra, T.
2010-01-01
This deliverable describes the quality assurance process and its results. More specifically, the individual requirements and outcomes of the quality assurance process are reported. Furthermore, the implementation of the guidelines that were drawn up in advance are discussed. (Author/publisher)
Kyroudi, Archonteia; Petersson, Kristoffer; Ghandour, Sarah; Pachoud, Marc; Matzinger, Oscar; Ozsahin, Mahmut; Bourhis, Jean; Bochud, François; Moeckli, Raphaël
2016-08-01
Multi-criteria optimization provides decision makers with a range of clinical choices through Pareto plans that can be explored during real time navigation and then converted into deliverable plans. Our study shows that dosimetric differences can arise between the two steps, which could compromise the clinical choices made during navigation.
Fluctuations and predictability of wind and hydropower. Deliverable 2.1
Giebel, Gregor; Holttinen, H.; Söder, L.
2004-01-01
The report forms the deliverable D2.1 of the EU supported project Wind Power Integration in a Liberalised Electricity Market (WILMAR). The handling and generation of the necessary wind and hydro time series for the project’s power system planning simulation model is described. The wind power and t...
Monitoring framework and description of indicators. Deliverable D5.1
Holtzer, A.C.G.; Giessen, A.M. van der; Munck, S.G.E. de; Poel, M.A.; Smets, R.C.J.
2012-01-01
This deliverable describes the monitoring framework that will be used to monitor and evaluate the GEN6 project and its nine pilots. The main topics are IPv6 uptake and governance, as described by the EC. Monitoring and evaluation will be done during the course of the project. This report describes t
Commandeur, J.J.F.; Bijleveld, F.D.; Bergel, R.
2009-01-01
This deliverable provides an application of theories and methods documented in Deliverables 7.4 and 7.5 of work package 7 of the SafetyNet project. In this deliverable, use of select analysis techniques is demonstrated through real world road safety analysis problems, using aggregate data which may
A qualitative investigation into the so-called ministry of deliverance
J. Janse van Rensburg
2010-07-01
Since the publication of the book “The occult debate” (Janse van Rensburg, 1999) it has become clear that epistemological views on occultism within the reformed tradition have drastically diverged. During the General Synod of the Dutch Reformed Church in 2007, the report of the Algemene Taakspan vir Leer en Aktuele Sake (ATLAS) on a ministry of deliverance denied the existence of the devil and claimed that it would be unscientific to embark on empirical research in this regard, because of the impossibility of verifying information gathered in this manner. However, it is the hypothesis of this article that qualitative information could assist in attaining a clearer understanding of the need for a ministry of deliverance. In this article the methodology of the qualitative research is explained and the narratives of participants are revealed. Thereafter the responses of the participants are evaluated.
Data needs and computational requirements for ST decision making. Internal deliverable ID6.2.1
Clement, Rémy; Tournebise, Pascal; Perkin, Samuel
The objective of this deliverable is to present the requirements for adapting available tools/models and identifying data needs for probabilistic reliability analysis and optimal decision-making in the short-term decision making process. It will serve as a basis for the next tasks of GARPUR work at the interfaces between the short-term operation planning and real-time decision making process. Adhering to the title of the task, the various chapters in the deliverable discuss the exogenous factors, i.e., load forecasting, component failure rates and influence of weather and renewable energy sources... It was written by several partners, two of them being European TSOs, and the four others being academic partners. Special attention has been paid to address every topic in the short-term decision making process as considered within GARPUR, so that no important issue has been forgotten in the grey zones...
Deliverable 2.2.1 Use Scenarios in the Pilot Services
Peterson, Carrie Beth
2010-01-01
Deliverable 2.2.1 - Use scenarios in the pilot services. During the first phase of the ISISEMD project (WP1), the user requirements analysis phase, mainly D1.1.1, proposed a list of services organized in three groups (bundles, as in the DoW) and classified as mandatory or not mandatory, leading to different user scenarios. Afterwards the services were analyzed and broken down into sub-functions. The outcome of this process represents a further input document guiding the identification and adaptation of the most suitable products and solutions to provide these required services. Deliverable D.2.2.1 describes guidance and usage scenarios of the service bundles to be implemented in the ISISEMD pilot platform in each of the four regions during the pilot. The services/sub-functions are presented as workflows and also described by presenting how the Graphical User Interface will be organized and designed, thus reflecting how...
Tølle, Martin; Zwegers, Arian; Vesterager, Johan
2003-01-01
IMS Globemen is an inter-regional project aiming to develop methods, tools and architectures to support inter-enterprise operations in one-of-kind industries, in different lifecycle phases. This deliverable describes an architectural framework, VERAM, including a description/elaboration of its elements... An introduction to the document is given in Chapter 1. - Chapter 2 introduces a vision for collaborative commerce and positions this deliverable in the overall context of the GLOBEMEN project and the GLOBEMEN vision in particular. - Chapter 3 describes the basic concepts applied in GLOBEMEN. This includes “one-of-a-kind-production”, the concept of the Virtual Enterprise (VE) and the Virtual Enterprise Reference Architecture (VERA) developed in GLOBEMEN. - In Chapter 4 a so-called life history example is presented to give a possible scenario about how an enterprise network could evolve over time, and to point out the need for preparing...
Guidelines for Inter-Enterprise Management (IEM), GLOBEMEN Deliverable D23
Tølle, Martin; Vesterager, Johan
2002-01-01
This document is a deliverable of Work package 2 of the IMS Globemen (GMN) project: D23 Guidelines for Inter-Enterprise Management (IEM). IMS Globemen is an inter-regional project aiming to develop methods, tools and architectures to support inter-enterprise operations in one-of-kind industries... to the Virtual Enterprise concept of GLOBEMEN, as well as introducing different types of partnership and the information needs related to each of these. - Chapter 4 gives a description of a Vision and Business Environment for OKP. This section starts with a description of the holistic vision of how an OKP enterprise... (Specification). Correspondingly, this report operates as input for further generalisation over the lifecycle in D43: Guidelines for Virtual Manufacturing Enterprise. The main objective of the deliverable is to describe guidelines for how companies may operate in an inter-enterprise environment supported by C-Project...
Deliverable 1.2.7: Cross-cultural benefit segmentation of consumers
Reinders, M.J.; Onwezen, M.C.; Sijtsema, S.J.; Zimmermann, K.L.; Berg, van den, Aad; Jasiulewicz, A.; Guardia, M.D.; Guerrero, L.
2010-01-01
The present report, deliverable D.1.2.7, gives a final view of the work done in ISAFruit Work Package (WP) 1.2. Average European fruit consumption is below the recommended level and, moreover, the consumption level is still decreasing in Europe. A large survey was carried out in four European countries that consisted of questions regarding the importance consumers attach to food-related benefits in general and for specific situations, personal orientations of the consumers, personal characteristi...
Identification of preferred dipole design options and cost estimates: Deliverable D5.2
Tommasini, Davide
2017-01-01
This document contains a description of the preferred 16 Tesla dipole magnet baseline design with its expected performances. The document also includes an analysis of the individual merits and risks of the different, initial design options and gives a justification for the selection of the baseline design. The deliverable includes expected field levels, field errors and a cost estimate, which serve as input for the arc design consolidation.
Deliverable D4.1. Report on Background Risk Management/Assessment Systems Enhancement
König, Sandra; Kollmitzer, Christian; Latzenhofer, Martin; Schauer, Stefan; Gouvas, Panagiotis; Drosos, Nikos; Mouratidis, Haris; Pavlidis, Michalis; Rekleitis, Evangelos; Krantjias, Thanos; Papastergiou, Spyros; Patsakis, Constantinos; Polemi, Nieta; Glykos, Stamatios
2016-01-01
The scope of the current deliverable is to elaborate on the modifications that are needed in order to realize the MITIGATE platform. The purpose of the platform is to offer sophisticated services for cyber risk assessment, threat mitigation and simulation. The risk assessment functionality aims to quantify the risks that derive from the various cyber vulnerabilities associated with specific assets that participate in a supply chain service. Meanwhile, the mitigation and simulation functio...
Go-Lab Deliverable D1.4 Go-Lab classroom scenarios handbook
2015-01-01
This deliverable presents the Go-Lab scenarios handbook. This handbook offers six different scenarios that are meant to help teachers design ILSs. Each scenario represents a specific pedagogical method within the overall Go-Lab inquiry approach. The six Go-Lab inquiry scenarios are labelled as follows:
• The basic scenario
• The jigsaw approach
• Six changing hats
• Learning by critiquing
• Structured controversy
• Find the mistake
In a later stage, when a suitable modelling tool has been found...
Deliverable 3.3.2 Specification of tests and test groups
Peterson, Carrie Beth; Mitseva, Anelia; Harpur, Jill
2009-01-01
Deliverable 3.3.2: Specification of tests and test groups. One of the main goals of the ISISEMD project is to offer innovative ICT services to improve the quality of life of elderly persons with cognitive problems or mild dementia and of their informal and formal caregivers who provide every day care for them. This will be done via integrating intelligent scalable ICT services which will be tested for a period of 12 months under realistic conditions. Offering the services could not be complete without evaluating quality of life improvement, user acceptance and user satisfaction with a representative...
MITIGATE - Deliverable D2.2 - Evidence-driven maritime supply chain risk assessment approach
Polemi, Nineta; Papastergiou, Spyridon; Karantzias, Athanassios; Georgiakodis, Fotios; Patsakis, Constantinos; Ntrigkogias, Christos; Schauer, Stefan; Latzenhofer, Martin; König, Sandra; Göllner, Johannes; Buhl, Reiner; FIEDLER, Ralf; Bosse, Claudia; Mouratidis, Haris; Pavlidis, Michalis
2016-01-01
Deliverable 2.2 reports the outcomes of tasks T2.3 “Specifications of Mathematical Instruments, Risk and Assurance Models” and T2.4 “Evidence-Driven Maritime Supply Chain Risk Assessment (g-MSRA) Specifications” of work package WP2. The document identifies three methodologies and frameworks, viz. Secure Tropos, AECID and MEDUSA, that are deemed relevant to the project and provided input to the formulation of the MITIGATE methodology or can be exploited in the implementation phase of the MITIG...
Ensuring on-time quality data management deliverables from global clinical data management teams
Zia Haque
2010-01-01
The growing emphasis on off-site and off-shore clinical data management activities mandates a paramount need for adequate solutions geared toward on-time, quality deliverables. The author has been leading large teams that have been involved in successful global clinical data management endeavors. While each study scenario is unique and has to be approached as such, there are several elements in defining strategy and team structure in global clinical data management that can be applied universally. In this article, key roles, practices, and high-level procedures are laid out as a road map to ensure success with the model.
McIntosh, Chris; McNiven, Andrea; Jaffray, David A; Purdie, Thomas G
2016-01-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present an atlas-based approach which learns a dose prediction model for each patient (atlas) in a training database, and then learns to match novel patients to the most relevant atlases. The method creates a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces any requirement for specifying dose-volume objectives for conveying the goals of treatment planning. A probabilistic dose distribution is inferred from the most relevant atlases, and is scalarized using a conditional random field to determine the most likely spatial distribution of dose to yield a specific dose prior (histogram) for relevant regions of interest. Voxel-based dose mimicking then converts the predicted dose distribution to a deliverable treatment plan dose distribution. In this study, we ...
Cost savings deliverables and criteria for the OST technology decision process
McCown, A.
1997-04-01
This document has been prepared to assist focus area (FA) technical and management teams in understanding the cost savings deliverables associated with a technology system during its research and development (R and D) phases. It discusses the usefulness of cost analysis in the decision-making process, and asserts that the level of confidence and data quality of a cost analysis is proportional to the maturity of the technology system's development life cycle. Suggestions of specific investment criteria or cost savings metrics that a FA might levy on individual research projects are made, but the final form of these elements should be stipulated by the FA management based on their rationale for a successful technology development project. Also, cost savings deliverables for a single FA will be more detailed than those for management of the Office of Science and Technology (OST). For example, OST management may want an analysis of the overall return on investment for each FA, while the FA program manager may want this analysis and the return on investment metrics for each technology research activity the FA supports.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
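For readers unfamiliar with MAF analysis: it finds linear combinations of the variables with maximal spatial autocorrelation, obtained from the generalized eigenproblem of the unit-lag difference covariance against the total covariance. A rough sketch assuming a simple 1-D sample ordering (real applications, as in this paper, use 2-D spatial shifts and kriging on irregularly sampled data):

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors of X (n_samples x n_vars),
    with samples assumed ordered along a 1-D transect."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                     # total covariance
    Sd = np.cov(np.diff(Xc, axis=0), rowvar=False)   # unit-lag difference covariance
    # Small generalized eigenvalues of (Sd, S) correspond to high autocorrelation
    vals, vecs = eigh(Sd, S)
    return Xc @ vecs, vals   # factor columns ordered smoothest first

# Example: one smooth signal plus one uncorrelated noise variable
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
X = np.column_stack([np.sin(t) + 0.1 * rng.normal(size=t.size),
                     rng.normal(size=t.size)])
factors, vals = maf(X)
```

The first factor isolates the spatially coherent (mappable) signal and the last factors collect noise, which is what makes MAF a natural pre-processing step before kriging.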
Rockwell, Barnaby W.; Knepper, Daniel H.; Horton, John D.
2015-01-01
Multispectral satellite data acquired by the Landsat 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensors were processed and interpreted in support of the PRISM-II project (Second Projet de Renforcement Institutionnel du Secteur Minier de la Republique Islamique de Mauritanie). This report and accompanying maps constitute project deliverables 60–64. All digital data for use in Geographic Information System (GIS) and image processing software will be included in the GIS deliverable 92. Image maps in PDF format of the processed Landsat and ASTER scenes are referenced in the appendixes.
Macharis, C.; Verbeke, A.; Brucker, K. de; Gelová, E.; Weinberger, J.; Vašek, J.
2009-01-01
In this deliverable a multi-actor multi-criteria analysis (MAMCA) is performed for the strategic evaluation of a number of innovative systems contributing to the creation of a more forgiving and self-explaining road environment. This deliverable also formulates a number of guidelines aimed at making
Yu, Wei
2013-01-01
This dissertation applied the quantitative approach to the data gathered from online survey questionnaires regarding the three objects: Information Technology (IT) Portfolio Management, IT-Business Alignment, and IT Project Deliverables. By studying this data, this dissertation uncovered the underlying relationships that exist between the…
Appel, Gordon John
2017-03-01
This report is the milestone deliverable M4FT-17SN111102091 “Summary of Assessments Performed FY17 by SNL QA POC” for work package FT-17SN11110209 titled “Quality Assurance – SNL”. This report summarizes the FY17 assessment performed on Fuel Cycle Technologies / Spent Fuel and Waste Disposition efforts.
FUZZY COMPREHENSIVE EVALUATION OF GAS-WELL DELIVERABILITY
蒋红梅
2009-01-01
Gas-well deliverability, as a dynamic performance parameter, is one of the important indices for gas-well evaluation. A prediction model for gas-well deliverability is established from deliverability evaluation of logging data, using the fuzzy multi-objective comprehensive evaluation method of fuzzy mathematics. Establishing a Bayes discriminant function for the classified gas wells, using it to re-classify them, and verifying it through case studies can provide a theoretical foundation for gas-well deliverability evaluation schemes.
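The fuzzy comprehensive evaluation step can be illustrated in a few lines: a membership matrix R grades each log-derived indicator against deliverability classes, and a weight vector w composites them into a single fuzzy judgement. All numbers below are invented for illustration; the paper additionally builds a Bayes discriminant function for back-classification, which is not sketched here:

```python
import numpy as np

# Fuzzy comprehensive evaluation (illustrative numbers only).
# Rows = log-derived indicators, columns = deliverability grades
# (high / medium / low); entries are membership degrees.
R = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.4, 0.2],
              [0.7, 0.2, 0.1]])
w = np.array([0.5, 0.3, 0.2])   # indicator weights, summing to 1

b = w @ R                        # composite membership vector over grades
grade = ["high", "medium", "low"][int(np.argmax(b))]
```

Because each row of R and the weight vector both sum to one, the composite vector b is itself a distribution over grades, and the well is assigned the grade with the largest membership.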
Guidelines for Inter-Enterprise Management (IEM), GLOBEMEN Deliverable D23
Tølle, Martin; Vesterager, Johan
2002-01-01
This document is a deliverable of Work package 2 of the IMS Globemen (GMN) project: D23 Guidelines for Inter-Enterprise Management (IEM). IMS Globemen is an inter-regional project aiming to develop methods, tools and architectures to support inter-enterprise operations in one-of-kind industries... itself to operate in a dynamic inter-enterprise environment such as a virtual enterprise. The list should be seen as a checklist to go through, covering mainly issues relevant when selecting partners and configuring the virtual enterprise from an inter-enterprise management point of view. - Chapter 6 Description of the solution, C-Project. Describes the solution developed as a part of GLOBEMEN to partially support inter-enterprise management as described in the vision. The solution, C-Project, is a software application that supports collaborative projects (hence the name C-Project). - Chapter 7 contain...
From Field Work to Deliverables. Experiences on the Tin House Courtyard Documentation
Bello Caballero, L.; Mezzino, D.; Federman, A.; Santana Quintero, M.
2017-08-01
The Tin House Courtyard is a property of the National Capital Commission (NCC) in Ottawa, Canada. The site is located within the 'Mile of History', a historical route running from Parliament Hill to the Governor General's residence. Currently, existing assets are under intervention works that include several preservation and renewal actions. Within the broader project, one of the tasks before construction works started was the documentation of the set of facades. The Carleton Immersive Media Studio (CIMS) at Carleton University in Ottawa was commissioned by the NCC to conduct the recording of the area. This paper describes the process undertaken from field work to the final deliverable to the client, as well as the issues faced in between. Nowadays, up-to-date surveying technologies have revolutionized the methodologies for cultural heritage documentation. In this regard, the recording strategy employed encompassed the use of photogrammetry, laser scanning and total station, as well as different pre- and post-processing software, in order to generate the desired outcomes.
CO2-emission trading and green markets for renewable electricity. Wilmar - deliverable 4.1
Azuma-Dicke, N.; Morthorst, Poul Erik; Ravn, H.F.
2004-01-01
This report is Deliverable 4.1 of the EU project “Wind Power Integration in Liberalised Electricity Markets” (WILMAR) and describes the application of two policy instruments, Tradable Emissions Permits (TEP’s) and Tradable Green Certificates (TGC’s), for electricity produced from renewable energy...... generation may change the situation from earning money to losing money despite the increasing spot price. Heavy restrictions on emissions penalise the fossil-fuelled technologies significantly, and the associated increase in the spot price need not compensate for this. Therefore, a market of TEP’s is expected...... to have a significant influence on the electricity spot price. However, the expected price level of TEP’s is met with great uncertainty, and a review of a number of economic studies shows a price span between zero and 270 USD per ton of CO2 depending on the participation or non-participation of countries...
Younge, Kelly C; Roberts, Don; Janes, Lindsay A; Anderson, Carlos; Moran, Jean M; Matuszak, Martha M
2016-07-08
The purpose of this study was to evaluate the ability of an aperture complexity metric for volumetric-modulated arc therapy (VMAT) plans to predict plan delivery accuracy. We developed a complexity analysis tool as a plug-in script to Varian's Eclipse treatment planning system. This script reports the modulation of plans, arcs, and individual control points for VMAT plans using a previously developed complexity metric. The calculated complexities are compared to those of 649 VMAT plans previously treated at our institution from 2013 to mid-2015. We used the VMAT quality assurance (QA) results from the 649 treated plans, plus 62 plans that failed pretreatment QA, to validate the ability of the complexity metric to predict plan deliverability. We used a receiver operating characteristic (ROC) analysis to determine an appropriate complexity threshold value above which a plan should be considered for reoptimization before it moves further through our planning workflow. The average complexity metric for the 649 treated plans analyzed with the script was 0.132 mm⁻¹ with a standard deviation of 0.036 mm⁻¹. We found that when using a threshold complexity value of 0.180 mm⁻¹, the true-positive rate for correctly identifying plans that failed QA was 44%, and the false-positive rate was 7%. Used clinically with this threshold, the script can identify overly modulated plans and thus prevent a significant portion of QA failures. Reducing VMAT plan complexity has a number of important clinical benefits, including improving plan deliverability and reducing treatment time. Use of the complexity metric during both the planning and QA processes can reduce the number of QA failures and improve the quality of VMAT plans used for treatment.
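The thresholding and ROC-point computation described in this abstract can be sketched in a few lines. The plan complexities, QA outcomes, and function names below are illustrative assumptions, not the authors' Eclipse script; only the 0.180 mm⁻¹ threshold is taken from the abstract.

```python
def flag_plans(complexities, threshold=0.180):
    """Indices of plans whose aperture complexity (mm^-1) exceeds the threshold."""
    return [i for i, c in enumerate(complexities) if c > threshold]

def roc_point(complexities, failed_qa, threshold):
    """One ROC operating point: (true-positive rate, false-positive rate)."""
    tp = sum(1 for c, f in zip(complexities, failed_qa) if f and c > threshold)
    fp = sum(1 for c, f in zip(complexities, failed_qa) if not f and c > threshold)
    positives = sum(failed_qa)
    negatives = len(failed_qa) - positives
    return tp / positives, fp / negatives

# Hypothetical plan complexities (mm^-1) and pretreatment QA outcomes.
complexities = [0.10, 0.19, 0.14, 0.22]
failed_qa = [False, True, False, True]
print(flag_plans(complexities))                      # → [1, 3]
tpr, fpr = roc_point(complexities, failed_qa, 0.180)
```

Sweeping the threshold over the observed complexity range and plotting the resulting (fpr, tpr) pairs reproduces the ROC analysis used to pick the clinical cut-off.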
Ludwig, T D; Geller, E S
1991-01-01
A practical intervention program, targeting the safety belt use of pizza deliverers at two stores, significantly increased the use of both safety belts (143% above baseline) and turn signals (25% above baseline). Control subjects (i.e., pizza deliverers at a third, no-intervention store and patrons driving to the pizza stores) showed no changes in belt or turn signal use over the course of the 7-month study. The intervention program was staggered across two pizza stores and consisted of a group meeting wherein employees discussed the value of safety belts, received feedback regarding their low safety belt use, offered suggestions for increasing their belt use, and made a personal commitment to buckle up by signing buckle-up promise cards. Subsequently, employee-designed buckle-up reminder signs were placed in the pizza stores. By linking license plate numbers to individual driving records, we examined certain aspects of driving history as moderators of pre- and postintervention belt use. Although baseline belt use was significantly lower for drivers with one or more driving demerits or accidents in the previous 5 years, after the intervention these risk groups increased their belt use significantly and at the same rate as drivers with no demerits or accidents. Whereas baseline belt use was similar for younger (under 25) and older (25 or older) drivers, younger drivers were markedly more influenced by the intervention than were older drivers. Individual variation in belt use during baseline, intervention, and follow-up phases indicated that some drivers require more effective and costly intervention programs to motivate their safe driving practices.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-05-01
Sandia National Laboratories (SNL) Fuel Cycle Technologies (FCT) program activities are conducted in accordance with FCT Quality Assurance Program Document (FCT-QAPD) requirements. The FCT-QAPD interfaces with the SNL approved Quality Assurance Program Description (SNL-QAPD) as explained in the Sandia National Laboratories QA Program Interface Document for FCT Activities (Interface Document). This plan describes SNL's FY16 assessment of SNL's FY15 FCT M2 milestone deliverables' compliance with program QA requirements, including SNL R&A requirements. The assessment is intended to confirm that SNL's FY15 milestone deliverables contain the appropriate authenticated review documentation and that there is a copy marked with SNL R&A numbers.
Thuesen, Christian; Hvam, Lars
2013-01-01
This paper presents a set of insights to be used in the development of business models for off-site system deliveries, contributing to the development of Off-Site Manufacturing (OSM) practices. The theoretical offset for discussing the development of business models is the blue ocean strategy...... in the constant pursuit of value creation and cost reduction. On this basis, system deliverances represent a promising strategy in the future development and application of off-site manufacturing practices. The application of system deliveries is, however, demanding, as it represents a fundamental shift...... in the existing design and production practices. More specifically, the development of system deliveries requires: (1) an explicit market focus, enabling the achievement of economies of scale, (2) a coordinated and coherent development around the system deliverance focusing on its internal and external modularity......
Liao, Yijun; Zhang, Lin; Weston, Mitchell H; Morris, William; Hupp, Joseph T; Farha, Omar K
2017-08-17
M-MOF-74s were examined for potential applications in ethylene abatement and/or storage/delivery. Due to labile binding resulting from a Jahn-Teller distortion, Cu-MOF-74 exhibits a gradual initial uptake that, in turn, translates into the highest deliverable capacity among the MOFs examined (3.6 mmol g(-1)). In contrast, Co-MOF-74 is the most promising candidate for ethylene abatement due to the sharp uptake at low pressure.
An editorial approach: Mike Nelson’s corridors and The Deliverance and The Patience
Helen Hughes
2015-06-01
This essay contrasts the contemporary British artist Mike Nelson’s approach to constructing his large, multi-room installations with his approach to editing the numerous artist books that he has produced since 2000. This comparison reveals several compositional symmetries between the two, namely pertaining to narrative non-linearity and meta-fictionality. The logic of montage is shown to similarly underscore both the books and the installations. This essay argues that the corridors connecting the different rooms of Nelson’s installations function in a similar way to the logic of montage: they play an integral role as the support that binds the structure of the installation (its multiple rooms) together as a whole. This essay argues that the corridor is the primary viewing framework of the installation for the viewer, and that this vantage point is significant because the necessarily partial vision of the installation from the space of the corridor demonstrates the logic of installation art more broadly. I conclude by mapping the key compositional elements of Nelson’s artist books onto his installations, taking the 2001 work The Deliverance and The Patience as a case study, to show that the books do not exemplify the artwork as with traditional exhibition catalogues, but rather parallel it. That is, a structural continuity is established between these two facets of his work.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
The iTREN-2030 reference scenario until 2030. Deliverable D4
Fiorello, Davide; De Stasio, Claudia; Koehler, Jonathan; Kraft, Markus; Netwon, Sean; Purwanto, Joko; Schade, Burkhard; Schade, Wolfgang; Szimba, Eckhard
2009-07-01
The basic objective of iTREN-2030 is to extend the forecasting and assessment capabilities of the TRANS-TOOLS transport model to the new policy issues arising from the technology, environment and energy fields. This is achieved by coupling the TRANS-TOOLS model with three other models, ASTRA, POLES and TREMOVE, covering these new policy issues. The TRANS-TOOLS transport network model has been developed to constitute the reference tool for supporting transport policy in the EU and is currently being developed in several European projects. The scenario set-up to be developed in iTREN-2030 has been modified, so that the project develops a reference scenario and an integrated scenario. For the reference scenario, the three other modelling tools are harmonised with TRANS-TOOLS and made consistent with each other. This results in a coherent scenario for Europe until 2030 for technology, transport, energy, environment and economic development. The integrated scenario will consider the changing framework conditions until 2030, in particular the policy pressure coming from climate policy and the increasing scarcity of fossil fuels, as well as the impact of the financial and economic crisis. Within the iTREN-2030 project, the overall objective of Work Package 4 (WP4), producing this deliverable, is to develop the reference scenario for the quantitative projections using the four modelling tools involved in the project. The main aims of WP4 are to (a) define a consistent framework for using the different tools in an integrated way; (b) calibrate models with exchanged input to a coherent joint reference; (c) implement external input from WP3 and run models for projections; (d) produce output procedures and templates to facilitate assessment in WP5.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and....../or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior...... of the measurements is completely characterized by all moments up to second order....
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
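For context, the moment-constrained maximization the abstract calls the Mean Energy Model has the classical Gibbs-form solution. The sketch below is a standard textbook result stated in generic symbols, not the paper's own notation:

```latex
\max_{p}\; H(p) = -\sum_i p_i \log p_i
\quad\text{s.t.}\quad \sum_i p_i E_i = \bar{E},\qquad \sum_i p_i = 1,
```

whose maximizer is

```latex
p_i = \frac{e^{-\beta E_i}}{Z(\beta)}, \qquad Z(\beta) = \sum_j e^{-\beta E_j},
```

with the Lagrange multiplier \(\beta\) chosen so that the mean-energy constraint \(\sum_i p_i E_i = \bar{E}\) holds.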
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
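A minimal sketch of the MCC idea described in this abstract, not the paper's actual algorithm: a linear predictor fitted by gradient ascent on Gaussian-kernel correntropy with an L2 penalty on the weight. The kernel width, step size, and toy data are illustrative assumptions; the point is that gross outliers receive near-zero kernel weight and so barely influence the fit.

```python
import math

def correntropy_objective(w, b, xs, ys, sigma=1.0, lam=0.1):
    """Mean Gaussian-kernel correntropy of the residuals minus an L2 penalty on w."""
    corr = sum(math.exp(-((w * x + b - y) ** 2) / (2 * sigma ** 2))
               for x, y in zip(xs, ys)) / len(xs)
    return corr - lam * w ** 2

def fit_mcc(xs, ys, sigma=1.0, lam=0.1, lr=0.1, steps=500):
    """Gradient ascent on the regularized correntropy objective."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            r = w * x + b - y                        # residual
            g = math.exp(-r * r / (2 * sigma ** 2))  # Gaussian kernel weight
            gw += -g * r * x / sigma ** 2
            gb += -g * r / sigma ** 2
        w += lr * (gw / len(xs) - 2 * lam * w)
        b += lr * gb / len(xs)
    return w, b

# The last label is a gross outlier; MCC effectively ignores it.
xs = [0.0, 1.0, 2.0, 3.0, 10.0]
ys = [0.0, 1.0, 2.0, 3.0, -50.0]
w, b = fit_mcc(xs, ys)
```

A squared-error fit on the same data would be dragged far below the true line by the outlier; here the outlier's residual makes its kernel weight vanish, so the fitted slope stays near 1.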
Wildgaard, Lorna Elizabeth; Larsen, Birger; Schneider, Jesper
2014-01-01
We collected publication and citation data in two databases to investigate the extent to which the performance of author-level indicators is affected by the choice of database, the stability of indicators across databases, and ultimately to illustrate how differences in the computed indicators change our perception...... of individual researchers. In this report we begin by comparing database coverage, coverage at seniority and gender level, and then the performance of four basic indicators computed in both databases. In the main deliverable 5.4a, we investigate in a cluster analysis the performance of our previously identified
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
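As a toy illustration of the maximum likelihood sequence detection that near-ML detectors approximate, the sketch below brute-forces the BPSK symbol sequence minimizing squared error for a hypothetical 2-tap ISI channel y[n] = x[n] + a·x[n-1] + noise. The channel model and tap value are assumptions for illustration, not taken from the paper; practical near-ML detectors replace this exhaustive search with a pruned one.

```python
from itertools import product

def ml_detect(y, a=0.5):
    """Exhaustive ML detection of BPSK symbols over a 2-tap ISI channel."""
    best, best_cost = None, float("inf")
    for cand in product((-1, 1), repeat=len(y)):
        cost, prev = 0.0, 0  # prev = x[n-1], zero before the first symbol
        for k in range(len(y)):
            cost += (y[k] - (cand[k] + a * prev)) ** 2
            prev = cand[k]
        if cost < best_cost:
            best, best_cost = cand, cost
    return list(best)

# Noiseless received samples for x = [1, -1, 1] through the channel with a = 0.5.
print(ml_detect([1.0, -0.5, 0.5]))  # → [1, -1, 1]
```

The exhaustive search costs O(2^n), which is why equalizers and near-ML detectors trade some of this optimality for tractable complexity.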
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Fagerlind, H. Armstrong, J. Atala, D. Dodson, E. Talbot, R. Hill, J. Roynard, M. Martensen, H. Jänsch, M. Margaritis, D. Ocampo Sanchez, A. Peez, J. Ferrer, A. Elslande, P. van Perrin, C. Hermitte, T. Giustiniani, G. & Davidse, R.
2015-01-01
This deliverable is intended to collate the DaCoTA training manual and draft protocols for in-depth road accident investigations in Europe. This deliverable aims at producing a document that will be used in the training of new and previously experienced teams in the realm of in depth accident data c
Brunsting, S.; Pol, M.; Mastop, E.A. [ECN Policy Studies, Energy research Centre of the Netherlands ECN, Amsterdam (Netherlands); Kaiser, M.; Zimmer, R. [Unabhaengiges Institut fuer Umweltfragen UfU, Berlin (Germany); Shackley, S.; Mabon, L.; Howell, R. [Scottish Carbon Capture and Storage SCCS, Edinburgh, Scotland (United Kingdom)
2012-08-15
At the local level, public support has proven crucial to the implementation of CO2 capture and storage (CCS) demonstration projects. Whereas no method exists to guarantee public acceptability of any project, a constructive stakeholder and community engagement process does increase the likelihood thereof. This deliverable is a follow-up to deliverable D8.1 'Social site characterisation'. Social site characterisation can be used as an instrument to explore, plan and evaluate a process of active and constructive local stakeholder and citizen engagement in a prospective CCS project as a parallel activity to technical site characterisation. It serves as an analytical tool to describe the local social circumstances in the area and to design and evaluate stakeholder and community engagement efforts with the aims of building trust and raising public awareness. Using results from the social site characterisation of the area, the present deliverable focuses on the second purpose. It presents results from public engagement activities designed to raise public awareness and inform public opinion of a prospective CCS site in Poland (onshore) and the UK (offshore): focus conferences. Furthermore, by initiating enhanced cooperation in the planning of new storage sites between project developers, authorities and the local public, focus conferences aim to serve as a 'hinge' between social site characterisation as a research effort and application to real-life project settings. The focus conferences are part of a range of public engagement activities, including the setup of public information websites on generic and site-specific CCS and information meetings. A second survey shall eventually evaluate the results of the public engagement activities. The aim of the focus conferences was to raise public awareness and assist public opinion-forming processes of a prospective CCS site in Poland (onshore) and the UK (offshore). At the same time, it aimed to present and test a
Luz Pedro Henrique de Cerqueira
2002-01-01
The agronomic characteristics of a pasture depend on soil quality. This study evaluated the effects of limestone types and doses, with and without incorporation, on some agronomic characteristics of a degraded pasture of Panicum maximum Jacq. cv. Tobiatã on a dystrophic Red Latosol (Latossolo Vermelho distrófico). Evaluations were carried out over four consecutive cuts during the rainy season ("summer" of 1995/96) and one cut in the dry season ("winter") of 1996. Dry matter production of Tobiatã grass increased with the harrow incorporation method, owing to its mechanical effect, but did not respond to limestone types or doses; the highest yields were observed at the 4th cut ("summer"). Increases in soil cover by the forage plant were observed for the summer cuts, with a decrease in the winter cut, though still above the value at the start of summer, while the area of bare soil showed the opposite behaviour and the share of invasive plants remained constant. In the tillering evaluation, a response to incorporation was found, consistent with the production data.
Thuesen, Leif; Galløe, Anders; Thayssen, Per;
2005-01-01
AIMS: To compare deliverability and in-hospital complications in implantation of BxSonic(R), Express(R), and Flexmaster(R) coronary stents in a randomized multicenter trial in five Danish interventional centres. METHODS AND RESULTS: Patients with planned stenting of at least one stenotic lesion i...
Filtness, A. & Papadimitriou, E. (Eds.) Leskovšek, B. Focant, N. Martensen, H. Sgarra, V. Usami, D.S. Soteropoulos, A. Stadlbauer, S. Theofilatos, A. Yannis, G. Ziakopoulos, A. Diamandouros, K. Durso, C. Goldenbeld, C. Loenis, B. Schermers, G. Petegem, J.-H. van Elvik, R. Hesjevoll, I.S. Quigley, C. & Papazikou, E.
2017-01-01
The present Deliverable (D5.1) describes the identification and evaluation of infrastructure related risk factors. It outlines the results of Task 5.1 of WP5 of SafetyCube, which aimed to identify and evaluate infrastructure related risk factors and related road safety problems by (i) presenting a
Schagen, I.N.L.G. Bernhoft, I.M. Erke, A. Ewert, U. Kallberg, V.-P. & Skladana, P.
2008-01-01
This report is the Deliverable of task 4.3a of the PEPPER project. It describes the good practice requirements regarding data, data collection and data use for monitoring and evaluating Traffic Law Enforcement (TLE). The aim is that, eventually, individual police forces/countries put the identified
Janssen, S.A.; Vos, H.; Koopman, A.
2014-01-01
The objective of the present deliverable is to describe and assess reported health impacts of vibration among residents living near railway lines, in particular the response to freight trains. To this end, first a state of the art overview is given of the results from all field studies reported so f
Christoph, M.; Nes, N. van; Pauwelussen, J.J.A.; Mansvelders, R.; Horst, A.R.A. van der; Hoedemaeker, D.M.
2010-01-01
The main objective of the project PROLOGUE (PROmoting real Life Observations for Gaining Understanding of road user behaviour in Europe) is to explore the feasibility and usefulness of a large-scale European naturalistic driving observation study. The work described in this deliverable focused on th
Hels, T. Bernhoft, I.M. Lyckegaard, A. Houwing, S. Hagenzieker, M.P. Legrand, S.-A. Isalberti, C. Van der Linden, T. & Verstraete, A.
2011-01-01
The objective of this deliverable is to assess the risk of driving with alcohol, illicit drugs and medicines in various European countries. In total nine countries participated in the study on relative risk of serious injury/fatality while positive for psychoactive substances. Six countries contribu
Antoniou, C. Brandstaetter, C. Bergel, R. Cherfi, M. Bijleveld, F. Commandeur, J.J.F. Blois, C. de Dupont, E. Gatscha, M. Martensen, H. Papadimitriou, E. Vanlaar, W. & Yannis, G.
2007-01-01
This deliverable gives the theoretical background for the two families of analyses, multilevel and time series analysis. For each technique the objectives, detailed model formulation, and assumptions are described and subsequently the technique is illustrated with an empirical example relevant to
Angermann, A. Antoniou, C. Bergel, R. Berends, E. Bijleveld, F. Brandstätter, C. Cherfi, M. Blois, C. de Dupont, E. Gatscha, M. Martensen, H. Papadimitriou, E. & Yannis, G.
2007-01-01
This deliverable contains the manual to support the methodology report in D7.4, where the theoretical background for multilevel and time series analyses is given. For each technique described in the methodology report, this manual presents the instructions to fit the models on the basis of user
Catrinu-Renstrom, Maria; Clement, Rémy; Tournebise, Pascal
The objective of this deliverable is to present the requirements for adapting available tools/models and identifying data needs for reliability analysis and optimal decision-making in the asset management decision-making process. It will serve as a basis for the next tasks of GARPUR work package 5...... addressing the requirements of the RMAC criterion developed in work package 2. The report has been written by several partners, three of them being European TSOs and the other three being academic partners. Special attention has been paid to addressing every topic in the asset management decision-making process...... decision-making process, as described in work package 2. Some advanced models exist in the scientific literature to characterize the spatio-temporal variation and correlations of relevant factors. Some of these models have been proposed in academia, and offer improved representation with respect to those...
Pages, J. [QUARAD and Radiology Dept., Vvije Universiteit Brussel (Belgium)
1998-07-01
The highest radiation dose levels received by radiologists are observed during interventional procedures. Doses to the forehead and neck received by a radiologist performing angiographic examinations at the department of radiology of the academic hospital (AZ-VUB) have been measured for a group of 34 examinations. The doses to the crystalline lens and the effective doses for a period of one year have been estimated. For the crystalline lens the maximum dose approaches the ICRP limit, which indicates the need for the radiologist to use leaded glasses. (N.C.)
Cronin, Tom; Bindner, Henrik; Zong, Yi
2008-11-15
This report represents Deliverable D.3.2 of Work Package 3 in the Night Wind project. The aim of this Work Package was to simulate a cold store (or number of cold stores) within a power system where there is a high degree of wind power penetration. The Night Wind Control System, developed as part of Work Package 5, was to be integrated into the simulations so that the wind power could be 'stored' in the cold store with maximum benefit to the electrical network, utility or cold store owner. To this end, the following have been accomplished: 1) The Night Wind concept has been described in terms of demand side management. 2) Input requirements and data have been specified and collected. Measured data from the existing cold store facility of Partner Logistics has been analysed. 3) Component models for the simulations (including the cold store model itself) have been developed for the simulation platform, IPSYS. 4) The Night Wind Control System (NWCS) from Work Package 5 has been developed so that it finishes computations within two minutes. 5) Controllers (including the NWCS) have been operated with the cold store model within IPSYS. 6) Simulations have been performed with the cold store model and an increasing penetration of wind power. This report presents the results of the work undertaken in Work Package 3, which would have benefited from the additional time requested at the project meeting in March 2008; however, this extension of time was not granted. Nevertheless, the work that was possible is considered significantly complete, although it is acknowledged that there has been a delay in the presentation of this report. It should be noted that it was not possible to address the new aspects of Task 3.7 'Verification of simulation results' as there was no implementation of the night wind concept at the demonstration site (Task 7). Verification of the simulation of the present system has, naturally, been carried out and described in this report. (ln)
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Meyer, B.C.; Laplana, R.; Raley, M.; Baqueiro, O.; Kopeva, D.; Hautdidier, B.
2011-01-01
The Deliverable 6.4 “Handbook of efficient recommendations” deals with the main methodological developments in the context of screening, scoping and Sustainability Impact Assessment (SIA) developed and discussed in PRIMA. The main topics of this handbook deal with key aspects of the methodological enhancement of Impact Assessment within the context of experiences in Environmental Impact Assessment and Strategic Environmental Assessment. Key research is linked to the questions abou...
Dijkstra, A. Bald, S. Benz, T. & Gaitanidou, E. (eds.)
2009-01-01
Road safety will most probably be influenced by the introduction of Advanced Driver Assistance Systems (ADAS) or Intelligent Vehicle Safety Systems (IVSS). The effects of these systems on road safety can be assessed in different ways. This document gives a short overview of methodologies which allow for assessing road safety effects (Chapter 2). This deliverable gives an overview of the outcome of work package 3 of IN-SAFETY. Two methodologies have basically been applied: • a simulation model • a risk an...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information at hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
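The maximum entropy principle described in the abstract can be illustrated with Jaynes' classic dice problem: among all distributions over the faces 1–6 with a prescribed mean, the least-biased (entropy-maximizing) distribution belongs to an exponential family, and its Lagrange multiplier can be found numerically. A minimal sketch; the function name and bisection bracket are our own illustration, not from the article:

```python
import numpy as np

def maxent_die(target_mean, faces=np.arange(1, 7), tol=1e-10):
    """Jaynes' dice: the entropy-maximizing distribution over `faces`
    with a fixed mean is exponential-family, p_i proportional to
    exp(lam * i). Solve for the multiplier lam by bisection
    (the mean is monotonically increasing in lam)."""
    def mean(lam):
        w = np.exp(lam * faces)
        p = w / w.sum()
        return p @ faces

    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    w = np.exp(lo * faces)
    return w / w.sum()

# A target mean of 3.5 (the unconstrained value) recovers the uniform
# distribution; a biased mean of 4.5 tilts the weights toward high faces.
p = maxent_die(4.5)
```

The same tilted-exponential form underlies maximum entropy fits to more complex constraints, such as those arising in binding-affinity or conformational-ensemble problems.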
Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.
2009-12-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form, together with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Dose from slow negative muons.
Siiskonen, T
2008-01-01
Conversion coefficients from fluence to ambient dose equivalent, from fluence to maximum dose equivalent and quality factors for slow negative muons are examined in detail. Negative muons, when stopped, produce energetic photons, electrons and a variety of high-LET particles. Contribution from each particle type to the dose equivalent is calculated. The results show that for the high-LET particles the details of energy spectra and decay yields are important for accurate dose estimates. For slow negative muons the ambient dose equivalent does not always yield a conservative estimate for the protection quantities. Especially, the skin equivalent dose is strongly underestimated if the radiation-weighting factor of unity for slow muons is used. Comparisons to earlier studies are presented.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for 3-regular graphs, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
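The extreme-value idea in the abstract can be sketched under standard simplifying assumptions: if events above a threshold magnitude occur as a Poisson process whose rate falls off with magnitude according to a Gutenberg-Richter law, the probability that no event exceeds m in a future window T has a closed form. This is an illustrative sketch, not the authors' formulation; all parameter names and default values are assumptions:

```python
import math

def prob_no_exceedance(m, T, lam0=1.0, b=1.0, m_min=5.0):
    """P(no event with magnitude > m occurs within T years), assuming
    events above m_min arrive as a Poisson process at rate lam0 per year
    and magnitudes follow an (untruncated) Gutenberg-Richter law with
    b-value b. Illustrative only; parameter values are assumptions."""
    rate_above_m = lam0 * 10.0 ** (-b * (m - m_min))
    return math.exp(-rate_above_m * T)
```

The distribution rises toward 1 as m grows and falls toward 0 as the window T lengthens, which is why, as the authors note, testing an estimate of the maximum magnitude over short observation windows is so difficult.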
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, where MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views; the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
PABLM. Accumulated Environment Radiation Dose
Napier, B.A.; Kennedy, W.E.Jr.; Soldat, J.K. [Pacific Northwest Lab., Richland, WA (United States)
1981-04-01
PABLM calculates internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides after the releases have ended. Radioactive decay is considered during the release, after deposition, and during holdup of food after harvest. The radiation dose models consider exposure to radionuclides deposited on the ground or crops from contaminated air or irrigation water, radionuclides in contaminated drinking water, aquatic foods raised in contaminated water, and radionuclides in bodies of water and sediments where people might fish, boat, or swim. For vegetation, the radiation dose model considers both direct deposition and uptake through roots. Doses may be calculated for either a maximum-exposed individual or for a population group. The program is designed to calculate accumulated radiation doses from the chronic ingestion of food products that contain radionuclides and doses from the external exposure to radionuclides in the environment. A first-year committed dose is calculated as well as an integrated dose for a selected number of years.
Radiation dose in neurological computed tomographic scanning
Whitmore, R.C.; Bushong, S.C.; Archer, B.A.; Glaze, S.A.
1979-07-01
Patient dose and dose distribution during neurological computed tomography examinations were determined with five different computed tomography scanners. Maximum intracranial doses ranged from 1.17 to 2.67 rads. Doses to the lens of the eye ranged from 0.23 to 2.81 rads. These levels are considered and compared with patient doses reported for other computed tomography studies and for conventional tomographic examinations. In general, patient dose during computed tomographic examinations is less than one quarter of that during conventional tomography of the head.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
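For intuition, the Kirchhoff index can be computed directly from the Laplacian spectrum via the identity $Kf(G) = n\sum_i 1/\mu_i$, where the sum runs over the nonzero Laplacian eigenvalues of a connected graph. A small sketch (our own illustration, not from the paper), checked against the 4-cycle, whose pairwise resistance distances sum to 5:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian
    eigenvalues; valid for connected graphs."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    eig = np.linalg.eigvalsh(L)          # eigenvalues, ascending
    nonzero = eig[eig > 1e-9]            # drop the single zero eigenvalue
    return A.shape[0] * float(np.sum(1.0 / nonzero))

# 4-cycle: resistance 3/4 across each adjacent pair (1 ohm in parallel
# with 3 ohms), resistance 1 across each diagonal; 4*(3/4) + 2*1 = 5.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
# kirchhoff_index(C4) -> 5.0
```

The C4 Laplacian has eigenvalues 0, 2, 2, 4, so the formula gives 4·(1/2 + 1/2 + 1/4) = 5, matching the direct resistance-distance count.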
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
2006-01-01
This thesis seeks to explore the dynamics between China’s onshore spot foreign exchange market and the offshore RMB non-deliverable forward (NDF) market before and after the reforms in exchange rate regime and foreign exchange market structures around July 21, 2005. Developments in the two markets are reviewed and daily closing rates of both markets are examined. The Johansen co-integration test finds a strong co-integrating relationship between the onshore spot rate and the NDF rate and a tw...
CO{sub 2}-emission trading and green markets for renewable electricity. WILMAR - deliverable 4.1
Azuma-Dicke, N.; Weber, C. [Univ. of Stuttgart, IER (Germany); Morthorst, P.E. [Risoe National Lab., Roskilde (Denmark); Ravn, H.F.; Schmidt, R. [Technical Univ. of Denmark, Lyngby (Denmark)
2004-06-01
This report is Deliverable 4.1 of the EU project 'Wind Power Integration in Liberalised Electricity Markets' (WILMAR) and describes the application of two policy instruments, Tradable Emissions Permits (TEPs) and Tradable Green Certificates (TGCs) for electricity produced from renewable energy sources in the European Union, and the implications for implementation in the Wilmar model. The introduction of a common emission-trading system in the EU is expected to have an upward effect on the spot prices at the electricity market. The variations of the spot price imply that some types of power generation may change from earning money to losing money despite the increasing spot price. Heavy restrictions on emissions penalise the fossil-fuelled technologies significantly, and the associated increase in the spot price need not compensate for this. Therefore, a market for TEPs is expected to have a significant influence on the electricity spot price. However, the expected price level of TEPs is subject to great uncertainty, and a survey of a number of economic studies shows a price span between zero and 270 USD per ton of CO{sub 2}, depending on the participation or non-participation of countries in the scheme. The price determination at the TGC market is expected to be closely related to the price at the power spot market, as the RE-producers of electricity will have expectations of the total price paid for the energy produced, i.e., the price of electricity at the spot market plus the price per kWh obtained at the green certificate market. In the Wilmar model, the TGC market can either be handled exogenously, i.e., the increase in renewable capacity and an average annual TGC price are determined outside the model, or a simple TGC module is developed, including the long-term supply functions for the most relevant renewable technologies and an overall TGC quota. Both solutions are rather simple, but to develop a more advanced model for the TGC
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
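The mutual information term used as a regularizer above can be estimated empirically from the joint distribution of true labels and classification responses. A minimal sketch of that quantity (our own illustration, not the authors' entropy-estimation procedure), computing I(Y; Yhat) from a contingency table of counts:

```python
import numpy as np

def mutual_information(joint_counts):
    """I(Y; Yhat) in nats, estimated from a confusion-style contingency
    table: rows are true labels, columns are classification responses."""
    J = np.asarray(joint_counts, dtype=float)
    P = J / J.sum()                       # empirical joint distribution
    py = P.sum(axis=1, keepdims=True)     # marginal of true labels
    pyh = P.sum(axis=0, keepdims=True)    # marginal of responses
    mask = P > 0                          # skip zero cells (0*log 0 = 0)
    return float(np.sum(P[mask] * np.log(P[mask] / (py @ pyh)[mask])))

# Perfect predictions on a balanced binary problem give I = log 2 nats;
# a response independent of the label gives I = 0.
```

Maximizing this quantity over training samples rewards classifiers whose responses are maximally informative about the true label, which is exactly the intuition the abstract describes.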
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Finn, Carol A.; Horton, John D.
2015-01-01
In 1996, at the request of the Government of the Islamic Republic of Mauritania, a team of U.S. Geological Survey (USGS) scientists produced a strategic plan for the acquisition, improvement and modernization of multidisciplinary sets of data to support the growth of the Mauritanian minerals sector and to highlight the geological and mineral exploration potential of the country. In 1999, the Ministry of Petroleum, Energy, and Mines of the Islamic Republic of Mauritania implemented a program for the acquisition of the recommended basic geoscientific information, termed the first Projet de Renforcement Institutionnel du Secteur Minier (Project for Institutional Capacity Building in the Mining Sector, PRISM-I). As a result of the PRISM-I efforts, a great deal of new geological, geophysical, geochemical, remote sensing, and hydrological data became available for evaluation and synthesis. However, the Ministry of Petroleum, Energy, and Mines recognized that additional work was required to extract the full benefit of the data before it could be of greatest use to the international community and of benefit to the Mauritanian minerals and development sector.
Robison, W.L.; Conrado, C.L.; Bogen, K.T
1999-10-06
On March 1, 1954, radioactive fallout from the nuclear test at Bikini Atoll code-named BRAVO was deposited on Utirik Atoll which lies about 187 km (300 miles) east of Bikini Atoll. The residents of Utirik were evacuated three days after the fallout started and returned to their atoll in May 1954. In this report we provide a final dose assessment for current conditions at the atoll based on extensive data generated from samples collected in 1993 and 1994. The estimated population average maximum annual effective dose using a diet including imported foods is 0.037 mSv y{sup -1} (3.7 mrem y{sup -1}). The 95% confidence limits are within a factor of three of their population average value. The population average integrated effective dose over 30-, 50-, and 70-y is 0.84 mSv (84, mrem), 1.2 mSv (120 mrem), and 1.4 mSv (140 mrem), respectively. The 95% confidence limits on the population-average value post 1998, i.e., the 30-, 50-, and 70-y integral doses, are within a factor of two of the mean value and are independent of time, t, for t > 5 y. Cesium-137 ({sup 137}Cs) is the radionuclide that contributes most of this dose, mostly through the terrestrial food chain and secondarily from external gamma exposure. The dose from weapons-related radionuclides is very low and of no consequence to the health of the population. The annual background doses in the U. S. and Europe are 3.0 mSv (300 mrem), and 2.4 mSv (240 mrem), respectively. The annual background dose in the Marshall Islands is estimated to be 1.4 mSv (140 mrem). The total estimated combined Marshall Islands background dose plus the weapons-related dose is about 1.5 mSv y{sup -1} (150 mrem y{sup -1}) which can be directly compared to the annual background effective dose of 3.0 mSv y{sup -1} (300 mrem y{sup -1}) for the U. S. and 2.4 mSv y{sup -1} (240 mrem y{sup -1}) for Europe. Moreover, the doses listed in this report are based only on the radiological decay of {sup 137}Cs (30.1 y half-life) and other
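The reported 30-, 50- and 70-y dose integrals are consistent with simply integrating the initial annual dose under 137Cs radioactive decay. A sketch under the simplifying assumption (ours, not the report's full methodology) that the entire annual dose decays with the 137Cs half-life; it reproduces the reported 0.84, 1.2 and 1.4 mSv figures to within roughly 10%:

```python
import math

T_HALF = 30.1               # 137Cs half-life in years (from the report)
TAU = T_HALF / math.log(2)  # mean life, about 43.4 y

def integrated_dose(d0, years):
    """Integral of d0 * exp(-t/TAU) dt from 0 to `years`, in mSv.
    Simplification: the whole annual dose is assumed to follow 137Cs
    decay, ignoring environmental loss terms."""
    return d0 * TAU * (1.0 - math.exp(-years / TAU))

d0 = 0.037  # initial population-average annual effective dose, mSv/y
doses = {yrs: integrated_dose(d0, yrs) for yrs in (30, 50, 70)}
```

The small shortfall relative to the reported values is expected, since the report's estimates include pathways that do not decline purely with physical decay.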
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10{sup 30} kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10{sup 30} kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
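For comparison with the randomized algorithm described above, the classical augmenting-path baseline for the bipartite case (Kuhn's algorithm, O(V·E)) is short enough to sketch. This is a standard textbook method, not the MCMC algorithm of the paper:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for bipartite graphs, O(V*E).
    adj[u] lists the right-side neighbours of left vertex u.
    Returns the size of a maximum matching."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        # Look for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# A 3x3 example with a perfect matching:
# max_bipartite_matching([[0, 1], [0], [1, 2]], 3) -> 3
```

Each left vertex triggers at most one graph traversal, which is where the O(V·E) bound comes from; the sophisticated algorithms cited in the abstract improve on exactly this augmentation step.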
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Local skin and eye lens equivalent doses in interventional neuroradiology
Sandborg, Michael [Linkoeping University, Department of Radiological Sciences, Radiation Physics and Center for Medical Image Science and Visualisation (CMIV), Linkoeping (Sweden); Linkoeping University Hospital, Department of Medical Physics, Linkoeping (Sweden); Rossitti, Sandro [Linkoeping University Hospital, Department of Neurosurgery, Linkoeping (Sweden); Pettersson, Haakan [Linkoeping University Hospital, Department of Medical Physics, Linkoeping (Sweden)
2010-03-15
To assess patient skin and eye lens doses in interventional neuroradiology and to assess both stochastic and deterministic radiation risks. Kerma-area product (P{sub KA}) was recorded and skin doses measured using thermoluminescence dosimeters. Estimated dose at interventional reference point (IRP) was compared with measured absorbed doses. The average and maximum fluoroscopy times were 32 and 189 min for coiling and 40 and 144 min for embolisation. The average and maximum P{sub KA} for coiling were 121 and 436 Gy cm{sup 2}, respectively, and 189 and 677 Gy cm{sup 2} for embolisation. The average and maximum values of the measured maximum absorbed skin doses were 0.72 and 3.0 Sv, respectively, for coiling and 0.79 and 2.1 Sv for embolisation. Two out of the 52 patients received skin doses in excess of 2 Sv. The average and maximum doses to the eye lens (left eye) were 51 and 515 mSv (coiling) and 71 and 289 mSv (embolisation). The ratio between the measured dose and the dose at the IRP was 0.44 {+-} 0.18 mSv/mGy indicating that the dose displayed by the x-ray unit overestimates the maximum skin dose but is still a valuable indication of the dose. The risk of inducing skin erythema and lens cataract during our hospital procedures is therefore small. (orig.)
Brunsting, S.; Mastop, E.A. [ECN Policy Studies, Energy research Centre of the Netherlands ECN, Amsterdam (Netherlands); Kaiser, M.; Zimmer, R. [Unabhaengiges Institut fuer Umweltfragen UfU, Berlin (Germany)
2013-06-15
This report describes the results of the last stage of the in-depth social site characterisation activities at two prospective CCS sites as part of the SiteChar project: a CCS onshore site and a CCS offshore site. The onshore site is the Zalecze and Zuchlow site application (Poland - WP5) and the offshore site is the North Sea Moray Firth site (UK - WP3). This deliverable describes the results from a repeated quantitative measurement of local awareness, knowledge, and perceptions of CCS at both sites using representative surveys. For comparison and discussion of all SiteChar WP8 results we refer to the final summary report D8.5. The 2nd survey showed some interesting results. First of all, awareness of CCS was still very low. While in the UK around half of the respondents had at least heard of local plans for CCS, in Poland this was only 21%. It seems that awareness in the UK was mostly induced by specific plans in the area that were abandoned in the course of the SiteChar project. Second, it seems that on the whole the local publics were rather positive about CCS. Most respondents expected a positive impact of CCS on the region. In the UK, arguments for that were mainly economic, while in Poland arguments were mainly related to environmental concerns. Although there are some worries about risks of leakage, especially at the onshore site in Poland, people think that authorities will properly regulate CCS and monitor the safety of CCS. Expectations were mostly that it would be good for the country and that it will help reach international targets for CO2 reduction and buy time to develop renewable energy. Respondents seemed uncertain about the costs of using CCS and whether the technique is ready for widespread use. Especially in Poland people seemed to agree that CCS is essential for tackling climate change. Most differences between the two sites may be attributed to the proximity of the site to the local community. The Polish site is onshore and therefore much
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
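The background-only versus background-plus-source comparison described above can be illustrated with a toy Poisson likelihood ratio. This is only a sketch: the flat "source" term, counts, and rates below are hypothetical stand-ins, not the Sherpa/CSC implementation with its per-observation PSF models.

```python
import math

def poisson_loglike(counts, expected):
    """Poisson log-likelihood, omitting the count-only constant log(n!)."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, expected))

def likelihood_ratio(counts, bkg_rate, src_rate):
    """Compare background-only vs background-plus-source hypotheses.

    counts   -- observed photons per pixel in the candidate region
    bkg_rate -- expected background photons per pixel
    src_rate -- expected source photons per pixel (flat toy "PSF")
    Returns the log-likelihood ratio; larger values favour a real source.
    """
    l0 = poisson_loglike(counts, [bkg_rate] * len(counts))
    l1 = poisson_loglike(counts, [bkg_rate + src_rate] * len(counts))
    return l1 - l0

# A region with a clear excess over background favours the source model;
# a region consistent with background does not.
excess = likelihood_ratio([9, 11, 10, 12], bkg_rate=2.0, src_rate=8.0)
flat = likelihood_ratio([2, 1, 3, 2], bkg_rate=2.0, src_rate=8.0)
```

In the real tool the source model is a PSF-convolved Gaussian fitted jointly across stacked observations; the flat source here only shows the shape of the statistic.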
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Evaluation of Rectal Dose During High-Dose-Rate Intracavitary Brachytherapy for Cervical Carcinoma
Sha, Rajib Lochan [Department of Radiation Physics, Indo-American Cancer Institute and Research Centre, Hyderabad (India); Department of Physics, Osmania University, Hyderabad (India); Reddy, Palreddy Yadagiri [Department of Physics, Osmania University, Hyderabad (India); Rao, Ramakrishna [Department of Radiation Physics, MNJ Institute of Oncology and Regional Cancer Center, Hyderabad (India); Muralidhar, Kanaparthy R. [Department of Radiation Physics, Indo-American Cancer Institute and Research Centre, Hyderabad (India); Kudchadker, Rajat J., E-mail: rkudchad@mdanderson.org [Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, TX (United States)
2011-01-01
High-dose-rate intracavitary brachytherapy (HDR-ICBT) for carcinoma of the uterine cervix often results in high doses being delivered to surrounding organs at risk (OARs) such as the rectum and bladder. Therefore, it is important to accurately determine and closely monitor the dose delivered to these OARs. In this study, we measured the dose delivered to the rectum by intracavitary applications and compared this measured dose to the International Commission on Radiation Units and Measurements rectal reference point dose calculated by the treatment planning system (TPS). To measure the dose, we inserted a miniature (0.1 cm{sup 3}) ionization chamber into the rectum of 86 patients undergoing radiation therapy for cervical carcinoma. The response of the miniature chamber modified by 3 thin lead marker rings for identification purposes during imaging was also characterized. The difference between the TPS-calculated maximum dose and the measured dose was <5% in 52 patients, 5-10% in 26 patients, and 10-14% in 8 patients. The TPS-calculated maximum dose was typically higher than the measured dose. Our study indicates that it is possible to measure the rectal dose for cervical carcinoma patients undergoing HDR-ICBT. We also conclude that the dose delivered to the rectum can be reasonably predicted by the TPS-calculated dose.
Dose-mapping distribution around MNSR
Jamal, M H
2002-01-01
The aim of this study is to establish a dose-rate map through the determination of radiological dose-rate levels in the reactor hall, adjacent rooms, and outside the MNSR facility. To control the dose rate to reactor operating personnel, a dose map was established covering time and distance in the reactor hall during reactor operation at nominal power. Dose rates were also measured in other areas of the reactor buildings. The maximum dose rates during normal operation of the MNSR were 40 and 21 Sv/hr on top of the reactor and near the pool fence, respectively, whereas gamma and neutron doses did not exceed natural background in any of the rooms adjacent to the reactor hall or in nearby buildings. The relation between the gamma-ray dose rate and the neutron flux at the top of the reactor pool cover was studied as well; this relation was found to be linear.
Christofori, E.; Bierwagen, J.
2013-07-01
Recording cultural heritage objects with terrestrial laser scanning has become increasingly popular in recent years. Since terrestrial laser scanning (TLS) manufacturers have greatly increased the amount and speed of data captured in a single scan with each system upgrade, while cutting system costs, TLS systems are an option worth considering for recording cultural heritage alongside traditional methods such as photogrammetry. TLS systems can be a great tool for capturing complex cultural heritage objects in a short amount of time, but they can be a nightmare to handle in further processing if not used correctly during capture. Furthermore, TLS systems still have to be treated as survey equipment, even though some manufacturers promote them as everyday tools. They have to be used intelligently, keeping in mind the needs of the client and of the individual cultural object. The efficient use of TLS systems for data recording thus becomes a relevant topic, given the huge amount of data the systems collect while recording. Even small projects can turn into huge point-cloud datasets that end users such as architects or archaeologists cannot deal with, as their technical equipment does not meet the requirements of the dataset, nor do they have the software tools to use the data, current software tools still being highly priced. Even the necessary interpretation of the dataset can be a tough task if the people who have to work with the point cloud are not properly trained to understand TLS and the results it creates. The use of TLS systems has to take into account the project requirements of the individual heritage object, such as the required accuracy, standards for levels of detail (e.g. "Empfehlungen für die Baudokumentation", Günther Eckstein, Germany), the required kind of deliverables (visualizations, 2D drawings, true-deformation drawings, 3D models, BIM or 4D animations) as well as the
Dose-shaping using targeted sparse optimization
Sayre, George A.; Ruan, Dan [Department of Radiation Oncology, University of California - Los Angeles School of Medicine, 200 Medical Plaza, Los Angeles, California 90095 (United States)
2013-07-15
E_tot^sparse improves the tradeoff between planning goals by 'sacrificing' voxels that have already been violated to improve PTV coverage, PTV homogeneity, and/or OAR sparing. In doing so, overall plan quality is increased, since these large violations only arise if a net reduction in E_tot^sparse occurs as a result. For example, large violations of the dose prescription in the PTV in E_tot^sparse-optimized plans will naturally localize to voxels in and around PTV-OAR overlaps, where OAR sparing may be increased without compromising target coverage. The authors compared the results of our method and the corresponding clinical plans using analyses of DVH plots, dose maps, and two quantitative metrics that quantify PTV homogeneity and overdose. These metrics do not penalize underdose, since E_tot^sparse-optimized plans were planned such that their target coverage was similar to or better than that of the clinical plans. Finally, plan deliverability was assessed with the 2D modulation index. Results: The proposed method was implemented using IBM's CPLEX optimization package (ILOG CPLEX, Sunnyvale, CA) and required 1-4 min to solve with a 12-core Intel i7 processor. In the testing procedure, the authors optimized for several points on the Pareto surface of four 7-field 6 MV prostate cases that were optimized for different levels of PTV homogeneity and OAR sparing. The generated results were compared against each other and the clinical plan by analyzing their DVH plots and dose maps. After developing intuition by planning the four prostate cases, which had relatively few tradeoffs, the authors applied our method to a 7-field 6 MV pancreas case and a 9-field 6 MV head-and-neck case to test the potential impact of our method on more challenging cases. The authors found that our formulation: (1) provided excellent flexibility for balancing OAR sparing with PTV homogeneity; and (2) permitted the dose planner more control over the evolution of the PTV
Christoph, M. Nes, N. van Pauwelussen, J. Mansvelders, R. Horst, A.R.A. van der & Hoedemaeker, M.
2011-01-01
The main objective of the project PROLOGUE (PROmoting real Life Observations for Gaining Understanding of road user behaviour in Europe) is to explore the feasibility and usefulness of a large-scale European naturalistic driving observation study. The work described in this deliverable focused on th
Olson, C.L. [ECN Solar Energy, Petten (Netherlands)
2013-10-15
This deliverable makes available the life-cycle inventory used to calculate the energy payback time and the carbon footprint of the final Apollon concentrating photovoltaics (CPV) design developed. The data below relate to one Apollon module. The results are to be published in Environmental Science and Technology, in a paper, 'Sustainability of Materials and Costs of Materials in a Mirror-based Concentrating Photovoltaic System'. Reference is made to the results for the Spectrolab triple junction solar cell in the following two studies: (1) 'Life cycle assessment of high-concentration photovoltaic systems' (Prog. Photovolt: Res. Appl., vol. 21, pp. 379-388, 2013), and (2) 'Life Cycle Analysis of Two New Concentrator PV Systems', in 23rd European Photovoltaic Solar Energy Conference, Valencia, Spain, 2008.
Francescon, Paolo, E-mail: paolo.francescon@ulssvicenza.it; Satariano, Ninfa [Department of Radiation Oncology, Ospedale Di Vicenza, Viale Rodolfi, Vicenza 36100 (Italy); Beddar, Sam [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77005 (United States); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, Indiana 46202 (United States)
2014-10-15
Purpose: To evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of k{sub Qclin,Qmsr}{sup fclin,fmsr} for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of k{sub Qclin,Qmsr}{sup fclin,fmsr} enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator, measured with two different types of dosimeters (the PTW 60012 diode and the PTW PinPoint 31014 microchamber), and the PDDs and OARs measured with the Exradin W1 plastic scintillator detector (PSD), and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
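The Toeplitz/Levinson step can be sketched with a standard Levinson-Durbin recursion (a generic textbook version, not the authors' code). For a valid autocorrelation the reflection coefficient of each stage stays below 1 in magnitude, which is the stability property the abstract mentions.

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations
    for the prediction-error filter of the given order.

    r -- autocorrelation sequence r[0..order]
    Returns (a, reflection): a is the filter [1, a1, ..., a_order],
    reflection holds the reflection coefficient of each stage.
    """
    a = np.array([1.0])
    err = r[0]                       # prediction-error power
    reflection = []
    for m in range(1, order + 1):
        # k = -(r[m] + sum_i a_i * r[m-i]) / err
        k = -np.dot(a, r[m:0:-1]) / err
        reflection.append(k)
        a_ext = np.concatenate([a, [0.0]])
        a = a_ext + k * a_ext[::-1]  # order-update of the filter
        err *= (1.0 - k * k)         # error power shrinks each stage
    return a, reflection

# AR(1)-like autocorrelation r[k] = 0.5**k: the order-2 filter reduces
# to [1, -0.5, 0] and the second reflection coefficient vanishes.
a, refl = levinson([1.0, 0.5, 0.25], 2)
```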
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage at maximum power, the current at maximum power, and the maximum power) is plotted as a function of the time of day.
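The maximization described above, finding the voltage where dP/dV = 0, can be sketched numerically with a single-diode panel model. The model parameters below are hypothetical illustration values, not the project's measured panel data.

```python
import math

def panel_current(v, i_l=5.0, i_0=1e-9, v_t=1.3):
    """Single-diode solar panel model (assumed parameters): light
    current minus the diode's exponential dark current, in amperes."""
    return i_l - i_0 * (math.exp(v / v_t) - 1.0)

def maximum_power_point(v_max=30.0, steps=30000):
    """Locate the voltage where dP/dV = 0 by scanning P = V * I(V)."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps):
        v = v_max * i / steps
        p = v * panel_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

v_mp, p_mp = maximum_power_point()
```

With these assumed parameters the open-circuit voltage is about 29 V and the maximum power point sits a few volts below it, where the product V*I(V) peaks.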
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm, which uses two maximum dynamic flow algorithms, is proposed to solve the problem.
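The algorithm above is built on repeated maximum-flow computations. As a building-block sketch, here is a static maximum-flow routine (Edmonds-Karp), leaving out the transit times that make the paper's setting dynamic; the example network is invented for illustration.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on a dense capacity matrix.
    capacity[u][v] is the arc capacity; returns the max flow value."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total          # no augmenting path left
        # Bottleneck residual capacity along the path.
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Augment along the path (negative flow encodes residual arcs).
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# A small invented network: node 0 is the source, node 3 the sink.
network = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
]
value = max_flow(network, source=0, sink=3)
```

By max-flow/min-cut duality the returned value equals the capacity of a minimum cut, which is the object the inverse problem perturbs.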
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of CCs used in the design of an SFCL can be determined.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Strenge, D.L.; Peloquin, R.A.
1981-04-01
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contribution to dose by radionuclide and exposure mode is also printed if requested.
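The basic chain such a code follows, release, atmospheric dilution, then dose via conversion factors, can be sketched for the inhalation pathway. Every numeric value below is a hypothetical placeholder, not a value from HADOC's data library or the Hanford dispersion model.

```python
def inhalation_dose(release_bq, chi_over_q, breathing_rate, dcf):
    """Inhalation dose from an acute release (toy pathway model).

    release_bq     -- total activity released (Bq)
    chi_over_q     -- atmospheric dilution factor at the receptor (s/m^3)
    breathing_rate -- receptor breathing rate (m^3/s)
    dcf            -- inhalation dose conversion factor (Sv/Bq)
    """
    # Time-integrated air concentration seen by the receptor (Bq*s/m^3),
    # then activity inhaled (Bq), then committed dose (Sv).
    inhaled_bq = release_bq * chi_over_q * breathing_rate
    return inhaled_bq * dcf

# Hypothetical illustration numbers only.
dose = inhalation_dose(release_bq=1e9, chi_over_q=1e-5,
                       breathing_rate=3.3e-4, dcf=5e-8)
```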
Can digoxin dose requirements be predicted?
Dobbs, S M; Mawer, G E; Rodgers, M; Woodcock, B G; Lucas, S B
1976-04-01
A search for patient variables relevant to digoxin dose requirements was made in forty-three patients with a wide range of renal and hepatic function. The daily dose of digoxin needed to achieve a mean serum concentration of 1.5 ng/ml, the standardized dose, was calculated for each patient. The standardized dose correlated significantly with the following variables, in descending order of correlation coefficient: creatinine clearance, serum creatinine concentration, body weight and serum albumin concentration. An equation containing the two independent variables, creatinine clearance and serum albumin concentration, had a significantly stronger correlation with standardized dose than creatinine clearance alone. Attempts were made in each patient to predict the standardized dose using both empirical prescribing methods and the published nomograms. Although a maximum of 70% of the variance of the standardized dose was explained, this corresponded approximately to one patient in three having a predicted dose outside the 95% confidence limits for the standardized dose. There remain important sources of individual variation in digoxin dose requirements yet to be identified. Future application of empirical prescribing methods, such as multiple linear regression and Bayes' theorem, to prescription for large, defined patient groups may improve dose prediction for individual patients.
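The two-variable prediction (creatinine clearance plus serum albumin) amounts to multiple linear regression. A sketch on synthetic data follows; the "true" coefficients are invented for illustration and are not the paper's fitted equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic patients: creatinine clearance (ml/min), serum albumin (g/l).
n = 43
ccr = rng.uniform(10, 120, n)
alb = rng.uniform(25, 50, n)

# Hypothetical relation for the standardized daily dose (mg) plus noise;
# these coefficients are illustration values only.
dose = 0.05 + 0.0025 * ccr + 0.002 * alb + rng.normal(0, 0.01, n)

# Fit dose ~ intercept + ccr + alb by ordinary least squares.
X = np.column_stack([np.ones(n), ccr, alb])
coef, *_ = np.linalg.lstsq(X, dose, rcond=None)

predicted = X @ coef
r2 = 1 - np.sum((dose - predicted) ** 2) / np.sum((dose - dose.mean()) ** 2)
```

On real patients the explained variance was far lower (at most 70%), which is the paper's point about residual individual variation.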
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class separates into the difference between cross entropy and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
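The maximum entropy model under a mean constraint is the familiar exponential family. A minimal numerical sketch on a finite support (Boltzmann-Gibbs-Shannon case only, not the Tsallis extension discussed in the paper) solves for the Lagrange multiplier by bisection.

```python
import math

def maxent_distribution(support, target_mean, tol=1e-10):
    """Maximum Boltzmann-Gibbs-Shannon entropy distribution on a finite
    support with a fixed mean: p_i proportional to exp(lam * x_i).
    The multiplier lam is found by bisection, since the constrained
    mean is monotonically increasing in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# With the mean fixed at the support midpoint, maximum entropy is uniform;
# pushing the mean up tilts the distribution toward larger outcomes.
p_uniform = maxent_distribution([1, 2, 3, 4], 2.5)
p_tilted = maxent_distribution([1, 2, 3, 4], 3.5)
```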
2012-09-10
known that most drugs are chiral compounds, and in some cases, different enantiomers of a compound may have significantly different pharmacological...effects.27 However, most drugs on the market are racemic mixtures, presumably due to expenses in chiral synthesis/ separation, and, in most cases...which the drug molecules may interact. We therefore decided not to further consider the impact of molecular chirality on MRDD in the present study
Acoustic dose and acoustic dose-rate.
Duck, Francis
2009-10-01
Acoustic dose is defined as the energy deposited by absorption of an acoustic wave per unit mass of the medium supporting the wave. Expressions for acoustic dose and acoustic dose-rate are given for plane-wave conditions, including temporal and frequency dependencies of energy deposition. The relationship between the acoustic dose-rate and the resulting temperature increase is explored, as is the relationship between acoustic dose-rate and radiation force. Energy transfer from the wave to the medium by means of acoustic cavitation is considered, and an approach is proposed in principle that could allow cavitation to be included within the proposed definitions of acoustic dose and acoustic dose-rate.
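For plane-wave conditions the dose-rate definition above reduces to absorbed power per unit mass, commonly written 2*alpha*I/rho, with an initial heating rate of 2*alpha*I/(rho*c) before conduction matters. The tissue numbers below are assumed illustration values, not figures from the paper.

```python
def acoustic_dose_rate(intensity, absorption, density):
    """Plane-wave acoustic dose-rate in Gy/s (J/kg/s): power absorbed
    per unit volume is 2*alpha*I, divided by the density.

    intensity  -- acoustic intensity I (W/m^2)
    absorption -- amplitude absorption coefficient alpha (Np/m)
    density    -- medium density rho (kg/m^3)
    """
    return 2.0 * absorption * intensity / density

def initial_heating_rate(dose_rate, specific_heat):
    """Temperature rise rate (K/s) before conduction: dT/dt = D' / c."""
    return dose_rate / specific_heat

# Assumed soft-tissue illustration values around 1 MHz:
# alpha ~ 5 Np/m, I = 1 W/cm^2 = 1e4 W/m^2, rho = 1050 kg/m^3,
# specific heat c = 3600 J/(kg K).
dr = acoustic_dose_rate(1e4, 5.0, 1050.0)
dT = initial_heating_rate(dr, 3600.0)
```

With these numbers the dose-rate is roughly 95 Gy/s and the initial heating rate a few hundredths of a kelvin per second, which is why dose-rate and temperature rise are so closely linked in the text.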
Lee, Gregory K.
2015-01-01
A digital elevation model (DEM) of the entire country of the Islamic Republic of Mauritania was produced using Shuttle Radar Topography Mission (SRTM) data as required for deliverable 65 of the contract. In addition, because of significant recent advancements of availability, seamlessness, and validity of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global elevation data, the U.S. Geological Survey (USGS) extended its efforts to include a higher resolution countrywide ASTER DEM as value added to the required Deliverable 63, which was limited to five areas within the country. Both SRTM and ASTER countrywide DEMs have been provided in ERDAS Imagine (.img) format that is also directly compatible with ESRI ArcMap, ArcGIS Explorer, and other GIS applications.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
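The MISO strategy above (equal power across antennas, uncorrelated signals, receiver-only CSI) can be checked by Monte Carlo. This is a generic sketch of the ergodic rate E[log2(1 + (SNR/M) * ||h||^2)] under i.i.d. Rayleigh fading, not the paper's closed-form expression.

```python
import numpy as np

def miso_throughput(n_tx, snr_db, trials=20000, seed=1):
    """Monte Carlo ergodic rate of a MISO Rayleigh channel with equal
    power allocation over n_tx antennas and perfect CSI at the receiver:
    C = E[log2(1 + (SNR/n_tx) * ||h||^2)]."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # i.i.d. complex Gaussian channel gains, unit variance per antenna.
    h = (rng.normal(size=(trials, n_tx))
         + 1j * rng.normal(size=(trials, n_tx))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)
    return float(np.mean(np.log2(1 + snr / n_tx * gain)))

c1 = miso_throughput(1, 10.0)   # single antenna at 10 dB SNR
c4 = miso_throughput(4, 10.0)   # four antennas, same total power
```

More antennas harden the effective channel ||h||^2 around its mean, so the ergodic rate rises toward log2(1 + SNR) even though total transmit power is unchanged.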
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
Lesmana, Surya Budi; Putra, Sri Atmaja [Muhammadiyah University of Yogyakarta, Yogyakarta (Indonesia)
2011-10-15
The overall objective of the CASINDO programme is to establish a self-sustaining and self-developing structure at both the national and regional level to build and strengthen human capacity to enable the provinces of North Sumatra, Yogyakarta, Central Java, West Nusa Tenggara (WNT) and Papua to formulate sound policies for renewable energy and energy efficiency and to develop and implement sustainable energy projects. To achieve the CASINDO objective seven Technical Working Groups have been established with the aim to conduct the technical activities under the various work packages and to produce the agreed deliverables. This report presents results from Technical Working Group IV on Renewable Energy project development. Its main aims were: To identify suitable non-hydro RE projects that can be developed in the province; To conduct an energy needs assessment in a selected location; To develop a business plan for a proposed solution to the identified main energy problem of the target community; To identify potential investors; To construct the project.
Kelley, George A; Kelley, Kristi S; Callahan, Leigh F
2017-01-01
Introduction: While anxiety is a major public health problem in adults with arthritis and other rheumatic diseases (AORD), the effects of exercise on anxiety in adults are not well established despite numerous studies on this topic. The purpose of this study is to conduct a systematic review with an aggregate data meta-analysis to determine the effects of community-deliverable exercise interventions (aerobic, strength training or both) on anxiety in adults with AORD. Methods and analysis: Randomised controlled exercise intervention trials ≥4 weeks and published in any language up to 31 December 2016 will be included. Studies will be retrieved by searching 8 electronic databases, cross-referencing and expert review. Dual selection and abstraction of data will occur. The primary outcome will be changes in anxiety. Risk of bias will be assessed using the Cochrane risk of bias assessment instrument, while confidence in the cumulative evidence will be assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) instrument. Standardised effect sizes for anxiety will be calculated from each study and then pooled using the inverse variance heterogeneity (IVhet) model. Meta-regression based on the IVhet model will be used to examine the relationship between changes in anxiety and selected covariates. Dissemination: The results of this study will be presented at a professional conference and published in a peer-reviewed journal. Trial registration number: CRD42016048728. PMID:28264834
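The pooling step can be illustrated with plain inverse-variance weighting. This is a fixed-effect simplification; the protocol's IVhet model adjusts these weights for between-study heterogeneity. The trial effects below are invented.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of standardized effect
    sizes. Each study is weighted by 1/variance; returns the pooled
    effect and its standard error."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    return pooled, math.sqrt(1.0 / total)

# Three hypothetical trials: standardized mean differences (negative =
# anxiety reduced) and their sampling variances.
effect, se = pooled_effect([-0.4, -0.2, -0.3], [0.04, 0.02, 0.05])
```

The most precise trial (smallest variance) dominates the pooled estimate, which is exactly what the heterogeneity adjustment in IVhet moderates.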
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to the search for distribution functions of physical values. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions. This allows MENT to be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Dynamically accumulated dose and 4D accumulated dose for moving tumors
Li, Heng; Li, Yupeng; Zhang, Xiaodong; Li, Xiaoqiang; Liu, Wei; Gillin, Michael T.; Zhu, X. Ronald
2012-01-01
Purpose: The purpose of this work was to investigate the relationship between dynamically accumulated dose (dynamic dose) and 4D accumulated dose (4D dose) for irradiation of moving tumors, and to quantify the dose uncertainty induced by tumor motion. Methods: The authors established that regardless of treatment modality and delivery properties, the dynamic dose will converge to the 4D dose, instead of the 3D static dose, after multiple deliveries. The bounds of the dynamic dose, or the maximum estimation error of using the 4D or static dose, were established for the 4D and static doses, respectively. Numerical simulations were performed (1) to prove the principle that for each phase, after multiple deliveries, the average number of deliveries for any given time converges to the total number of fractions (K) over the number of phases (N); (2) to investigate the dose difference between the 4D and dynamic doses as a function of the number of deliveries for deliveries of a “pulsed beam”; and (3) to investigate the dose difference between the 4D and dynamic doses as a function of delivery time for deliveries of a “continuous beam.” A Poisson model was developed to estimate the mean dose error as a function of the number of deliveries or the delivery time for both the pulsed and continuous beams. Results: The numerical simulations confirmed that the number of deliveries for each phase converges to K/N, assuming a random starting phase. Simulations for the pulsed and continuous beams also suggested that the dose error is a strong function of the number of deliveries and/or total delivery time and could be a function of the breathing cycle, depending on the mode of delivery. The Poisson model agrees well with the simulation. Conclusions: The dynamically accumulated dose will converge to the 4D accumulated dose after multiple deliveries, regardless of treatment modality. Bounds of the dynamic dose could be determined using quantities derived from 4D doses, and the mean dose
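The first simulation principle, convergence of per-phase delivery counts to K/N, can be checked with a toy model. This is a sketch under the simplifying assumptions that each fraction starts at a uniformly random breathing phase and that the pulsed beam hits exactly one phase per delivery:

```python
import random

def phase_counts(num_fractions, num_phases, seed=0):
    """Count how often each breathing phase receives the pulsed beam
    when every fraction starts at a uniformly random phase."""
    rng = random.Random(seed)
    counts = [0] * num_phases
    for _ in range(num_fractions):
        counts[rng.randrange(num_phases)] += 1
    return counts

K, N = 100000, 10
counts = phase_counts(K, N)
# Each phase receives close to K / N = 10000 deliveries,
# with binomial fluctuations of order sqrt(K/N) ≈ 100.
```

With a realistic K of 30 to 40 fractions the relative fluctuations are much larger, which is exactly the motion-induced dose uncertainty the paper quantifies.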
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Finite doses are employed in experimental toxicology studies. Under the traditional methodology, the point of departure (POD) value for low dose extrapolation is identified as one of these doses. Dose spacing necessarily precludes a more accurate description of the POD value. ...
Failure-probability driven dose painting
Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.; Aznar, Marianne C.; Kristensen, Claus A.; Rasmussen, Jacob; Specht, Lena [Department of Radiation Oncology, Rigshospitalet, University of Copenhagen, Copenhagen 2100 (Denmark); Berthelsen, Anne K. [Department of Radiation Oncology, Rigshospitalet, University of Copenhagen, Copenhagen 2100, Denmark and Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen 2100 (Denmark); Bentzen, Søren M. [Department of Radiation Oncology, Rigshospitalet, University of Copenhagen, Copenhagen 2100, Denmark and Departments of Human Oncology and Medical Physics, University of Wisconsin, Madison, Wisconsin 53792 (United States)
2013-08-15
Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: For patients treated at our center, five tumor subvolumes are delineated, extending outward from the center of the tumor (the PET-positive volume). The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose-response is extracted from the literature, and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose-response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%-91%), compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
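The stated bound, maximum seismic moment limited to the injected volume times the modulus of rigidity, converts to a moment magnitude as in the sketch below. The rigidity value and injected volume are illustrative assumptions, and the Hanks-Kanamori relation is assumed for the moment-to-magnitude conversion:

```python
import math

def max_moment_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on induced earthquake size: seismic moment
    M0 = G * V (rigidity times injected volume), converted to moment
    magnitude with the Hanks-Kanamori relation
    Mw = (log10(M0 [N*m]) - 9.05) / 1.5."""
    m0 = rigidity_pa * injected_volume_m3   # seismic moment, N*m
    return (math.log10(m0) - 9.05) / 1.5

# Example: 10,000 m^3 of injected fluid with G = 3e10 Pa
# bounds the induced event near Mw 3.6.
mw = max_moment_magnitude(1.0e4)
```

Because the bound is logarithmic in volume, each tenfold increase in injected volume raises the magnitude ceiling by only about 2/3 of a magnitude unit.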
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial-time approximation algorithm is proposed.
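A minimal illustration of the throughput computation that the paper's linear program generalizes: plain maximum flow on a toy graph, computed by augmenting paths. The coding-aware conflict constraints that make the paper's formulation interesting are omitted here; this sketch only shows the underlying flow problem.

```python
from collections import defaultdict

def max_flow(cap, s, t):
    """Maximum s-t flow by repeated augmenting paths (Ford-Fulkerson
    with BFS).  cap: dict of dicts giving edge capacities."""
    res = defaultdict(lambda: defaultdict(int))   # residual capacities
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] += c

    def find_path():
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = [s]
        while queue:
            u = queue.pop(0)
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    if v == t:
                        return parent
                    queue.append(v)
        return None

    flow = 0
    while (parent := find_path()) is not None:
        # find the bottleneck capacity along the path
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, res[u][v])
            v = u
        # push the bottleneck flow and update residuals
        v = t
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
    return flow

# Toy network: s->a (cap 3), a->t (cap 2), s->t (cap 1).
caps = {"s": {"a": 3, "t": 1}, "a": {"t": 2}}
```

In the LP view, the same answer comes from maximizing total flow into the sink subject to capacity and conservation constraints; interference and coding enter as additional linear constraints coupling the links.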
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of the set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving the satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. Detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. Detailed discussion on circuit-oriented model development is given and then MPPT effectiveness of various converter systems is verified through simulations. Proposed theory and analysis is validated through experimental investigations.
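The load/duty-ratio condition can be illustrated for one topology. This is a sketch assuming an ideal lossless buck converter in continuous conduction, for which the input resistance presented to the array is R_load / D^2; it is a standard textbook result, not the paper's full analysis, and the panel parameters are illustrative.

```python
import math

def buck_duty_for_mpp(v_mpp, i_mpp, r_load):
    """Duty ratio D that places an ideal CCM buck converter's input at
    the PV maximum power point.  The converter's input resistance is
    R_in = R_load / D**2, and MPP operation requires R_in = V_mpp/I_mpp,
    so D = sqrt(R_load * I_mpp / V_mpp).  Returns None when no D <= 1
    exists, i.e. the load falls outside the trackable range, matching
    the observation that some loads drive the operating point away from
    the true MPP."""
    r_mpp = v_mpp / i_mpp               # optimal source resistance
    d = math.sqrt(r_load / r_mpp)
    return d if d <= 1.0 else None

# Illustrative small panel: V_mpp = 17.2 V, I_mpp = 3.5 A (R_mpp ≈ 4.91 ohm)
d_ok = buck_duty_for_mpp(17.2, 3.5, r_load=3.0)    # feasible load
d_bad = buck_duty_for_mpp(17.2, 3.5, r_load=8.0)   # outside optimal range
```

For the buck topology R_in can only be raised above R_load, so MPP tracking is possible only for loads below V_mpp/I_mpp; the boost and buck-boost topologies shift this feasible range, which is one reason topology selection depends on the load.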
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of subjects' five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
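The reported gains from averaging multiple trials or days are consistent with the Spearman-Brown prophecy formula. The check below is added here for illustration; the authors' exact variance-components model may differ slightly.

```python
def spearman_brown(r_single, k):
    """Reliability of the mean of k parallel measurements, given the
    reliability r_single of a single measurement:
    r_k = k * r / (1 + (k - 1) * r)."""
    return k * r_single / (1.0 + (k - 1) * r_single)

# One trial per day has reliability 0.939; averaging five trials
# raises it to about 0.987, matching the reported value.
five_trials = spearman_brown(0.939, 5)

# A single day's five-trial average has reliability 0.836; averaging
# over two days raises it to about 0.911, again matching the report.
two_days = spearman_brown(0.836, 2)
```

The formula also shows diminishing returns: going from two to three days adds far less reliability than going from one to two.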
Maximum Phonation Time: Variability and Reliability
R. Speyer; H.C.A. Bogaardt; V.L. Passos; N.P.H.D. Roodenburg; A. Zumach; M.A.M. Heijnen; L.W.J. Baijens; S.J.H.M. Fleskens; J.W. Brunings
2010-01-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia v
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
An updated dose assessment for Rongelap Island
Robison, W.L.; Conrado, C.L.; Bogen, K.T.
1994-07-01
We have updated the radiological dose assessment for Rongelap Island at Rongelap Atoll using data generated from field trips to the atoll during 1986 through 1993. The database used for this dose assessment is tenfold greater than that available for the 1982 assessment. Details of each database are presented, along with the methods used to calculate the dose from each exposure pathway. The doses are calculated for a resettlement date of January 1, 1995. The maximum annual effective dose is 0.26 mSv/y (26 mrem/y). The estimated 30-, 50-, and 70-y integral effective doses are 0.0059 Sv (0.59 rem), 0.0082 Sv (0.82 rem), and 0.0097 Sv (0.97 rem), respectively. More than 95% of these estimated doses are due to cesium-137 (137Cs). About 1.5% of the estimated dose is contributed by strontium-90 (90Sr), and about the same amount each by plutonium-239+240 (239+240Pu) and americium-241 (241Am).
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
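The objective being maximized can be written down directly. The sketch below evaluates the l1-penalized Gaussian log-likelihood (up to an additive constant) for candidate precision matrices; the toy covariance, candidate matrices, and penalty weight are illustrative assumptions, and penalizing only the off-diagonal entries is one common convention.

```python
import numpy as np

def penalized_loglik(prec, emp_cov, lam):
    """Objective of l1-penalized Gaussian maximum likelihood (up to an
    additive constant): log det(Theta) - tr(S @ Theta) - lam*||Theta||_1,
    with the l1 penalty applied to the off-diagonal entries of the
    precision matrix Theta and S the empirical covariance."""
    eigs = np.linalg.eigvalsh(prec)
    if eigs.min() <= 0:
        return -np.inf                  # Theta must be positive definite
    off_diag_l1 = np.abs(prec).sum() - np.abs(np.diag(prec)).sum()
    return np.log(eigs).sum() - np.trace(emp_cov @ prec) - lam * off_diag_l1

# Toy check: for uncorrelated unit-variance data (S = I), the sparse
# diagonal precision scores higher than one with a spurious
# off-diagonal entry, which is what the penalty is designed to enforce.
S = np.eye(2)
theta_diag = np.eye(2)
theta_dense = np.array([[1.0, 0.3], [0.3, 1.0]])
```

The paper's contribution is solving the maximization of this concave objective at scale; the block coordinate descent view corresponds to repeatedly solving l1-penalized regressions over rows/columns of Theta.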
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
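One member of this maximum-entropy class, the Ornstein-Uhlenbeck position process, can be simulated with its exact transition density; the parameters below are illustrative, not taken from the paper.

```python
import math
import random

def simulate_ou(n_steps, dt, tau, sigma, x0=0.0, seed=1):
    """Exact-discretization sample path of the Ornstein-Uhlenbeck
    process dX = -(X/tau) dt + sigma dW.  The one-step transition is
    Gaussian with mean rho*x and variance s_stat*(1 - rho^2), where
    rho = exp(-dt/tau) and s_stat = sigma^2 * tau / 2 is the
    stationary variance."""
    rng = random.Random(seed)
    rho = math.exp(-dt / tau)                   # one-step autocorrelation
    s_stat = sigma * sigma * tau / 2.0          # stationary variance
    step_sd = math.sqrt(s_stat * (1.0 - rho * rho))
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = rho * x + rng.gauss(0.0, step_sd)
        path.append(x)
    return path

# Long path; its sample variance should approach sigma^2 * tau / 2 = 0.5.
path = simulate_ou(n_steps=200000, dt=0.1, tau=1.0, sigma=1.0)
```

Brownian motion and the integrated OU model in the same class are obtained as limiting or integrated versions of this process, which is why a single simulation scheme covers much of the hierarchy.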
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We define maximum resource variants of classical and dual bin packing. For the off-line variant, we analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, for dual bin packing, no on-line algorithm is competitive; for classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
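The constraint structure Visser describes can be checked numerically: maximizing Shannon entropy subject only to a fixed mean of log k yields p(k) ∝ k^(-α), and the exponent matching a given constraint value can be found by bisection. The support size K and target below are arbitrary illustrative choices.

```python
import numpy as np

def power_law(alpha, K):
    """Maximum-entropy distribution p_k ∝ k^(-alpha) on support k = 1..K,
    the form obtained by maximising Shannon entropy under a fixed <ln k>."""
    k = np.arange(1, K + 1)
    w = k ** (-alpha)
    return k, w / w.sum()

def mean_log(alpha, K):
    k, p = power_law(alpha, K)
    return (p * np.log(k)).sum()

# Bisection: find the exponent alpha whose power law matches a target <ln k>.
target, K = 1.0, 10_000
lo, hi = 0.01, 10.0  # mean_log is decreasing in alpha on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mean_log(mid, K) > target:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
print(round(mean_log(alpha, K), 6))  # ≈ 1.0 by construction
```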
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
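A hedged sketch contrasting the two survival laws discussed: the Boltzmann-Gibbs maximum entropy derivation gives exponential survival, while the Tsallis/cutoff variant gives survival that vanishes at a finite dose. The specific functional form used for the Tsallis case and all parameter values here are assumptions for illustration, not the paper's fitted expressions.

```python
import numpy as np

def survival_exponential(D, D0):
    """Boltzmann-Gibbs maximum entropy: exponential cell survival."""
    return np.exp(-D / D0)

def survival_tsallis(D, D0, gamma):
    """Tsallis-entropy variant with a cutoff: survival vanishes at a finite
    dose D0 (functional form assumed here purely for illustration)."""
    return np.where(D < D0, np.clip(1.0 - D / D0, 0.0, 1.0) ** gamma, 0.0)

D = np.linspace(0.0, 12.0, 121)
s_exp = survival_exponential(D, D0=2.0)
s_tsa = survival_tsallis(D, D0=8.0, gamma=2.5)

# Both start at 1 and decrease, but only the cutoff form reaches exactly zero.
print(s_exp[0], s_tsa[0], s_tsa[-1])
```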
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, let $\\lambda_1(G),\\lambda_2(G),...,\\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\\sum_{i=1}^{n}e^{\\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
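The definition can be computed directly from the adjacency spectrum; for example, the triangle C_3 has eigenvalues 2, -1, -1, so EE = e^2 + 2e^{-1}.

```python
import numpy as np

def estrada_index(A):
    """EE(G) = sum_i exp(lambda_i), over eigenvalues of the adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(A.astype(float))
    return np.exp(eigenvalues).sum()

# Triangle C3: adjacency eigenvalues are 2, -1, -1, so EE = e^2 + 2/e ≈ 8.1248.
A_triangle = np.array([[0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])
print(abs(estrada_index(A_triangle) - (np.e ** 2 + 2 / np.e)) < 1e-9)
```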
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
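A trivial numeric sketch of the prediction equation; the inoculum, yield, and MIC values below are hypothetical, and 0.59 is the fitted coefficient reported in the abstract.

```python
def predict_max_biomass(x0, yield_per_lactate, mic_lactate, coefficient=0.59):
    """Prediction from the study: Xmax - X0 = (0.59 ± 0.02) · Y_X/P · C,
    where Y_X/P is biomass yield per unit lactate and C is the lactate MIC."""
    return x0 + coefficient * yield_per_lactate * mic_lactate

# Hypothetical strain: X0 = 0.1 g/L inoculum, Y_X/P = 0.05 g biomass per g
# lactate, MIC of lactate C = 160 g/L.
x_max = predict_max_biomass(x0=0.1, yield_per_lactate=0.05, mic_lactate=160.0)
print(round(x_max, 3))  # 0.1 + 0.59 * 0.05 * 160 = 4.82
```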
Dose Estimation in Radiation Cytogenetics
Ainsbury, Elizabeth; Lloyd, David
2009-04-01
Throughout the radiation cytogenetics community, a core group of methods exists to produce an estimate of radiation dose from an observed yield of DNA damage in blood. Mathematical and statistical analysis is extremely important for accurate assessment of data and results, and a number of classical statistical methods are commonly employed. However, the large number of statistical techniques, and the complexity of the methods, can lead to errors in data analysis and misinterpretation of results. Cytogenetics dose estimation software has been developed to simplify mathematical and statistical analysis of cytogenetic data. "Dose Estimate" is a collection of mathematical and statistical methods based on the cytogenetics methods and programs written by Alan Edwards, David Papworth, and others. Details of the biological and mathematical techniques used in the software will be presented, including maximum likelihood estimation of yield curve coefficients for the dicentric or translocation assays. Proposals for increasing the sophistication of the software through implementation of recently published Bayesian analysis techniques for cytogenetics will also be outlined.
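A minimal sketch of the kind of maximum likelihood yield-curve fit such software performs, assuming the standard linear-quadratic dicentric yield Y(D) = c + αD + βD² with Poisson-distributed counts; all data here are synthetic and the optimizer choice is mine, not the software's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Synthetic dicentric data: yield per cell Y(D) = c + alpha*D + beta*D^2.
true = np.array([0.005, 0.02, 0.06])               # c, alpha, beta (illustrative)
doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])   # Gy
cells = 10_000                                     # cells scored per dose point
lam = cells * (true[0] + true[1] * doses + true[2] * doses ** 2)
counts = rng.poisson(lam)

def nll(params):
    """Poisson negative log-likelihood for the linear-quadratic yield curve."""
    c, a, b = params
    mu = cells * (c + a * doses + b * doses ** 2)
    if np.any(mu <= 0):
        return np.inf
    return float((mu - counts * np.log(mu)).sum())

fit = minimize(nll, x0=[0.01, 0.01, 0.01], method="Nelder-Mead")
c_hat, a_hat, b_hat = fit.x
print(round(b_hat, 3))  # beta recovered close to the true 0.06
```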
Inter- and intrafractional dose uncertainty in hypofractionated Gamma Knife radiosurgery.
Kim, Taeho; Sheehan, Jason; Schlesinger, David
2016-03-08
The purpose of this study is to evaluate inter- and intrafractional dose variations resulting from head position deviations for patients treated with the Extend relocatable frame system utilized in hypofractionated Gamma Knife radiosurgery (GKRS). While previous reports characterized the residual setup and intrafraction uncertainties of the system, the dosimetric consequences have not been investigated. A digital gauge was used to measure the head position of 16 consecutive Extend patients (62 fractions) at the time of simulation, before each fraction, and immediately following each fraction. Vector interfraction (difference between simulation and prefraction positions) and intrafraction (difference between postfraction and prefraction positions) shifts in patient position were calculated. Planned dose distributions were shifted by the offset to determine the time-of-treatment dose. Variations in mean and maximum target and organ at risk (OAR) doses as a function of positional shift were evaluated. The mean vector interfraction shift was 0.64 mm (standard deviation (SD): 0.25 mm, maximum: 1.17 mm). The mean intrafraction shift was 0.39 mm (SD: 0.25 mm, maximum: 1.44 mm). The mean variation in mean target dose was 0.66% (SD: 1.15%, maximum: 5.77%) for interfraction shifts and 0.26% (SD: 0.34%, maximum: 1.85%) for intrafraction shifts. The mean variation in maximum dose to OARs was 7.15% (SD: 5.73%, maximum: 30.59%) for interfraction shifts and 4.07% (SD: 4.22%, maximum: 17.04%) for intrafraction shifts. Linear fitting of the mean variation in maximum dose to OARs as a function of position yielded dose deviations of 10.58%/mm for interfractional shifts and 7.69%/mm for intrafractional shifts. Positional uncertainties when performing hypofractionated Gamma Knife radiosurgery with the Extend system are small and comparable to frame-based uncertainties (< 1 mm). However, the steep dose gradient characteristics of GKRS mean that the dosimetric consequences of
McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.
2017-08-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organs at risk criteria levels evaluated compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical across the 12 patients tested, and automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose in one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction. It is a promising approach for fully automated treatment
Intangible assets for intangible deliverables
Elsmore, Matthew J.
2008-01-01
As the dominant economic business model in Europe, services are important when we consider intangible assets. This article argues a case for some kind of 'special relationship' between service firms and trade marks, specifically bearing in mind the CTM system and new EU services law. On the question of whether there can be constructive overlap between trade marks and services and how this emerges, the analysis shows there is reason both for and against thinking that together the relevant sets of laws, among other things, ease the transition from national- to Community-based trading for the overwhelming majority of EU businesses. The article suggests a starting point for a fresh yet reassuringly ordinary dialogue within trade mark law, one that asserts for it a central role in realising predicted economic benefits of the Internal Market.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
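A much-simplified sketch of the hypothesis-selection step: correlate the received samples against each hypothesized phase-coded template and keep the best-scoring one. This stands in for the MAP likelihood statistic and omits the phase-estimation stage; all signals and parameters below are invented.

```python
import numpy as np

def decode(received, templates):
    """Pick the hypothesised signal with the highest correlation statistic --
    a simplified stand-in for the MAP likelihood statistic in the patent."""
    scores = [np.dot(received, t) for t in templates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(1)
n = 256
# Two hypothesised transmissions as phase-coded waveforms.
phases = [rng.uniform(0, 2 * np.pi, n) for _ in range(2)]
templates = [np.cos(p) for p in phases]

# Transmit signal 1 with a small random phase perturbation plus additive noise.
perturb = rng.normal(0, 0.1, n)
received = np.cos(phases[1] + perturb) + rng.normal(0, 0.5, n)

best, scores = decode(received, templates)
print(best)  # expected: 1
```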
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
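The piecewise-linearization idea can be sketched with `scipy.optimize.linprog`: each probability is split into bounded segment variables whose objective coefficients are the segment slopes of -x ln x; because that function is concave (slopes decrease), the LP fills segments in order automatically. This is a toy version with a normalization and a mean constraint, not the paper's bounded-variable formulation or its revised simplex implementation:

```python
import numpy as np
from scipy.optimize import linprog

def maxent_lp(mean_target, values, n_seg=60):
    """Approximate the maximum-entropy pmf p over `values` with a given
    mean by piecewise-linearizing -p log p and solving a single LP."""
    n = len(values)
    w = 1.0 / n_seg                      # segment width; each p_i <= 1
    f = lambda x: 0.0 if x <= 0 else -x * np.log(x)
    # slope of f on segment k: (f((k+1)w) - f(kw)) / w, decreasing in k
    slopes = np.array([(f((k + 1) * w) - f(k * w)) / w for k in range(n_seg)])
    c = -np.tile(slopes, n)              # linprog minimizes, so negate
    # A_seg maps the stacked segment variables back onto p_i
    A_seg = np.repeat(np.eye(n), n_seg, axis=1)
    A_eq = np.vstack([np.ones(n) @ A_seg, np.asarray(values) @ A_seg])
    b_eq = np.array([1.0, mean_target])  # sum p = 1 and mean constraint
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, w)] * (n * n_seg))
    assert res.success
    return A_seg @ res.x                 # recover p_i from its segments
```

For symmetric constraints the LP reproduces the exact maximum-entropy answer (the uniform distribution) up to the segment granularity.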
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's increase in draft and trim due to its motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature; among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between these squat formulas to examine the differences between them and determine which one provides the most satisfactory results. To this end, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
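For illustration, one commonly quoted simplified formula of this family is Barrass's maximum squat for confined water, s_max = C_b * S^(2/3) * V_k^2.08 / 30 metres, with C_b the block coefficient, S the blockage factor (ship section over canal section) and V_k the speed in knots. The coefficients vary between published versions of the formula, so treat this sketch as an assumption rather than a reference implementation:

```python
def barrass_max_squat(cb, blockage, speed_kn):
    """One commonly quoted form of Barrass's simplified maximum-squat
    formula for confined water, returning squat in metres:
        s_max = Cb * S^(2/3) * Vk^2.08 / 30
    cb: block coefficient, blockage: S = As/Ac, speed_kn: speed in knots."""
    return cb * blockage ** (2.0 / 3.0) * speed_kn ** 2.08 / 30.0
```

The strong (roughly quadratic) speed dependence is why slowing down is the standard squat countermeasure in shallow canals.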
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
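The model class described (a shared fundamental frequency with per-channel amplitudes, phases and noise characteristics) suggests a grid search in which each candidate f0 is scored by summing per-channel Gaussian log-likelihood terms after a least-squares fit to a harmonic basis. A minimal sketch of that idea, not the paper's exact estimator:

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harm=5):
    """Grid-search ML pitch: all channels share f0 but have free
    amplitudes, phases and noise variances (an illustrative sketch)."""
    n = len(channels[0])
    t = np.arange(n) / fs
    best_f0, best_ll = None, -np.inf
    for f0 in f0_grid:
        # harmonic cos/sin basis shared by all channels
        Z = np.column_stack([f(2 * np.pi * f0 * h * t)
                             for h in range(1, n_harm + 1)
                             for f in (np.cos, np.sin)])
        ll = 0.0
        for x in channels:
            a, *_ = np.linalg.lstsq(Z, x, rcond=None)
            resid = x - Z @ a
            var = np.dot(resid, resid) / n
            ll += -0.5 * n * np.log(var)   # per-channel term, up to consts
        if ll > best_ll:
            best_f0, best_ll = f0, ll
    return best_f0
```

Because each channel contributes its own variance estimate, a noisy microphone is automatically down-weighted relative to a clean one.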
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
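The Poisson-likelihood idea can be sketched by fitting a Gaussian line plus a flat background to binned counts, minimizing the negative Poisson log-likelihood sum(mu_i - n_i ln mu_i) instead of chi-square. The function and model below are illustrative assumptions, not CORA's implementation (which derives a fixed-point equation for the line fluxes):

```python
import numpy as np
from scipy.optimize import minimize

def fit_line_poisson(bins, counts, center0, width0):
    """Maximum-likelihood fit of a Gaussian line plus flat background to
    low-count spectra under Poisson statistics (illustrative sketch)."""
    def neg_loglike(p):
        amp, center, width, bkg = p
        mu = bkg + amp * np.exp(-0.5 * ((bins - center) / width) ** 2)
        mu = np.maximum(mu, 1e-12)         # keep the Poisson rate positive
        return np.sum(mu - counts * np.log(mu))
    p0 = [counts.max(), center0, width0, max(counts.min(), 0.1)]
    res = minimize(neg_loglike, p0, method='Nelder-Mead',
                   options={'xatol': 1e-6, 'fatol': 1e-9, 'maxiter': 20000})
    return res.x
```

Unlike least squares, this objective remains well behaved when many bins contain zero counts, which is exactly the low-count regime the abstract targets.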
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system that detects the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
Inferring mechanisms from dose-response curves
Chow, Carson C.; Ong, Karen M.; Dougherty, Edward J.; Simons, S. Stoney
2011-01-01
The steady state dose-response curve of ligand-mediated gene induction usually appears to precisely follow a first-order Hill equation (Hill coefficient equal to 1). Additionally, various cofactors/reagents can affect both the potency and the maximum activity of gene induction in a gene-specific manner. Recently, we have developed a general theory for which an unspecified sequence of steps or reactions yields a first-order Hill dose-response curve (FHDC) for plots of the final product vs. ini...
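The first-order Hill dose-response curve discussed above has the closed form R(d) = A_max * d / (EC50 + d). A one-line sketch (parameter names are ours):

```python
def hill_response(dose, a_max, ec50, n=1.0):
    """Hill dose-response; n = 1 gives the first-order curve (FHDC)
    discussed above. a_max is the maximal activity, ec50 the potency."""
    return a_max * dose ** n / (ec50 ** n + dose ** n)
```

At d = EC50 the response is exactly half of A_max, which is the usual operational definition of potency; cofactors that shift EC50 or A_max change the curve without breaking its first-order shape.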
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
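The PCA step can be sketched on a toy ensemble: estimate the positional correlation matrix and take its eigendecomposition. For simplicity this uses the plain sample correlation estimate rather than the paper's maximum likelihood estimator, and one scalar coordinate per atom:

```python
import numpy as np

def correlation_pca(ensemble):
    """Principal components of the positional correlation matrix of an
    aligned ensemble. ensemble: array (n_structures, n_atoms) holding a
    single coordinate per atom (plain sample estimate, not the ML one)."""
    X = ensemble - ensemble.mean(axis=0)
    cov = X.T @ X / (len(ensemble) - 1)
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    evals, evecs = np.linalg.eigh(corr)
    order = np.argsort(evals)[::-1]       # largest mode first
    return evals[order], evecs[:, order]
```

Atoms that move together load heavily on the same leading eigenvector, which is what a "PCA plot" color-codes onto the structure.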
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ --> e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ --> e+ ν, π+ --> μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Yoshida, Ken; Mitomo, Masanori [Osaka National Hospital (Japan); Nose, Takayuki; Koizumi, Masahiko; Nishiyama, Kinji [Osaka Prefectural Center for Adult Diseases (Japan); Yoshida, Mineo [Sanda City Hospital, Hyogo (Japan)
2002-12-01
We employ a clinical target volume (CTV)-based dose prescription for high-dose-rate (HDR) interstitial brachytherapy. However, it is not easy to define the CTV and organs at risk (OAR) from X-ray film or CT scanning. To solve this problem, we have utilized metal markers since October 1999. Moreover, metal markers can help modify the dose prescription: by regulating the doses to the metal markers, the dose prescription can easily be refined. In this research, we investigated the usefulness of the metal markers. Between October 1999 and May 2001, 51 patients were implanted with metal markers at Osaka Medical Center for Cancer and Cardiovascular Diseases (OMCC), Osaka National Hospital (ONH) and Sanda City Hospital (SCH). Forty-nine patients (head and neck: 32; pelvis: 11; soft tissue: 3; breast: 3) using metal markers were analyzed. During the operation, we implanted 179 metal markers (49 patients) in the CTV and 151 markers (26 patients) in the OAR. At treatment planning, the CTV was reconstructed judging from the metal markers, applicator positions and operation records. Generally, we prescribed the tumoricidal dose to an isodose surface that covers the CTV. We also planned to limit the doses to the OAR below certain levels. The maximum normal tissue doses were set at 80%, 150%, 100%, 50% and 200% of the prescribed dose for the rectum, the urethra, the mandible, the skin and the large vessels, respectively. The doses to the metal markers using the CTV-based dose prescription were generated and compared with the doses theoretically calculated with the Paris system. Treatment results were also investigated. The doses to 158 of the metal markers (42 patients) for the CTV were higher than the "tumoricidal dose". In 7 patients, as a result of compromised dose prescription, 9 markers received less than the tumoricidal dose. The other 12 markers (7%) were excluded from dose evaluation because they were judged to be mis-implanted. The doses to the 142 metal markers (24 patients
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Required accuracy and dose thresholds in individual monitoring
Christensen, P.; Griffith, R.V.
1994-01-01
this uncertainty factor, a value of 21% can be evaluated for the allowable maximum overall standard deviation for dose measurements at dose levels near the annual dose limits increasing to 45% for dose levels at the lower end of the dose range required to be monitored. A method is described for evaluating...... the overall standard deviation of the dosimetry system by combining random and systematic uncertainties in quadrature, and procedures are also given for determining each individual uncertainty connected to the dose measurement. In particular, attention is paid to the evaluation of the combined uncertainty due...... to energy and angular dependencies of the dosemeter. In type testing of personal dosimetry systems, the estimated overall standard deviation of the dosimetry system is the main parameter to be tested. An important characteristic of a personal dosimetry system is its capability of measuring low doses...
Verification of cell irradiation dose deposition using a radiochromic film
Tomic, N [Department of Radiation Oncology, Jewish General Hospital, McGill University, Montreal, Quebec (Canada); Gosselin, M [Department of Radiation Oncology, Jewish General Hospital, McGill University, Montreal, Quebec (Canada); Wan, Jonathan F [Radiation Oncology Department, McGill University Health Centre, Montreal, Quebec (Canada); Saragovi, Uri [Department of Pharmacology, McGill University, Montreal, Quebec (Canada); Podgorsak, E B [Medical Physics Department, McGill University Health Center, Montreal, Quebec (Canada); Evans, M [Medical Physics Department, McGill University Health Center, Montreal, Quebec (Canada); Devic, S [Medical Physics Department, McGill University Health Center, Montreal, Quebec (Canada)
2007-06-07
We describe a technique for the MTT assay that irradiates all cells at once by a combination of couch movement and a step-and-shoot irradiation technique on a linear accelerator with 6 MV and 18 MV photon beams. In two experimental setups, we obtained maximum to minimum dose ranges of 10 for the constant MU/bin (monitor units per bin) setup and 20 for the variable MU/bin technique. The irradiation technique described is dose rate independent and it can be used on any teletherapy irradiation machine. We also employed radiochromic film dosimetry to verify dose delivered in each of the wells within the dish. It is shown that for the lowest doses, relative dose variation within wells reaches a value of 6%. We also demonstrated that the radiochromic film positioned below the 96-well plate does not underestimate dose deposited within each compartment by more than 2% due to the vertical dose gradient.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
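As a generic illustration of MAXENT selection under a constraint, the distribution of maximum entropy with a prescribed mean is the exponential family p_i proportional to exp(lam * v_i), with lam fixed by a one-dimensional root find. This is a textbook sketch, not the texture model itself:

```python
import numpy as np
from scipy.optimize import brentq

def maxent_mean(values, mean_target):
    """MAXENT distribution over discrete `values` subject only to a mean
    constraint; lam is the Lagrange multiplier of that constraint."""
    v = np.asarray(values, dtype=float)
    def mean_of(lam):
        a = lam * v
        w = np.exp(a - a.max())          # shift for numerical stability
        p = w / w.sum()
        return p @ v
    lam = brentq(lambda l: mean_of(l) - mean_target, -50.0, 50.0)
    a = lam * v
    w = np.exp(a - a.max())
    return w / w.sum()
```

With only the obvious constraints (normalization, here plus a mean) MAXENT returns the flattest distribution compatible with them, which is the "maximally disordered material" of the abstract.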
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
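The linear-time solution alluded to above is usually credited to Kadane; in an imperative language it is a two-accumulator fold. A Python sketch, not the paper's datatype-generic monadic derivation:

```python
def max_segment_sum(xs):
    """Linear-time maximum segment sum (Kadane's algorithm).
    Empty segments are allowed, so the result is at least 0."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)   # best sum ending at x
        best = max(best, ending_here)
    return best
```

The calculational derivation in the paper arrives at essentially this fold by fusing the "all segments" generator with the maximum-of-sums consumer.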
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ0, ℓ1)} with μ0 … g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index; deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
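The spectral limit can be estimated numerically: luminous efficacy is 683 lm/W times the V(λ)-weighted fraction of the source power inside the bandpass. The sketch below uses a crude Gaussian stand-in for the photopic curve V(λ) (peak 555 nm, sigma about 42 nm; an assumption, not the CIE table), so it only reproduces the order of magnitude of the quoted 250-370 lm/W range:

```python
import numpy as np

def luminous_efficacy(T=5600.0, lo=400e-9, hi=700e-9, n=2000):
    """Spectral luminous efficacy (lm/W) of blackbody light truncated to
    the band [lo, hi], with a Gaussian approximation to V(lambda)."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    lam = np.linspace(lo, hi, n)
    planck = lam ** -5.0 / (np.exp(h * c / (lam * kB * T)) - 1.0)
    V = np.exp(-0.5 * ((lam - 555e-9) / 42e-9) ** 2)
    # the common wavelength step cancels in the ratio of the integrals
    return 683.0 * np.sum(V * planck) / np.sum(planck)
```

Widening the bandpass admits power the eye barely responds to, so the efficacy falls; narrowing it toward 555 nm pushes the result toward the 683 lm/W monochromatic ceiling, at the cost of color rendering.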
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
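A pairwise maximum entropy model over binary up/down states is an Ising model; for a system as small as the G7 (n = 7, 128 states) its likelihood can be maximized exactly by enumerating all states and matching the first and second moments. A generic sketch of such a fit (not the paper's procedure):

```python
import numpy as np
from itertools import product

def fit_pairwise_maxent(samples, n_iter=2000, lr=0.1):
    """Exact pairwise max-entropy (Ising) fit for small binary systems:
    p(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {-1,+1}.
    Enumerates all 2^n states, so only practical for n up to ~20."""
    n = samples.shape[1]
    states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
    m_data = samples.mean(axis=0)
    C_data = samples.T @ samples / len(samples)
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()
        m = p @ states                      # model means
        C = states.T @ (states * p[:, None])  # model correlations
        h += lr * (m_data - m)              # moment matching by ascent
        J += lr * (C_data - C)
        np.fill_diagonal(J, 0.0)
    return h, J
```

Because the log-likelihood is concave in (h, J), this plain gradient ascent converges to the unique moment-matching solution.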
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates xμ and the energy-momentum pμ in quantum theory to construct a momentum space quantum gravity geometry with a metric sμν and a curvature tensor Pλ μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, although the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers using features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Bradley, Dwight C.; O'Sullivan, Paul; Cosca, Michael A.; Motts, Holly; Horton, John D.; Taylor, Cliff D.; Beaudoin, Georges; Lee, Gregory K.; Ramezani, Jahan; Bradley, Daniel N.; Jones III, James V.; Bowring, Samuel
2015-01-01
This report is a companion to the new Geologic Map of Mauritania (Bradley and others, 2015; referred to herein as “Deliverable 51”) and the new Structural Geologic Map of Mauritania (Bradley and others, 2015a; referred to herein as “Deliverable 52”). Section 1 contains explanatory information for these two digital maps. Section 2 covers the analytical methods used in obtaining new U-Pb ages from 9 igneous rock samples, new detrital zircon ages from 40 sedimentary or metasedimentary rock samples, and new 40Ar/39Ar ages from 12 samples of metamorphic rocks and veins. Sections 3 through 6 present the new geochronological results, organized by region. In Section 7, we discuss implications of the new ages for the regional geology and discuss problematic results. Finally, in Section 8, we summarize the geology and tectonic evolution of Mauritania in narrative form, drawing on new and published information, in the context of global tectonics. The report is being released in both English and French. In both versions, we use the French-language names for formal stratigraphic units.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-02-22
The objectives of this report are: to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; to estimate the maximum concentration in a well located outside of the fill material; and to perform a sensitivity analysis of key parameters.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in a study investigating the effects of plier grip spans on total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, the results showed that the 50-mm grip span yielded significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower cutting-force-to-maximum-strength ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Dose escalation of a curcuminoid formulation
Crowell James
2006-03-01
Background: Curcumin is the major yellow pigment extracted from turmeric, a commonly used spice in India and Southeast Asia that has broad anticarcinogenic and cancer chemopreventive potential. However, few systematic studies of curcumin's pharmacology and toxicology in humans have been performed. Methods: A dose escalation study was conducted to determine the maximum tolerated dose and safety of a single dose of standardized powder extract, uniformly milled curcumin (C3 Complex™, Sabinsa Corporation). Healthy volunteers were administered escalating doses from 500 to 12,000 mg. Results: Seven of twenty-four subjects (30%) experienced only minimal toxicity that did not appear to be dose-related. No curcumin was detected in the serum of subjects administered 500, 1,000, 2,000, 4,000, 6,000 or 8,000 mg. Low levels of curcumin were detected in two subjects administered 10,000 or 12,000 mg. Conclusion: The tolerance of curcumin in high single oral doses appears to be excellent. Given that achieving systemic bioavailability of curcumin or its metabolites may not be essential for colorectal cancer chemoprevention, these findings warrant further investigation of its utility as a long-term chemopreventive agent.
Han, C; Schultheiss, T [City of Hope National Medical Center, Duarte, CA (United States)
2015-06-15
Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans with the volumetric modulated arc therapy (VMAT) technique using two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB) were used to calculate volume dose distributions with dose grid sizes ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the dose distributions calculated with different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size for both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. Residual radioactive particles from the plant must be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radiation dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
The Role of Age on Dose Limiting Toxicities (DLTs) in Phase I Dose-escalation Trials
Schwandt, A; Harris, P. J.; Hunsberger, S.; Deleporte, A.; Smith, G. L.; Vulih, D.; Anderson, B. D.; Ivy, S. P.
2016-01-01
Purpose: Elderly oncology patients are not enrolled in early-phase trials in proportion to the number of geriatric patients with cancer. There may be concern that elderly patients will not tolerate investigational agents as well as younger patients, resulting in a disproportionate number of dose-limiting toxicities (DLTs). Recent single-institution studies provide conflicting data on the relationship between age and DLT. Experimental Design: We retrospectively reviewed data on patients treated in single-agent, dose-escalation, phase I clinical trials sponsored by the Cancer Therapy Evaluation Program (CTEP) of the National Cancer Institute. Patients' dose levels were described as a percentage of the maximum tolerated dose (%MTD; the highest dose level at which <33% of patients had a DLT) or of the recommended phase II dose (RP2D). Mixed-effects logistic regression models were used to analyze relationships between the probability of a DLT and age and other explanatory variables. Results: Increasing dose, increasing age, and worsening performance status (PS) were significantly related to an increased probability of a DLT in this model (p<0.05). There was no association between dose level administered and age (p=0.57). Conclusions: This analysis of phase I dose-escalation trials, involving over 500 patients older than 70 years of age, is the largest reported. As age and dose level increased and PS worsened, the probability of a DLT increased. While increasing age was associated with the occurrence of DLT, this risk remained within accepted thresholds of risk for phase I trials. There was no evidence of age bias in enrollment of patients at low or high dose levels. PMID:25028396
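The study above models DLT probability with mixed-effects logistic regression on dose, age, and performance status. A minimal single-level sketch of that kind of model (plain logistic, no random effects; all coefficients below are illustrative placeholders, not the study's estimates — only their signs reflect the reported directions):

```python
import math

def p_dlt(pct_mtd, age, ps, b0=-4.0, b_dose=0.02, b_age=0.01, b_ps=0.5):
    """Logistic probability of a dose-limiting toxicity.
    Coefficients are hypothetical; positive b_dose, b_age, b_ps encode the
    finding that higher dose, higher age, and worse PS raise the probability."""
    z = b0 + b_dose * pct_mtd + b_age * age + b_ps * ps
    return 1.0 / (1.0 + math.exp(-z))

low = p_dlt(pct_mtd=50, age=55, ps=0)    # younger patient at half the MTD
high = p_dlt(pct_mtd=100, age=75, ps=1)  # older patient at the MTD, worse PS
```

With these illustrative coefficients, `high` exceeds `low`, mirroring the reported direction of the associations.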
STUDY ON LUNG DOSE FOR DIFFERENT ANIMALS BY INHALATION OF SHORT-LIVED RADON DAUGHTERS
李素云; 张升慧; et al.
1994-01-01
The dose distribution in the lung is inhomogeneous. The dose to the basal cell layer of the trachea and main bronchi is much higher than the dose to the total lung, both for rabbits at different ages and for different animals. A maximum in the dose to lung tissue is observed for rabbits at ages of 20-40 d. The dose decreases with increasing body weight, and the relationship between dose and body weight can be described by a power function. The dose to the total lung increases exponentially with the minute breathing volume per unit of lung weight.
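A power-function dose-weight relationship of the kind described above is typically fitted by linear regression in log-log space. A minimal sketch with synthetic illustrative numbers (not the paper's data):

```python
import numpy as np

def fit_power_law(weight, dose):
    """Fit dose = a * weight**b by least squares on log-transformed values."""
    b, log_a = np.polyfit(np.log(weight), np.log(dose), 1)
    return np.exp(log_a), b

# Hypothetical data: dose falling with body weight as an exact power law.
weight = np.array([0.5, 1.0, 2.0, 4.0])  # kg
dose = 2.0 * weight ** -0.7              # arbitrary dose units
a, b = fit_power_law(weight, dose)       # recovers a ~ 2.0, b ~ -0.7
```

A negative fitted exponent `b` expresses the reported decrease of dose with increasing body weight.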
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
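MALCOM itself maps sequences into a continuous space; as a much simpler stand-in, a first-order Markov model illustrates the general idea of scoring the likelihood of categorical sequences for anomaly detection (the procedure codes below are hypothetical, not real claims data):

```python
import math
from collections import Counter, defaultdict

def train_markov(sequences, alpha=1.0):
    """Count bigram transitions over categorical sequences, with add-alpha smoothing."""
    counts = defaultdict(Counter)
    vocab = set()
    for seq in sequences:
        vocab.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts, vocab, alpha

def log_likelihood(seq, model):
    """Smoothed log-probability of a sequence under the trained transition model."""
    counts, vocab, alpha = model
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        num = counts[a][b] + alpha
        den = sum(counts[a].values()) + alpha * len(vocab)
        ll += math.log(num / den)
    return ll

# Hypothetical procedure-code histories: typical sequences vs an anomalous one.
train = [["exam", "xray", "cast"], ["exam", "xray", "cast"], ["exam", "lab", "rx"]]
model = train_markov(train)
typical = log_likelihood(["exam", "xray", "cast"], model)
odd = log_likelihood(["cast", "exam", "cast"], model)  # lower log-likelihood
```

Sequences with unusually low log-likelihood under the model are flagged for review, which is the same screening logic the abstract describes.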
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often disposes of the need of full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
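The core of such a fit is maximizing the Poisson likelihood of the observed counts given a model spectrum. A generic sketch (not CORA's fixed-point algorithm) that fits a line amplitude and flat background for an assumed known Gaussian line profile, using scipy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(13.3, 13.7, 40)                    # wavelength grid (Angstrom)
profile = np.exp(-0.5 * ((x - 13.5) / 0.03) ** 2)  # assumed known line shape

def expected_counts(amp, bkg):
    """Model spectrum: flat background plus a scaled line profile."""
    return bkg + amp * profile

# Simulated low-count spectrum: true amplitude 40, background 3 counts/bin.
counts = rng.poisson(expected_counts(40.0, 3.0))

def neg_log_like(p):
    amp, bkg = p
    if amp < 0 or bkg <= 0:
        return np.inf
    mu = expected_counts(amp, bkg)
    # Poisson log-likelihood, dropping the ln(n!) term independent of p
    return float(np.sum(mu - counts * np.log(mu)))

fit = minimize(neg_log_like, x0=[10.0, 1.0], method="Nelder-Mead")
amp_hat, bkg_hat = fit.x
```

Because the Poisson assumption is built into the objective, the fit remains well behaved at the low count numbers for which CORA was designed, where Gaussian chi-square fitting breaks down.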
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Suresh Rana
2014-12-01
Purpose: The Acuros XB (AXB) dose calculation algorithm is available for external beam photon dose calculations in the Eclipse treatment planning system (TPS). The AXB can report the absorbed dose in two modes: dose-to-water (Dw) and dose-to-medium (Dm). The main purpose of this study was to compare the dosimetric results of AXB_Dm with those of AXB_Dw on real patient treatment plans. Methods: Four groups of patients (prostate cancer, stereotactic body radiation therapy (SBRT) lung cancer, left breast cancer, and right breast cancer) were selected for this study, and each group consisted of 5 cases. The treatment plans of all cases were generated in the Eclipse TPS. For each case, treatment plans were computed using AXB_Dw and AXB_Dm for identical beam arrangements. Dosimetric evaluation was done by comparing various dosimetric parameters in the AXB_Dw plans with those of the AXB_Dm plans for the corresponding patient case. Results: For the prostate cancer group, the mean planning target volume (PTV) dose in the AXB_Dw plans was higher by up to 1.0%, but the mean PTV dose was within ±0.3% for the SBRT lung cancer group. The analysis of organ-at-risk (OAR) results in the prostate cancer group showed that AXB_Dw plans consistently produced higher values for the bladder and femoral heads but not for the rectum. In the case of SBRT lung cancer, a clear trend was seen for the heart mean dose and spinal cord maximum dose, with AXB_Dw plans producing higher values than the AXB_Dm plans. However, the difference in the lung doses between the AXB_Dm and AXB_Dw plans did not always follow a clear trend, with differences ranging from -1.4% to 2.9%. For both the left and right breast cancer groups, the AXB_Dm plans produced a higher maximum dose to the PTV for all cases. The evaluation of the maximum dose to the skin showed higher values in the AXB_Dm plans for all 5 left breast cancer cases, whereas only 2 cases had a higher maximum skin dose in the AXB_Dm plans for the right breast cancer group.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Controllable dose; Dosis controlable
Alvarez R, J.T.; Anaya M, R.A. [ININ, A.P. 18-1027, 11801 Mexico D.F. (Mexico)]. E-mail: jtar@nuclear.inin.mx
2004-07-01
With the aim of resolving the controversy about the linear no-threshold hypothesis, which underlies the dose-limitation systems of ICRP Publications 26 and 60, at the end of the last decade R. Clarke, president of the ICRP, proposed the concept of Controllable Dose: the dose, or sum of doses, that an individual receives from a particular source and that can reasonably be controlled by any means. This concept proposes a change in the philosophy of radiological protection from a societal focus to an individual one. This work presents an overview of the foundations, advantages, and disadvantages that this proposal has raised in the international radiological protection community, with the purpose of familiarizing the Mexican radiological protection community with these new concepts. (Author)
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Neutron dose in and out of 18 MV photon fields.
Ezzati, A O; Studenski, M T
2017-04-01
In radiation therapy, neutron contamination is an undesirable side effect of using high-energy photons to treat patients. Neutron contamination requires adjustments to the shielding requirements of the linear accelerator vault and contributes to the risk of secondary malignancies in patients by delivering dose outside of the primary treatment field. Using MCNPX, an established Monte Carlo code, manufacturer blueprints, and the most up-to-date ICRP neutron dose conversion factors, the neutron spectra, neutron/photon dose ratio, and the neutron capture gamma-ray dose were calculated at different depths and off-axis distances in a tissue-equivalent phantom. Results demonstrated that the neutron spectra and dose are dependent on field size, depth in the phantom, and off-axis distance. Simulations showed that, because of the low neutron absorption cross section of the linear accelerator head materials, the contribution to overall patient dose from neutrons can be up to 1000 times the photon dose outside of the treatment field and is also dependent on field size and depth. Beyond 45 cm off-axis, the dependence of the neutron dose on field size is minimal. Neutron capture gamma-ray dose is also field-size dependent and is at a maximum at a depth of about 7 cm. It is important to remember that when treating with high-energy photons, the dose from contamination neutrons must be considered, as it is much greater than the photon dose.
Electron dose rate and photon contamination in electron arc therapy
Pla, M.; Podgorsak, E.B.; Pla, C. (McGill Univ., Montreal, Quebec (Canada))
1989-09-01
The electron dose rate at the depth of dose maximum dmax and the photon contamination are discussed as a function of several parameters of the rotational electron beam. A pseudoarc technique with an angular increment of 10 degrees and a constant number of monitor units per each stationary electron field was used in our experiments. The electron dose rate is defined as the electron dose at a given point in phantom divided by the number of monitor units given for any one stationary electron beam. For a given depth of isocenter di the electron dose rates at dmax are linearly dependent on the nominal field width w, while for a given w the dose rates are inversely proportional to di. The dose rates for rotational electron beams with different di are related through the inverse square law provided that the two beams have (di,w) combinations which give the same characteristic angle beta. The photon dose at the isocenter depends on the arc angle alpha, field width w, and isocenter depth di. For constant w and di the photon dose at isocenter is proportional to alpha, for constant alpha and w it is proportional to di, and for constant alpha and di it is inversely proportional to w. The w and di dependence implies that for the same alpha the photon dose at the isocenter is inversely proportional to the electron dose rate at dmax.
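The proportionalities reported above can be collected into simple scaling formulas; the calibration constants k_e and k_p below are hypothetical placeholders, not values from the paper:

```python
def electron_dose_rate(w, di, k_e=1.0):
    """Electron dose rate at d_max: proportional to field width w and
    inversely proportional to isocenter depth di (k_e is a hypothetical
    calibration constant)."""
    return k_e * w / di

def photon_dose(alpha, w, di, k_p=1.0):
    """Photon dose at isocenter: proportional to arc angle alpha and
    isocenter depth di, inversely proportional to field width w."""
    return k_p * alpha * di / w

# Doubling the field width doubles the electron dose rate at d_max
# but halves the photon contamination dose at the isocenter.
r1 = electron_dose_rate(5.0, 15.0)
r2 = electron_dose_rate(10.0, 15.0)
p1 = photon_dose(180.0, 5.0, 15.0)
p2 = photon_dose(180.0, 10.0, 15.0)
```

These two opposing trends in field width are exactly the trade-off the abstract describes between electron output and photon contamination.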
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
C.M.L. Herpen, C.M.L. (Carla); F.A.L.M. Eskens (Ferry); M.J.A. de Jonge (Maja); I. Desar; L. Hooftman (Leon); E. Bone (Elisabeth); J.N.H. Timmerbonte (Johanna); J. Verweij (Jaap)
2010-01-01
Background: This Phase Ib dose-escalating study investigated safety, maximum tolerated dose (MTD), dose-limiting toxicity (DLT), pharmacokinetics (PK) and clinical antitumour activity of tosedostat (CHR-2797), an orally bioavailable aminopeptidase inhibitor, in combination with paclitaxel.
Mente, Scot; Doran, Angela; Wager, Travis T
2012-06-14
The objective of this work was to establish that unbound maximum concentrations may be reasonably predicted from a combination of computed molecular properties assuming subcutaneous (SQ) dosing. Additionally, we show that the maximum unbound plasma and brain concentrations may be projected from a mixture of in vitro absorption, distribution, metabolism, excretion experimental parameters in combination with computed properties (volume of distribution, fraction unbound in microsomes). Finally, we demonstrate the utility of the underlying equations by showing that the maximum total plasma concentrations can be projected from the experimental parameters for a set of compounds with data collected from clinical research.
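As a rough illustration of the kind of projection described, a one-compartment bolus approximation gives total Cmax ≈ dose/Vd and unbound Cmax = fu × Cmax. This is a simplification under stated assumptions; the paper's method combines additional in vitro ADME and computed parameters:

```python
def projected_cmax_unbound(dose_mg_per_kg, vd_l_per_kg, fu_plasma):
    """One-compartment bolus sketch (a simplification, not the paper's model):
    total Cmax ~ dose / Vd, and unbound Cmax = fraction unbound * total Cmax."""
    cmax_total = dose_mg_per_kg / vd_l_per_kg  # mg/L
    return fu_plasma * cmax_total

# Hypothetical compound: 1 mg/kg SQ dose, Vd = 2 L/kg, 10% unbound in plasma.
cmax_u = projected_cmax_unbound(1.0, 2.0, 0.10)  # 0.05 mg/L unbound
```

Substituting a brain fraction unbound for `fu_plasma` gives the analogous unbound brain projection the abstract mentions.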
The ENVISION Collaboration
2014-01-01
Deliverable 6.2 - Software: upgraded MC simulation tools capable of simulating a complete in-beam ET experiment, from the beam to the detected events. Report with the description of one (or few) reference clinical case(s), including the complete patient model and beam characteristics
Mourik, R.M.; Backhaus, J.; Feenstra, C.F.J.; Breukers, S. [ECN Policy Studies, Petten (Netherlands); Heiskanen, E.; Rask, M.; Saastamoinen, M.; Johnson, M. [National Consumer Research Centre NCRC, Helsinki (Finland); Anttonen, M. [Helsinki School of Economics, Helsinki (Finland); Barabanova, Y.; Pariag, J. [Central European University CEU, Budapest (Hungary); Bauknecht, D.; Bern, M.R.; Brohmann, B.; Buerger, V. [Institute for Applied Ecology OEKO, Freiburg (Germany); Hodson, M.; Liang, V.; Marvin, S. [The SURF Centre, University of Salford, Manchester (United Kingdom); Jalas, M.; Rinne, S.; Salminnen, J. [Enespa Ltd. (Finland); Kallaste, T. [Stockholm Environment Institute SEI, Tallinn Centre SEI-T, Tallinn (Estonia); Kamenders, A. [Ekodoma Ltd, Riga (Latvia); Malamatenios, C.; Papandreou, V. [Centre for Renewable Energy Sources CRES, Pikermi Attiki (Greece); Maier, P.; Meinel, H. [Verbraucherzentrale Nordrhein-Westfalen e.V. VZ NRW, Duesseldorf (Germany); Robinson, S. [Manchester Knowledge Capital MKC, Manchester Enterprises ME, Manchester (United Kingdom); Valuntiene, I. [Cowi Baltic, Vilnius (Lithuania); Vadovics, E. [GreenDependent Sustainable Solutions Association, Magyarorszag (Hungary)
2009-10-15
Changing Behaviour is a project that aims to support change in energy use and energy services by applying social research on technological change to practical use. The focus is on the interaction between energy experts and energy users: how can these different groups learn to understand each other better? Demand-side programmes have exhibited a range of more and less successful results, but the reasons for success or failure are not fully understood. Deliverable 4 presents a meta-analysis of 27 case studies from various EU countries. It makes an in-depth analysis of causes for success and failure, with a special focus on the role of context, timing and actors.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
A real time dose monitoring and dose reconstruction tool for patient specific VMAT QA and delivery
Tyagi, Neelam; Yang Kai; Gersten, David; Yan Di [Department of Radiation Oncology, William Beaumont Hospital, 3601 West Thirteen Mile Road, Royal Oak, Michigan 48073 (United States)
2012-12-15
Purpose: To develop a real time dose monitoring and dose reconstruction tool to identify and quantify sources of errors during patient specific volumetric modulated arc therapy (VMAT) delivery and quality assurance. Methods: The authors developed a VMAT delivery monitor tool, called linac data monitor, that connects to the linac in clinical mode and records, displays, and compares real time machine parameters with the planned parameters. A new measure, called integral error, keeps a running total of leaf overshoot and undershoot errors in each leaf pair, multiplied by leaf width, and the amount of time during which the error exists in monitor unit delivery. Another tool reconstructs a Pinnacle³™-format delivered plan based on the saved machine logfile and recalculates the actual delivered dose in the patient anatomy. Delivery characteristics of various standard fractionation and stereotactic body radiation therapy (SBRT) VMAT plans delivered on Elekta Axesse and Synergy linacs were quantified. Results: The MLC and gantry errors for all the treatment sites were 0.00 ± 0.59 mm and 0.05 ± 0.31°, indicating a good MLC gain calibration. Standard fractionation plans had a larger gantry error than SBRT plans due to frequent dose rate changes. On average, the MLC errors were negligible, but larger errors of up to 6 mm and 2.5° were seen when the dose rate varied frequently. Large gantry errors occurred during the acceleration and deceleration process and correlated well with MLC errors (r = 0.858, p = 0.0004). PTV mean, minimum, and maximum dose discrepancies were 0.87 ± 0.21%, 0.99 ± 0.59%, and 1.18 ± 0.52%, respectively. The organs at risk (OAR) doses were within 2.5%, except for some OARs that showed up to 5.6% discrepancy in maximum dose. Real time displayed normalized total positive integral error (normalized to the total monitor units) correlated linearly with MLC (r = 0.9279, p < 0.001) and gantry errors (r = 0.742, p = 0.005). There
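The integral-error measure described in this abstract can be sketched as a running accumulation over a delivery log. This is a simplified reading of the stated definition (position error × leaf width × monitor-unit interval); the function and the sample values are hypothetical, not the authors' implementation.

```python
# Sketch of an "integral error" accumulator for one MLC leaf:
# |planned - actual| position error, multiplied by leaf width and by the
# monitor units delivered while the error persisted.
def integral_error(log, leaf_width_mm):
    """log: iterable of (planned_pos_mm, actual_pos_mm, delta_mu) samples
    for one leaf, taken from a machine logfile."""
    total = 0.0
    for planned, actual, delta_mu in log:
        total += abs(actual - planned) * leaf_width_mm * delta_mu
    return total

# Two hypothetical log samples: 0.5 mm overshoot over 1 MU, then
# 0.2 mm undershoot over 2 MU, for a 5 mm wide leaf.
samples = [(10.0, 10.5, 1.0), (12.0, 11.8, 2.0)]
ie = integral_error(samples, leaf_width_mm=5.0)  # 0.5*5*1 + 0.2*5*2 = 4.5
```

Normalizing such a total by the plan's monitor units gives a per-delivery figure comparable across plans, which matches how the abstract reports it.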
Schade, Wolfgang; Jochem, Eberhard; Barker, Terry (and others)
2009-07-31
ADAM research identifies and appraises existing and new policy options that can contribute to different combinations of adaptation and mitigation strategies. These options address the demands a changing climate will place on protecting citizens and valuable ecosystems - i.e., adaptation - as well as addressing the necessity to restrain/control humankind's perturbation to global climate to a desirable level - i.e., mitigation. The work package Mitigation 1 (M1) has the core objective to simulate mitigation options and their related costs for Europe until 2050 and 2100 respectively. The focus of this deliverable is on the period 2005 to 2050. The long-term period until 2100 is covered in the previous deliverable D2, applying the POLES model for this time horizon. The analysis constitutes basically a techno-economic analysis. Depending on the sector analyzed it is either directly combined with a policy analysis (e.g. in the transport sector, renewables sector) or the policy analysis is performed qualitatively as a subsequent and independent step after the techno-economic analysis is completed (e.g. in the residential and service sectors). The book includes the following chapters: scenarios and macroeconomic assumptions; methodological issues analyzing mitigation options; the integrated global energy model POLES and its projections for the reference and 2 deg C scenarios; forest and basic materials sector; residential sector in Europe; the service (tertiary) and the primary sectors in Europe; basic products and other manufacturing industry sectors; transport sectors in Europe; renewable sector in Europe; conversion sector in Europe; syntheses and sectoral analysis in Europe; macroeconomic impacts of climate policy in the EU; the effects of the financial crisis on baseline simulations with implications for climate policy modeling: an analysis using the global model E3MG 2008-2012; conclusions and policy recommendations.
The effect of dose heterogeneity on radiation risk in medical imaging.
Samei, Ehsan; Li, Xiang; Chen, Baiyu; Reiman, Robert
2013-06-01
The current estimations of risk associated with medical imaging procedures rely on assessing the organ dose via direct measurements or simulation. The dose to each organ is assumed to be homogeneous. To take into account the differences in radiation sensitivities, the mean organ doses are weighted by corresponding tissue-weighting coefficients provided by the ICRP to calculate the effective dose, which has been used as a surrogate of radiation risk. However, those coefficients were derived under the assumption of a homogeneous dose distribution within each organ. That assumption is significantly violated in most medical-imaging procedures. In helical chest CT, for example, superficial organs (e.g. breasts) demonstrate a heterogeneous dose distribution, whereas organs on the peripheries of the irradiation field (e.g. liver) might possess a discontinuous dose profile. Projection radiography and mammography involve an even higher level of organ dose heterogeneity, spanning up to two orders of magnitude. As such, mean dose or point measured dose values do not reflect the maximum energy deposited per unit volume of the organ. In this paper, the magnitude of the dose heterogeneity in both CT and projection X-ray imaging was reported, using Monte Carlo methods. The lung dose demonstrated factors of 1.7 and 2.2 difference between the mean and maximum dose for chest CT and radiography, respectively. The corresponding values for the liver were 1.9 and 3.5. For mammography and breast tomosynthesis, the difference between mean glandular dose and maximum glandular dose was 3.1. Risk models based on the mean dose were found to provide a reasonable reflection of cancer risk. However, for leukaemia, they were found to significantly under-represent the risk when the organ dose distribution is heterogeneous. A systematic study is needed to develop a risk model for heterogeneous dose distributions.
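The tissue-weighted sum the abstract refers to is E = Σ_T w_T · H_T. A minimal sketch over three organs follows; the weights are the ICRP 103 values for those organs, but the organ doses themselves are made-up illustration numbers, and a real calculation sums over all ICRP tissues.

```python
# Effective dose as the ICRP tissue-weighted sum of mean organ equivalent
# doses: E = sum over tissues T of w_T * H_T.  Weights below are the
# ICRP 103 values for these three organs; the doses are hypothetical.
ICRP103_WEIGHTS = {"lung": 0.12, "breast": 0.12, "liver": 0.04}

def partial_effective_dose(mean_organ_dose_msv):
    return sum(ICRP103_WEIGHTS[organ] * h
               for organ, h in mean_organ_dose_msv.items())

# Hypothetical mean organ doses (mSv) from a chest exam; note the formula
# uses MEAN doses, which is exactly the homogeneity assumption at issue.
E = partial_effective_dose({"lung": 10.0, "breast": 8.0, "liver": 4.0})
```

The paper's point is visible here: if the true lung maximum were 2.2× the mean, the weighted-mean formula would not register that local concentration of energy.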
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
USER
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
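Under i.i.d. Gaussian noise, maximum likelihood polynomial regression reduces to least squares on a polynomial design matrix. The sketch below shows that generic reduction with NumPy; it is not the paper's speech-adaptation algorithm, and the data are synthetic.

```python
import numpy as np

# Maximum likelihood polynomial fit under Gaussian noise == least squares.
# Synthetic data from y = 1 + 2x + 3x^2 plus small noise.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2 + 0.01 * rng.standard_normal(x.size)

# Coefficients returned low-to-high: [c0, c1, c2]
coeffs = np.polynomial.polynomial.polyfit(x, y, deg=2)
```

Replacing the linear MLLR transform with such a polynomial map is what makes the adaptation nonlinear while keeping the estimation a closed-form ML problem.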
Appel-Dingemanse, S; Hirschberg, Y; Osborne, S; Pommier, F; McLeod, J
2001-03-01
To evaluate the steady-state pharmacokinetics (PK) and dose proportionality of the selective 5-HT4 receptor partial agonist tegaserod (HTF 919) in healthy subjects. Eighteen subjects were given 2, 6, or 12-mg doses of tegaserod twice daily (b.i.d.) for 5 days, with PK and safety assessments made during the 12 h or 24 h following the first administration, and 12 h after the final dose. Tegaserod was rapidly absorbed [time to reach the measured maximum plasma concentration after multiple administrations (tmax,ss) of 1 h]. Steady-state PK were consistent with single-dose PK characteristics, indicating that there was no accumulation of tegaserod in plasma based on systemic exposure. Mean measured maximum plasma concentration after multiple administrations (Cmax,ss) and area under the plasma concentration-time curve over one dosing interval (tau, 0-12 h after drug administration, AUCtau) ranged between 0.7 +/- 0.3 ng/ml and 5.6 +/- 2.9 ng/ml, and between 2.4 +/- 1.3 h.ng/ml and 20.4 +/- 14.0 h.ng/ml, respectively, indicating dose-proportional PK of tegaserod in the range 2-12 mg b.i.d. Tegaserod was safe and well tolerated. No serious adverse events were reported. Tegaserod exhibits no accumulation and dose-proportional PK after multiple doses.
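Dose proportionality is commonly checked with a power model, Cmax = a · Dose^b, where b ≈ 1 indicates proportional exposure. Using only the endpoint mean values quoted in the abstract (a back-of-envelope check, not the study's statistical analysis):

```python
import math

# Power-model exponent from the abstract's endpoint means:
# 2 mg -> 0.7 ng/ml and 12 mg -> 5.6 ng/ml for Cmax,ss.
doses = (2.0, 12.0)   # mg b.i.d.
cmax = (0.7, 5.6)     # ng/ml

b = math.log(cmax[1] / cmax[0]) / math.log(doses[1] / doses[0])
# b close to 1 is consistent with roughly dose-proportional PK
```

The exponent comes out near 1.16, consistent with the abstract's conclusion of dose-proportional kinetics over the 2-12 mg range.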
Dose painting based on tumor uptake of Cu-ATSM and FDG
Clausen, Malene Martini; Hansen, Anders Elias; Lundemann, Michael;
2014-01-01
... Sub-volumes for dose escalation were defined by a threshold-based method for both tracers, and five dose escalation levels were defined in each sub-volume. Volumetric modulated arc therapy plans were optimized based on the dose escalation regions of each scan, for a total of three dose plans for each dog. The prescription dose ... for the GTV was 45 Gy (100%) and it was linearly escalated to a maximum of 150%. The correlations between dose painting plans were analyzed by means of dose distribution density maps and quality volume histograms (QVH). Correlation between high-dose regions was investigated with Dice correlation coefficients ... definitions based on FDG and 64Cu-ATSM 3 h and 24 h uptake in canine tumors had different localization of the regional dose escalation levels. This indicates that 64Cu-ATSM at two different time points and FDG provide different biological information that has to be taken into account when using the dose painting ...
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
PABLM; accumulated environment radiation dose. [UNIVAC1100; FORTRAN
Fowler, T.B.; Tobias, M.L.; Fox, J.N.; Lawler, B.E.; Koppel, J.U.; Triplett, J.R.; Lynn, L.L.; Waldman, L.A.; Goldberg, I.; Greebler, P.; Kelley, M.D.; Davis, R.A.; Keck, C.E.; Redfield, J.A.; Murphy; Soldat, J.K.
PABLM calculates internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides after the releases have ended. Radioactive decay is considered during the release, after deposition, and during holdup of food after harvest. The radiation dose models consider exposure to radionuclides deposited on the ground or crops from contaminated air or irrigation water, radionuclides in contaminated drinking water, aquatic foods raised in contaminated water, and radionuclides in bodies of water and sediments where people might fish, boat, or swim. For vegetation, the radiation dose model considers both direct deposition and uptake through roots. Doses may be calculated for either a maximum-exposed individual or for a population group. The program is designed to calculate accumulated radiation doses from the chronic ingestion of food products that contain radionuclides and doses from the external exposure to radionuclides in the environment. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. UNIVAC1100; FORTRAN; EXEC8; 80,000 words of memory are required to execute the PABLM program.
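The first-year versus integrated-dose distinction can be sketched with a toy decay model: annual dose from a residual radionuclide falls off exponentially with its half-life, and the integrated dose is the sum over years. This is an illustration of the accounting idea only; PABLM's environmental transport and dose models are far more detailed.

```python
import math

# Toy accumulated-dose model: annual dose from a residual radionuclide,
# decaying with half-life T_half, summed over N years of chronic exposure.
def accumulated_dose(initial_annual_dose_msv, half_life_y, years):
    lam = math.log(2) / half_life_y              # decay constant (1/y)
    return sum(initial_annual_dose_msv * math.exp(-lam * t)
               for t in range(years))

# Hypothetical numbers: 1 mSv in the first year, 30 y half-life (Cs-137-like)
first_year = accumulated_dose(1.0, 30.0, 1)      # first-year dose
total_50y = accumulated_dose(1.0, 30.0, 50)      # 50-year integrated dose
```

With a 30-year half-life the 50-year integral is roughly 30 times the first-year dose, which is why long-lived nuclides dominate accumulated-dose assessments.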
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Maximum amount of TRA. 617.14 Section 617.14... FOR WORKERS UNDER THE TRADE ACT OF 1974 Trade Readjustment Allowances (TRA) § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Determination of maximum test speed... Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Rahola, T; Falk, R; Isaksson, M; Skuterud, L
2002-01-01
There is a definite need for training in dose calculation. Our first course was successful and was followed by a second; both courses were fully booked. An example of new software tools for bioassay analysis and internal dose assessment, the Integrated Modules for Bioassay Analysis (IMBA), was demonstrated at the second course. This suite of quality-assured code modules has been adopted in the UK as the standard for regulatory assessment purposes. The intercomparison measurements are an important part of the quality assurance work. In what is known as the 'outside workers' directive it is stated that internal dose measurements shall be included in the European Union's supervision system for radiation protection. The emergency preparedness regarding internal contamination was much improved by the training with, and calibration of, handheld instruments from participants' laboratories. More improvement will be gained with the handbook giving practical instructions on what to do in case of e...
From total empiricism to a rational design of metronomic chemotherapy phase I dosing trials.
Lam, Thomas; Hetherington, John W; Greenman, John; Maraveyas, Anthony
2006-02-01
'Metronomic chemotherapy' represents a novel anti-angiogenic strategy whereby low-dose chemotherapy is utilized in a continuous fashion in order to target tumor endothelium. There are many potential advantages of this strategy and clinical trials are already underway. However, although the scheduling of metronomic chemotherapy is relatively unequivocal, metronomic dosing principles are at present poorly defined. Arbitrarily, 10-33% of the maximum tolerated dose comprises 'the dose range'. We argue that this is too empirical and propose a set of phase I metronomic chemotherapy dosing strategies based on a principled approach which may help to reduce the problem of empiricism in dosing for metronomic chemotherapy trials.
WAGGONER, L.O.
2000-05-16
As radiation safety specialists, one of the things we are required to do is evaluate tools, equipment, materials and work practices and decide whether the use of these products or work practices will reduce radiation dose or risk to the environment. There is a tendency for many workers that work with radioactive material to accomplish radiological work the same way they have always done it rather than look for new technology or change their work practices. New technology is being developed all the time that can make radiological work easier and result in less radiation dose to the worker or reduce the possibility that contamination will be spread to the environment. As we discuss the various tools and techniques that reduce radiation dose, keep in mind that the radiological controls should be reasonable. We can not always get the dose to zero, so we must try to accomplish the work efficiently and cost-effectively. There are times we may have to accept there is only so much you can do. The goal is to do the smart things that protect the worker but do not hinder him while the task is being accomplished. In addition, we should not demand that large amounts of money be spent for equipment that has marginal value in order to save a few millirem. We have broken the handout into sections that should simplify the presentation. Time, distance, shielding, and source reduction are methods used to reduce dose and are covered in Part I on work execution. We then look at operational considerations, radiological design parameters, and discuss the characteristics of personnel who deal with ALARA. This handout should give you an overview of what it takes to have an effective dose reduction program.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
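The isomorphism check that MAST-style algorithms use as a subroutine can be sketched with the classic canonical-form (AHU) encoding for rooted trees: two rooted trees are isomorphic iff their canonical strings match. This generic sketch is not the paper's linear-time algorithm, and the tree representation (dict of node → children) is an assumption for illustration.

```python
# AHU-style canonical form for rooted trees: encode each subtree as a
# sorted concatenation of its children's encodings in parentheses.
def canon(tree, node):
    children = tree.get(node, [])
    return "(" + "".join(sorted(canon(tree, c) for c in children)) + ")"

def isomorphic(t1, r1, t2, r2):
    return canon(t1, r1) == canon(t2, r2)

# Two trees with different labels but the same shape:
# a root with one leaf child and one child that has two leaves.
a = {"r": ["x", "y"], "x": ["x1", "x2"]}
b = {"R": ["p", "q"], "q": ["q1", "q2"]}
same = isomorphic(a, "r", b, "R")
```

Because labels are ignored here, a MAST algorithm would instead compare trees restricted to a candidate taxon subset, where leaf labels must also agree.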
National Programme of Immunization (NPI), measles remains a disturbing cause ... or as a supplement is expected to offer a second opportunity to children who ... available in 1963, the world welcomed it with joy .... one dose of vaccine were not always protected from .... begins a long story Starting now is still early enough.
Gh Bagheri
2011-09-01
Determination of the eye absorbed dose during head & neck radiotherapy is essential to estimate the risk of cataract. Dose measurements were made in 20 head & neck cancer patients undergoing 60Co radiotherapy using LiF (MCP) thermoluminescent dosimeters. Head & neck cancer radiotherapy was delivered by fields using SAD & SSD techniques. For each patient, 3 TLD chips were placed on each eye. The head & neck dose was about 700-6000 cGy in 8-28 equal fractions. The eye dose was estimated to range from 3.49 to 639.1 mGy, with a mean maximum dose of 98.114 mGy, which is about 3% of the head & neck dose. The maximum eye dose was observed for distances of about 3 cm from the edge of the field to the eye.
Calculation of the dose caused by internal radiation
NONE
2000-07-01
For the purposes of monitoring radiation exposure it is necessary to determine or to estimate the dose caused by both external and internal radiation. When comparing the value of exposure with the dose limits, account must be taken of the total dose incurred from different sources. This guide explains how to calculate the committed effective dose caused by internal radiation and gives the conversion factors required for the calculation. Application of the maximum values for radiation exposure is dealt with in ST Guide 7.2, which also sets out the definitions of the quantities and concepts most commonly used in the monitoring of radiation exposure. The monitoring of exposure and recording of doses are dealt with in ST Guides 7.1 and 7.4.
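The conversion-factor calculation the guide describes has the form E(50) = Σ_j I_j · e_j, where I_j is the intake of nuclide j in Bq and e_j its committed-dose coefficient in Sv/Bq. A minimal sketch follows; the Cs-137 ingestion coefficient is the ICRP 72 adult value, while the intake itself is a made-up example.

```python
# Committed effective dose from radionuclide intakes:
# E(50) = sum over nuclides of intake (Bq) * dose coefficient (Sv/Bq).
# The Cs-137 ingestion coefficient is the ICRP 72 adult value.
DOSE_COEFF_SV_PER_BQ = {"Cs-137_ingestion": 1.3e-8}

def committed_effective_dose_sv(intakes_bq):
    return sum(bq * DOSE_COEFF_SV_PER_BQ[nuclide]
               for nuclide, bq in intakes_bq.items())

# Hypothetical intake: 100 kBq of Cs-137 ingested
E50 = committed_effective_dose_sv({"Cs-137_ingestion": 1.0e5})  # 1.3e-3 Sv
```

The resulting committed dose is then added to the external dose before comparison against the dose limits, as the guide requires.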
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
SU-E-T-169: Characterization of Pacemaker/ICD Dose in SAVI HDR Brachytherapy
Kalavagunta, C; Lasio, G; Yi, B; Zhou, J; Lin, M [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)
2015-06-15
Purpose: It is important to estimate the dose to a pacemaker (PM)/Implantable Cardioverter Defibrillator (ICD) before undertaking Accelerated Partial Breast Treatment using High Dose Rate (HDR) brachytherapy. Kim et al. have reported HDR PM/ICD dose using a single-source balloon applicator. To the authors' knowledge, there has so far been no published PM/ICD dosimetry literature for the Strut Adjusted Volume Implant (SAVI, Cianna Medical, Aliso Viejo, CA). This study aims to fill this gap by generating a dose look-up table (LUT) to predict the maximum dose to the PM/ICD in SAVI HDR brachytherapy. Methods: CT scans for 3D dosimetric planning were acquired for four SAVI applicators (6−1-mini, 6−1, 8−1 and 10−1) expanded to their maximum diameter in air. The CT datasets were imported into the Elekta Oncentra TPS for planning and each applicator was digitized in a multiplanar reconstruction window. A dose of 340 cGy was prescribed to the surface of a 1 cm expansion of the SAVI applicator cavity. Cartesian coordinates of the digitized applicator were determined in the treatment plan, leading to the generation of a dose distribution and a corresponding distance-dose prediction look-up table (LUT) for distances from 2 to 15 cm (6−1-mini) and 2 to 20 cm (10−1). The deviation between the LUT doses and the dose to the cardiac device in a clinical case was evaluated. Results: The distance-dose look-up tables were compared to a clinical SAVI plan and the discrepancy between the maximum dose predicted by the LUT and the clinical plan was found to be in the range (−0.44%, 0.74%) of the prescription dose. Conclusion: The distance-dose look-up tables for SAVI applicators can be used to estimate the maximum dose to the ICD/PM, with potential usefulness for quick assessment of the dose to the cardiac device prior to applicator placement.
Gamma Knife radiosurgery with CT image-based dose calculation.
Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Niranjan, Ajay; Kondziolka, Douglas; Flickinger, John; Lunsford, L Dade; Huq, M Saiful
2015-11-08
The Leksell GammaPlan software version 10 introduces a CT image-based segmentation tool for automatic skull definition and a convolution dose calculation algorithm for tissue inhomogeneity correction. The purpose of this work was to evaluate the impact of these new approaches on routine clinical Gamma Knife treatment planning. Sixty-five patients who underwent CT image-guided Gamma Knife radiosurgeries at the University of Pittsburgh Medical Center in recent years were retrospectively investigated. The diagnoses for these cases include trigeminal neuralgia, meningioma, acoustic neuroma, AVM, glioma, and benign and metastatic brain tumors. Dose calculations were performed for each patient with the same dose prescriptions and the same shot arrangements using three different approaches: 1) TMR 10 dose calculation with imaging skull definition; 2) convolution dose calculation with imaging skull definition; 3) TMR 10 dose calculation with conventional measurement-based skull definition. For each treatment matrix, the total treatment time, the target coverage index, the selectivity index, the gradient index, and a set of dose statistics parameters were compared between the three calculations. The dose statistics parameters investigated include the prescription isodose volume, the 12 Gy isodose volume, and the minimum, maximum and mean doses on the treatment targets and the critical structures under consideration. The differences between the convolution and the TMR 10 dose calculations for the 104 treatment matrices were found to vary with the patient anatomy, location of the treatment shots, and the tissue inhomogeneities around the treatment target. An average difference of 8.4% was observed for the total treatment times between the convolution and the TMR algorithms. The maximum differences in the treatment times, the prescription isodose volumes, the 12 Gy isodose volumes, the target coverage indices, the selectivity indices, and the gradient indices from the convolution
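The coverage, selectivity, and gradient indices compared in the study are standard radiosurgery plan-quality ratios. The sketch below uses the common Paddick-style definitions with made-up volumes; it is a generic illustration, not GammaPlan's internal computation.

```python
# Standard radiosurgery plan metrics (Paddick-style definitions):
#   coverage    = TV_PIV / TV      fraction of target inside the
#                                  prescription isodose volume (PIV)
#   selectivity = TV_PIV / PIV     fraction of PIV that is target
#   gradient    = PIV_half / PIV   half-prescription isodose volume
#                                  relative to PIV (dose fall-off)
def coverage(tv, tv_piv):
    return tv_piv / tv

def selectivity(piv, tv_piv):
    return tv_piv / piv

def gradient_index(piv, piv_half):
    return piv_half / piv

# Hypothetical volumes in cm^3 for one treatment matrix
cov = coverage(tv=4.0, tv_piv=3.8)           # 0.95
sel = selectivity(piv=4.5, tv_piv=3.8)       # ~0.84
gi = gradient_index(piv=4.5, piv_half=12.0)  # ~2.67
```

Because all three indices are ratios of isodose volumes, any algorithm change that shifts the prescription isodose surface (as the convolution correction does) moves them together.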
Effects of proton radiation dose, dose rate and dose fractionation on hematopoietic cells in mice
Ware, J.H.; Rusek, A.; Sanzari, J.; Avery, S.; Sayers, C.; Krigsfeld, G.; Nuth, M.; Wan, X.S.; Kennedy, A.R.
2010-09-01
The present study evaluated the acute effects of radiation dose, dose rate and fractionation as well as the energy of protons in hematopoietic cells of irradiated mice. The mice were irradiated with a single dose of 51.24 MeV protons at a dose of 2 Gy and a dose rate of 0.05-0.07 Gy/min or 1 GeV protons at doses of 0.1, 0.2, 0.5, 1, 1.5 and 2 Gy delivered in a single dose at dose rates of 0.05 or 0.5 Gy/min or in five daily dose fractions at a dose rate of 0.05 Gy/min. Sham-irradiated animals were used as controls. The results demonstrate a dose-dependent loss of white blood cells (WBCs) and lymphocytes by up to 61% and 72%, respectively, in mice irradiated with protons at doses up to 2 Gy. The results also demonstrate that the dose rate, fractionation pattern and energy of the proton radiation did not have significant effects on WBC and lymphocyte counts in the irradiated animals. These results suggest that the acute effects of proton radiation on WBC and lymphocyte counts are determined mainly by the radiation dose, with very little contribution from the dose rate (over the range of dose rates evaluated), fractionation and energy of the protons.
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two cases: the subset D of the problem is taken to be an independent set or a circuit of the matroid, respectively. It was proved that in each case the collections of k-limited bases satisfy the base axioms. A new matroid is thereby determined, and the k-limited maximum base problem is transformed into the maximum base problem of this new matroid. For these two special cases, two algorithms, in essence greedy algorithms on the original matroid, were presented. They were proved to be correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
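The reduction above ends in an ordinary maximum base computation, which the matroid greedy algorithm solves given an independence oracle. A minimal sketch, assuming a hypothetical oracle (a simple rank limit here, standing in for the constructed matroid):

```python
def greedy_maximum_base(elements, weights, is_independent):
    """Build a maximum-weight base: scan elements by decreasing weight,
    keeping each one that preserves independence."""
    base = []
    for e in sorted(elements, key=lambda x: weights[x], reverse=True):
        if is_independent(base + [e]):
            base.append(e)
    return base

# Toy matroid on {0, 1, 2, 3}: every set of size <= 2 is independent.
weights = {0: 5, 1: 3, 2: 8, 3: 1}
base = greedy_maximum_base(list(weights), weights, lambda s: len(s) <= 2)
print(sorted(base))  # [0, 2] -- the two heaviest elements
```

For the k-limited variants, the oracle would additionally enforce the cardinality bound on the subset D, which is exactly what the paper's new matroid encodes.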
Menzel, H G
2012-01-01
Practical implementation of the International Commission on Radiological Protection's (ICRP) system of protection requires the availability of appropriate methods and data. The work of Committee 2 is concerned with the development of reference data and methods for the assessment of internal and external radiation exposure of workers and members of the public. This involves the development of reference biokinetic and dosimetric models, reference anatomical models of the human body, and reference anatomical and physiological data. Following ICRP's 2007 Recommendations, Committee 2 has focused on the provision of new reference dose coefficients for external and internal exposure. As well as specifying changes to the radiation and tissue weighting factors used in the calculation of protection quantities, the 2007 Recommendations introduced the use of reference anatomical phantoms based on medical imaging data, requiring explicit sex averaging of male and female organ-equivalent doses in the calculation of effecti...
Entrance surface dose according to dose calculation: Head and wrist
Sung, Ho Jin [Dept. Radiology, Chonnam National University Hospital, Gwangju (Korea, Republic of); Han, Jae Bok; Song, Jong Nam; Choi, Nam Gil [Dept. of Radiological Science, Dongshin University, Naju (Korea, Republic of)
2016-09-15
This study compared direct measurement with indirect dose-calculation methods for head and wrist radiography. A modified equation was proposed that accounts for equipment type, setting conditions, tube voltage, inherent and added filtration, and the accompanying backscatter factor. As a result, it reduced the error relative to direct measurement compared with the existing dose calculation. Accordingly, comparison of patient doses in diagnostic radiography becomes easier, and control and evaluation of radiographic exposure become more efficient. The study findings are expected to be useful in evaluating patients' effective dose and in dose reduction.
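The modified equation itself is not reproduced in the abstract. As an illustration of the general shape of such indirect entrance-surface-dose estimates, here is a sketch that combines tube output, an inverse-square distance correction, and a backscatter factor; the function name and every numeric value are hypothetical:

```python
def entrance_surface_dose(output_ugy_per_mas_at_1m, mas, fsd_cm, bsf):
    """Entrance surface dose (uGy): tube output per mAs measured at 1 m,
    scaled by the mAs used, inverse-square corrected to the
    focus-skin distance (FSD), and multiplied by the backscatter factor."""
    incident_air_kerma = output_ugy_per_mas_at_1m * mas * (100.0 / fsd_cm) ** 2
    return incident_air_kerma * bsf

# Hypothetical values: 60 uGy/mAs at 1 m, 10 mAs, FSD of 100 cm, BSF of 1.35.
print(round(entrance_surface_dose(60.0, 10.0, 100.0, 1.35), 1))  # 810.0
```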
Clinical implementation and evaluation of the Acuros dose calculation algorithm.
Yan, Chenyu; Combine, Anthony G; Bednarz, Greg; Lalonde, Ronald J; Hu, Bin; Dickens, Kathy; Wynn, Raymond; Pavord, Daniel C; Saiful Huq, M
2017-08-20
computation time for other plans will be discussed at the end. Maximum difference between dose calculated by AAA and dose-to-medium by Acuros XB (Acuros_Dm,m ) was 4.3% on patient plans at the isocenter, and maximum difference between D100 calculated by AAA and by Acuros_Dm,m was 11.3%. When calculating the maximum dose to spinal cord on patient plans, differences between dose calculated by AAA and Acuros_Dm,m were more than 3%. Compared with AAA, Acuros XB improves accuracy in the presence of inhomogeneity, and also significantly reduces computation time for VMAT plans. Dose differences between AAA and Acuros_Dw,m were generally less than the dose differences between AAA and Acuros_Dm,m . Clinical practitioners should consider making Acuros XB available in clinics, however, further investigation and clarification is needed about which dose reporting mode (dose-to-water or dose-to-medium) should be used in clinics. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
2011-01-01
You have been employed as a physician in a pharmaceutical company, with responsibility for planning and safety in phase 1 trials. The company has developed three dopamine D2-receptor antagonists for the treatment of schizophrenia. The drugs have undergone an extensive pharmacological, toxicological and pharmaceutical testing progra...... phase 1 trials, alias "First dose in man"....
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
Using the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region-deletion test rules, and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
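The maximum entropy function referred to above is the standard smooth aggregate that uniformly approximates a pointwise maximum as the control parameter p grows. A small numerical sketch of that approximation property (the function values are arbitrary):

```python
import math

def maxent(values, p):
    """Maximum entropy (aggregate) function: (1/p) * ln(sum_i exp(p * v_i)).
    Approaches max(values) from above as p -> infinity."""
    m = max(values)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

vals = [1.0, 2.0, 3.0]
for p in (1, 10, 100):
    print(p, maxent(vals, p))  # tends to max(vals) = 3.0 from above
```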
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of a graph is a classical problem in graph theory. Combining Boolean algebra and integer programming, two integer programming models for the maximum clique problem, which improve the earlier results, were designed in this paper. The programming model for the maximum independent set then follows as a corollary of the main results. These two models can easily be implemented in computer algorithms and software, and are suitable for graphs of any scale. Finally, the models are presented as Lingo algorithms, and verified and compared on several examples.
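The paper's exact models are not given in the abstract. A standard edge formulation in the same spirit maximizes sum(x_i) subject to x_i + x_j <= 1 for every non-edge {i, j} with binary x; on a toy graph the model can be checked by brute-force enumeration rather than a solver:

```python
from itertools import combinations

def max_clique_by_ip_model(n, edges):
    """Enumerate 0/1 assignments of the IP model: maximize sum(x)
    subject to x_i + x_j <= 1 for every non-edge {i, j}."""
    edge_set = {frozenset(e) for e in edges}
    non_edges = [frozenset(p) for p in combinations(range(n), 2)
                 if frozenset(p) not in edge_set]
    for r in range(n, 0, -1):  # largest feasible subset wins
        for subset in combinations(range(n), r):
            chosen = set(subset)
            if all(len(ne & chosen) <= 1 for ne in non_edges):
                return sorted(chosen)
    return []

# 4-cycle 0-1-2-3 plus the chord 0-2: the maximum clique has size 3.
print(max_clique_by_ip_model(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
# [0, 1, 2]
```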
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we surveyed the development of the maximum-entropy clustering algorithm, pointed out that the maximum-entropy clustering algorithm is not new in essence, and constructed two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may converge not to a local minimum of its objective function but to a saddle point. Based on these results, our paper shows that the convergence theorem for the maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general.
Estimation of the Dose and Dose Rate Effectiveness Factor
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] are fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However, animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore, animal experiments that directly compare acute with chronic exposures show smaller increases in tumor induction for chronic than for acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods for their recommendation of a DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
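Under the linear-quadratic excess-risk model that motivates the curvature argument above, risk = alpha*D + beta*D**2, the DDREF at an acute dose D reduces to 1 + (beta/alpha)*D. A one-line arithmetic sketch; the alpha, beta and dose values are hypothetical, chosen only so the result lands on the familiar 1.5:

```python
def ddref(alpha, beta, dose):
    """DDREF implied by a linear-quadratic excess-risk model
    risk = alpha*D + beta*D**2: the ratio of the acute-exposure
    effect per unit dose to the low-dose-rate (linear) slope."""
    return 1.0 + (beta / alpha) * dose

# Hypothetical curvature beta/alpha = 0.5 per Gy, evaluated at 1 Gy.
print(ddref(alpha=1.0, beta=0.5, dose=1.0))  # 1.5
```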
Dose reduction in evacuation proctography
Hare, C.; Halligan, S.; Bartram, C.I.; Gupta, R.; Walker, A.E.; Renfrew, I. [Intestinal Imaging Centre, St. Mark' s Hospital, London (United Kingdom)
2001-03-01
The goal of this study was to reduce the patient radiation dose from evacuation proctography. Ninety-eight consecutive adult patients referred for proctography to investigate difficult rectal evacuation were studied using a digital imaging system with either a standard digital program for barium examinations, a reduced-dose digital program (both with and without additional copper filtration), or video fluoroscopy. Dose-area products were recorded for each examination and the groups were compared. All four protocols produced technically acceptable examinations. The low-dose program with copper filtration (median dose 382 cGy cm²) and video fluoroscopy (median dose 705 cGy cm²) were associated with significantly lower dose than the other groups (p < 0.0001). Patient dose during evacuation proctography can be reduced significantly without compromising the diagnostic quality of the examination. A digital program with added copper filtration conveyed the lowest dose. (orig.)
Russo, James K. [Department of Radiation Oncology, Hollings Cancer Center, Medical University of South Carolina, Charleston, South Carolina (United States); Armeson, Kent E. [Division of Biostatistics and Epidemiology, Hollings Cancer Center, Medical University of South Carolina, Charleston, South Carolina (United States); Richardson, Susan, E-mail: srichardson@radonc.wustl.edu [Department of Radiation Oncology, Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri (United States)
2012-05-01
Purpose: To evaluate bladder and rectal doses using two-dimensional (2D) and 3D treatment planning for vaginal cuff high-dose rate (HDR) in endometrial cancer. Methods and Materials: Ninety-one consecutive patients treated between 2000 and 2007 were evaluated. Seventy-one and 20 patients underwent 2D and 3D planning, respectively. Each patient received six fractions prescribed at 0.5 cm to the superior 3 cm of the vagina. International Commission on Radiation Units and Measurements (ICRU) doses were calculated for 2D patients. Maximum and 2-cc doses were calculated for 3D patients. Organ doses were normalized to prescription dose. Results: Bladder maximum doses were 178% of ICRU doses (p < 0.0001). Two-cubic centimeter doses were no different than ICRU doses (p = 0.22). Two-cubic centimeter doses were 59% of maximum doses (p < 0.0001). Rectal maximum doses were 137% of ICRU doses (p < 0.0001). Two-cubic centimeter doses were 87% of ICRU doses (p < 0.0001). Two-cubic centimeter doses were 64% of maximum doses (p < 0.0001). Using the first 1, 2, 3, 4 or 5 fractions, we predicted the final bladder dose to within 10% for 44%, 59%, 83%, 82%, and 89% of patients by using the ICRU dose, and for 45%, 55%, 80%, 85%, and 85% of patients by using the maximum dose, and for 37%, 68%, 79%, 79%, and 84% of patients by using the 2-cc dose. Using the first 1, 2, 3, 4 or 5 fractions, we predicted the final rectal dose to within 10% for 100%, 100%, 100%, 100%, and 100% of patients by using the ICRU dose, and for 60%, 65%, 70%, 75%, and 75% of patients by using the maximum dose, and for 68%, 95%, 84%, 84%, and 84% of patients by using the 2-cc dose. Conclusions: Doses to organs at risk vary depending on the calculation method. In some cases, final dose accuracy appears to plateau after the third fraction, indicating that simulation and planning may not be necessary in all fractions. A clinically relevant level of accuracy should be determined and further research conducted to address
Rampado, Osvaldo, E-mail: orampado@cittadellasalute.to.it; Giglioli, Francesca Romana; Rossetti, Veronica; Ropolo, Roberto [Struttura Complessa Fisica Sanitaria, Azienda Ospedaliero Universitaria Città della Salute e della Scienza, Corso Bramante 88, Torino 10126 (Italy); Fiandra, Christian; Ragona, Riccardo [Radiation Oncology Department, University of Turin, Torino 10126 (Italy)
2016-05-15
Purpose: The aim of this study was to evaluate various approaches for assessing patient organ doses resulting from radiotherapy cone-beam CT (CBCT), by the use of thermoluminescent dosimeter (TLD) measurements in anthropomorphic phantoms, a Monte Carlo based dose calculation software, and different dose indicators as presently defined. Methods: Dose evaluations were performed on a CBCT Elekta XVI (Elekta, Crawley, UK) for different protocols and anatomical regions. The first part of the study focuses on using PCXMC software (PCXMC 2.0, STUK, Helsinki, Finland) for calculating organ doses, adapting the input parameters to simulate the exposure geometry, and beam dose distribution in an appropriate way. The calculated doses were compared to readouts of TLDs placed in an anthropomorphic Rando phantom. After this validation, the software was used for analyzing organ dose variability associated with patients’ differences in size and gender. At the same time, various dose indicators were evaluated: kerma area product (KAP), cumulative air-kerma at the isocenter (K{sub air}), cone-beam dose index, and central cumulative dose. The latter was evaluated in a single phantom and in a stack of three adjacent computed tomography dose index phantoms. Based on the different dose indicators, a set of coefficients was calculated to estimate organ doses for a range of patient morphologies, using their equivalent diameters. Results: Maximum organ doses were about 1 mGy for head and neck and 25 mGy for chest and pelvis protocols. The differences between PCXMC and TLDs doses were generally below 10% for organs within the field of view and approximately 15% for organs at the boundaries of the radiation beam. When considering patient size and gender variability, differences in organ doses up to 40% were observed especially in the pelvic region; for the organs in the thorax, the maximum differences ranged between 20% and 30%. Phantom dose indexes provided better correlation with organ
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated and it is shown that tangential retrofire impulse at the apogee results in the maximum entry angle. Equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
(Anonymous)
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
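The soft generalization described above can be sketched as alternating Boltzmann-weighted membership and center updates. This is an illustrative one-dimensional toy; the parameter names and values are the sketch's own, not the paper's:

```python
import math
import random

def maxent_cluster(points, k, beta, iters=100, seed=0):
    """Soft C-means with entropy regularization: memberships form a
    Boltzmann distribution over squared distances at inverse temperature beta."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # E-step: soft memberships p_ij ~ exp(-beta * (x_i - c_j)^2)
        memberships = []
        for x in points:
            w = [math.exp(-beta * (x - c) ** 2) for c in centers]
            s = sum(w)
            memberships.append([wi / s for wi in w])
        # M-step: each center is the membership-weighted mean of the data
        centers = [
            sum(m[j] * x for m, x in zip(memberships, points)) /
            sum(m[j] for m in memberships)
            for j in range(k)
        ]
    return sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(maxent_cluster(data, k=2, beta=10.0))  # centers near 0.1 and 5.1
```

At large beta the memberships harden and the iteration reduces to the hard C-means update.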
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
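For the Brownian case H = 1/2, the maximum loss (maximum drawdown) is easy to estimate by Monte Carlo, which gives a feel for the quantity whose tail the paper bounds; the grid size and parameters here are arbitrary:

```python
import random

def max_loss(mu, sigma, t, n_steps, rng):
    """Maximum loss sup_{s <= u <= t} (X_s - X_u) of one simulated
    Brownian path with drift mu (the H = 1/2 case), on a regular grid."""
    dt = t / n_steps
    x = running_max = loss = 0.0
    for _ in range(n_steps):
        x += mu * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
        running_max = max(running_max, x)
        loss = max(loss, running_max - x)
    return loss

rng = random.Random(42)
losses = [max_loss(mu=0.5, sigma=1.0, t=1.0, n_steps=500, rng=rng)
          for _ in range(1000)]
print(sum(losses) / len(losses))  # Monte Carlo mean of the maximum loss
```

For H other than 1/2, the increments are correlated, and a fractional Gaussian noise generator (e.g. Davies-Harte) would replace the independent Gaussian draws.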
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
An Internal Dose Assessment Associated with Personal Food Intake
Lee, Joeun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of); Hwang, Wontae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
The ICRP (International Commission on Radiological Protection) had therefore recommended the concept of the 'critical group'. Recently, the ICRP has recommended the use of the 'representative person' in its new basic Recommendation 103. The U.S. NRC (Nuclear Regulatory Commission), on the other hand, has adopted the more conservative concept of the 'maximum exposed individual (MEI)' within the critical group. Dose assessment in Korea is based on the MEI. Although a dose assessment based on the MEI readily receives the permission of the regulatory authority, it is not efficient. Meanwhile, the internal dose due to food consumption plays an important part. Therefore, in this study, an internal dose assessment was performed in accordance with the ICRP's new recommendations. Replacing the MEI with the concept of the representative person showed a 13.2% decrease in the annual internal dose due to gaseous effluents. This calculation based on the new ICRP recommendation should also be extended to all areas of individual dose assessment; more accurate and efficient values might then be obtained for dose assessment.
Estradiol valerate and alcohol intake: dose-response assessments
Quirarte, Gina L; Reid, Larry D; de la Teja, I Sofía Ledesma; Reid, Meta L; Sánchez, Marco A; Díaz-Trujillo, Arnulfo; Aguilar-Vazquez, Azucena; Prado-Alcalá, Roberto A
2007-01-01
Background An injection of estradiol valerate (EV) provides estradiol for a prolonged period. Recent research indicates that a single 2.0 mg injection of EV modifies a female rat's appetite for alcoholic beverages. This research extends the initial research by assessing 8 doses of EV (from .001 to 2.0 mg/female rat), as well as assessing the effects of 2.0 mg EV in females with ovariectomies. Results With the administration of EV, there was a dose-related loss of bodyweight reaching the maximum loss, when it occurred, at about 4 days after injections. Subsequently, rats returned to gaining weight regularly. Of the doses tested, only the 2.0 mg dose produced a consistent increase in intake of ethanol during the time previous research indicated that the rats would show enhanced intakes. There was, however, a dose-related trend for smaller doses to enhance intakes. Rats with ovariectomies showed a similar pattern of effects, to intact rats, with the 2 mg dose. After extensive histories of intake of alcohol, both placebo and EV-treated females had estradiol levels below the average measured in females without a history of alcohol intake. Conclusion The data support the conclusion that pharmacological doses of estradiol can produce enduring changes that are manifest as an enhanced appetite for alcoholic beverages. The effect can occur among females without ovaries. PMID:17335585
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
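The discrete Nagumo setting in the last sentence is easy to exercise numerically. A sketch with explicit Euler time stepping and zero-flux ends; the step sizes are chosen small enough for the maximum principle to be visible, and all values are illustrative:

```python
def nagumo_step(u, k, a, dt):
    """One explicit Euler step of u' = k*(u[x-1] - 2u[x] + u[x+1]) + f(u)
    with the bistable nonlinearity f(u) = u*(1-u)*(u-a), zero-flux ends."""
    n = len(u)
    new = []
    for x in range(n):
        left = u[x - 1] if x > 0 else u[x]
        right = u[x + 1] if x < n - 1 else u[x]
        diff = k * (left - 2 * u[x] + right)
        new.append(u[x] + dt * (diff + u[x] * (1 - u[x]) * (u[x] - a)))
    return new

# A front between the stable states 0 and 1; with a small enough time
# step the solution stays within [0, 1], as the maximum principle predicts.
u = [1.0] * 10 + [0.0] * 10
for _ in range(1000):
    u = nagumo_step(u, k=1.0, a=0.3, dt=0.05)
print(min(u) >= 0.0 and max(u) <= 1.0)  # True
```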
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly higher than the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
Bulla, Stefan, E-mail: stefan.bulla@uniklinik-freiburg.de [Department of Diagnostic Radiology, University Hospital Freiburg, Hugstetter Str. 55, 79106 Freiburg (Germany); Blanke, Philipp, E-mail: philipp.blanke@uniklinik.freiburg.de [Department of Diagnostic Radiology, University Hospital Freiburg, Hugstetter Str. 55, 79106 Freiburg (Germany); Hassepass, Frederike, E-mail: frederike.hassepass@uniklinik.freiburg.de [Department of Otorhinolaryngology – Head and Neck Surgery, University Hospital Freiburg, Killianstraße 5, 79106 Freiburg (Germany); Krauss, Tobias, E-mail: tobias.krauss@uniklinik.freiburg.de [Department of Diagnostic Radiology, University Hospital Freiburg, Hugstetter Str. 55, 79106 Freiburg (Germany); Winterer, Jan Thorsten, E-mail: jan.winterer@uniklinik.freiburg.de [Department of Diagnostic Radiology, University Hospital Freiburg, Hugstetter Str. 55, 79106 Freiburg (Germany); Breunig, Christine, E-mail: christine.breunig@uniklinik.freiburg.de [Department of Otorhinolaryngology – Head and Neck Surgery, University Hospital Freiburg, Killianstraße 5, 79106 Freiburg (Germany); Langer, Mathias, E-mail: mathias.langer@uniklinik.freiburg.de [Department of Diagnostic Radiology, University Hospital Freiburg, Hugstetter Str. 55, 79106 Freiburg (Germany); Pache, Gregor [Department of Diagnostic Radiology, University Hospital Freiburg, Hugstetter Str. 55, 79106 Freiburg (Germany)
2012-09-15
Purpose: To evaluate image quality of dose-reduced CT of the paranasal sinus using an iterative reconstruction technique. Methods: In this study 80 patients (mean age: 46.9 ± 18 years) underwent CT of the paranasal sinus (Siemens Definition, Forchheim, Germany), with either standard settings (A: 120 kV, 60 mAs) reconstructed with conventional filtered back projection (FBP) or with tube current–time product lowering of 20%, 40% and 60% (B: 48 mAs, C: 36 mAs and D: 24 mAs) using iterative reconstruction (n = 20 each). Subjective image quality was independently assessed by four blinded observers using a semiquantitative five-point grading scale (1 = poor, 5 = excellent). Effective dose was calculated from the dose-length product. The Mann–Whitney U-test was used for statistical analysis. Results: Mean effective dose was 0.28 ± 0.03 mSv (A), 0.23 ± 0.02 mSv (B), 0.17 ± 0.02 mSv (C) and 0.11 ± 0.01 mSv (D), resulting in a maximum dose reduction of 60% with the iterative reconstruction technique as compared to the standard low-dose CT. Best image quality was observed at 48 mAs (mean 4.8; p < 0.05), whereas standard low-dose CT (A) and the maximum dose-reduced scans (D) showed no significant difference in subjective image quality (mean 4.37 (A) and 4.31 (D); p = 0.72). Interobserver agreement was excellent (κ values 0.79–0.93). Conclusion: As compared to filtered back projection, the iterative reconstruction technique allows for significant dose reduction of up to 60% for paranasal sinus CT without impairing the diagnostic image quality.
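The effective dose figures above come from the dose-length product via E = k · DLP with a region-specific conversion coefficient k. A sketch of that conversion; the coefficient used here (0.0021 mSv per mGy·cm, a commonly quoted head-region value) and the DLP are illustrative, not taken from the study:

```python
def effective_dose_msv(dlp_mgy_cm, k_msv_per_mgy_cm):
    """Effective dose estimated from the dose-length product: E = k * DLP."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

# A DLP of 133 mGy*cm with k = 0.0021 mSv/(mGy*cm) gives about 0.28 mSv,
# the order of magnitude of protocol A above.
print(round(effective_dose_msv(133.0, 0.0021), 3))  # 0.279
```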
Dose distributions around selectron applicators
Pla, C.; Evans, M.D.; Podgorsak, E.B.
1987-11-01
Measured and calculated dose distributions around selectron applicators, loaded with ⁶⁰Co high dose rate pellets, are presented. The effect of the stopping screw, spacers, pellets themselves and the applicator wall on the dose distribution is discussed. The measured dose distribution is in almost perfect agreement with the calculated distribution in planes perpendicular to the applicator axis and containing a source. On the applicator axis directly below the applicator the measured dose amounts to about 75% of the calculated value, when only the stopping screw attenuates the beam from a pellet. When the beam is attenuated by spacers in addition to the stopping screw, the discrepancy between the calculated and measured dose may exceed 50%. Clinically relevant source geometries are also discussed. It is shown that for most regions around the applicator the method of a simple addition of dose contributions from individual point sources is an acceptable approximation for the calculation of dose distributions around the selectron applicators.
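The point-source superposition mentioned in the last sentence can be sketched directly: in its simplest form each pellet contributes strength/r², and attenuation by the screw, spacers and wall (the very effects the measurements quantify) is ignored. The geometry and strengths below are hypothetical:

```python
def dose_rate(point, sources):
    """Relative dose rate at `point` as a simple sum of inverse-square
    contributions (strength / r^2) from point sources, ignoring
    attenuation by screws, spacers and the applicator wall."""
    total = 0.0
    for (sx, sy, sz, strength) in sources:
        r2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2 + (point[2] - sz) ** 2
        total += strength / r2
    return total

# Three pellets stacked 0.5 cm apart along the applicator axis (z),
# unit strength each; dose point 2 cm off-axis from the middle pellet.
pellets = [(0.0, 0.0, -0.5, 1.0), (0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.5, 1.0)]
print(round(dose_rate((2.0, 0.0, 0.0), pellets), 4))  # 0.7206
```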
Evaluating dose response from flexible dose clinical trials
Baron David
2008-01-01
Abstract Background The true dose effect in flexible-dose clinical trials may be obscured and even reversed because dose and outcome are related. Methods To evaluate dose effect in response on primary efficacy scales from 2 randomized, double-blind, flexible-dose trials of patients with bipolar mania who received olanzapine (N = 234, 5–20 mg/day) or patients with schizophrenia who received olanzapine (N = 172, 10–20 mg/day), we used marginal structural models with inverse probability of treatment weighting (MSM, IPTW) methodology. Dose profiles for mean changes from baseline were evaluated using weighted MSM with a repeated measures model. To adjust for selection bias due to non-random dose assignment and dropouts, patient-specific time-dependent weights were determined as products of (i) stable weights based on the inverse probability of receiving the sequence of dose assignments that was actually received by a patient up to a given time, multiplied by (ii) stable weights based on the inverse probability of the patient remaining on treatment by that time. Results were compared with those of unweighted analyses. Results While the observed difference in efficacy scores for dose groups in the unweighted analysis strongly favored lower doses, the weighted analyses showed no strong dose effects and, in some cases, reversed the apparent "negative dose effect." Conclusion While naïve comparison of groups by last or modal dose in a flexible-dose trial may result in severely biased efficacy analyses, the MSM with IPTW estimators approach may be a valuable method of removing these biases and evaluating potential dose effect, which may prove useful for planning confirmatory trials.
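The time-dependent weights described in Methods are products of inverse probabilities across visits. A minimal sketch of that bookkeeping; the probabilities are hypothetical, and a real analysis would estimate them from fitted treatment-assignment and dropout models:

```python
def iptw_weight(dose_probs, retention_probs):
    """Per-patient weight: product over visits of
    1/P(dose actually assigned) * 1/P(still on treatment)."""
    w = 1.0
    for p_dose, p_stay in zip(dose_probs, retention_probs):
        w *= (1.0 / p_dose) * (1.0 / p_stay)
    return w

# Hypothetical patient observed at 3 visits.
print(round(iptw_weight([0.8, 0.5, 0.6], [0.95, 0.9, 0.9]), 3))  # 5.415
```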
Hanford Environmental Dose Reconstruction Project
Finch, S.M.; McMakin, A.H. (comps.)
1992-02-01
The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates.
Preliminary dose assessment of the Chernobyl accident
Hull, A.P.
1987-01-01
From the major accident at Unit 4 of the Chernobyl nuclear power station, a plume of airborne radioactive fission products was initially carried northwesterly toward Poland, thence toward Scandinavia and into Central Europe. Reports of the levels of radioactivity in a variety of media and of external radiation levels were collected in the Department of Energy's Emergency Operations Center and compiled into a data bank. Portions of these and other data which were obtained directly from published and official reports were utilized to make a preliminary assessment of the extent and magnitude of the external dose to individuals downwind from Chernobyl. Radioactive ¹³¹I was the predominant fission product. The time of arrival of the plume, the maximum concentrations of ¹³¹I in air, vegetation and milk, and the maximum reported depositions and external radiation levels have been tabulated country by country. A large amount of the total activity in the release was apparently carried to a significant elevation. The data suggest that in areas where rainfall occurred, deposition levels were from ten to one hundred times those observed in nearby "dry" locations. Sufficient spectral data were obtained to establish average release fractions and a reference spectrum of the other nuclides in the release. Preliminary calculations indicated that the collective dose equivalent to the population in Scandinavia and Central Europe during the first year after the Chernobyl accident would be about 8 × 10⁶ person-rem. From the Soviet report, it appears that a first-year population dose of about 2 × 10⁷ person-rem (2 × 10⁵ Sv) will be received by the population who were downwind of Chernobyl within the U.S.S.R. during the accident and its subsequent releases over the following week. 32 refs., 14 figs., 20 tabs.
Oliveira, Bruno B., E-mail: bbo@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Pos-graduacao em Ciencias e Tecnologia das Radiacoes, Minerais e Materiais; Mourao, Arnaldo P. [Centro Federal de Educacao Tecnologica de Minas Gerais (CEFET/MG), Belo Horizonte, MG (Brazil); Alonso, Thessa C.; Silva, Teogenes A. da [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2010-07-01
For the optimization of patient dose in computed tomography (CT), Brazilian legislation has only established diagnostic reference levels (DRLs) in terms of the Multiple Scan Average Dose (MSAD) in a typical adult as a quality-control parameter for CT scanners. Conformity with the DRLs can be verified by measuring the dose distribution in CT scans and determining the MSAD. An analysis of the quality of CT scans in the metropolitan region of Belo Horizonte is needed, carried out through the pertinent tests presented in the ANVISA (National Agency of Sanitary Vigilance) Guide. The purpose of this study is to investigate the variation of dose in a chest CT scan. To measure the dose profile, lithium fluoride thermoluminescent dosimeters (TLD-100 rods) were distributed in cylinders positioned in peripheral and central regions of a polymethylmethacrylate (PMMA) phantom. The data obtained allow the variation of the dose profile inside the phantom to be observed. The peripheral region shows higher dose values than the central region. A longitudinal variation can also be observed; the maximum dose, (41.58 ± 5.10) mGy, was recorded at the edges of the phantom at the midpoint of the longitudinal axis. The results will contribute to disseminating the proper procedure, optimizing dosimetry and quality-control tests in CT, and supporting a critical analysis of the DRLs. (author)
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Krohn, Thomas; Hänscheid, Heribert; Müller, Berthold; Behrendt, Florian F; Heinzel, Alexander; Mottaghy, Felix M; Verburg, Frederik A
2014-01-01
CONTEXT: The determinants of successful (131)I therapy of Graves' disease (GD) are unclear. OBJECTIVE: To relate dosimetry parameters to outcome of therapy and to identify significant determinants of eu- and/or hypothyroidism after (131)I therapy in patients with GD. SETTING AND DESIGN: A retrospective study.
Benjamin T. Cooper, MD
2016-10-01
Conclusions: Placing the edge of the tangents at least 2.5 mm from the closest point of the contoured LAD is likely to assure LADmax is <10 Gy and LADmean is <3.3 Gy in patients treated with prone accelerated breast radiation therapy.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Anonymous
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least square methods.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
Tateoka, Kunihiko; Oouchi, Atsushi; Nakata, Kensei; Hareyama, Masato
2008-07-01
In this study, we examined the ability of an L-EPID to verify rectangular and irregular fields and to measure the transmitted exit doses. With respect to the beam profile of rectangular and irregular fields and the doses transmitted through an inhomogeneous phantom, the L-EPID dose obtained from the L-EPID measurement was compared with the conventional dose measured by use of a 0.12-cc ionization chamber and a 3D water phantom. In the comparison of the rectangular and irregular fields, the difference in the off-center ratio (OCR) between the L-EPID dose and the conventional dose was approximately 3% in the steep-dose-gradient region (penumbra regions, >30%/cm) and approximately ±0.5% in the gentle-dose-gradient region (5%/cm). On the other hand, the dose differences between the L-EPID and the measured doses were less than approximately 2% in the gentle-dose-gradient region. In addition, in the steep-dose-gradient region, the maximum difference was 30%. However, the differences in the distance-to-agreement (DTA) were less than approximately ±1 mm and were unrelated to the dose gradient. These results suggest that dose verification by L-EPID is very useful in clinical applications.
Phase I dose intensification study of 2-weekly epirubicin with GM-CSF in advanced cancer.
Michael, M; Toner, G C; Olver, I N; Fenessy, A; Bishop, J F
1997-06-01
This study investigated dose intensification of epirubicin administered as a 2-weekly regimen with granulocyte-macrophage colony-stimulating factor (GM-CSF) support. The aim was to define the maximally tolerated dose of epirubicin and to assess the efficacy of GM-CSF to ameliorate its toxicity. Patients with anthracycline-responsive advanced malignancies were eligible. Six dose levels, commencing at 90 mg/m2, of epirubicin administered every 2 weeks for four courses were planned with GM-CSF 10 micrograms/kg/day administered for 10 days from the second day of each course. Six patients were to be entered at each dose level, and escalation to the next level was based upon toxicity criteria. Twelve patients were entered, six at dose level 1 (90 mg/m2) and six at dose level 2 (120 mg/m2). Prospectively defined haematological dose-limiting toxicities were noted in one patient at dose level 1 and in five patients at dose level 2. Further dose escalation was not attempted. Significant nonhaematological toxicities included febrile neutropenia in two and four patients at dose levels 1 and 2, respectively. This study has demonstrated that epirubicin can be safely administered at 2 week intervals with GM-CSF at a dose of 90 mg/m2, equivalent to the previously reported maximum tolerated dose intensity of 45 mg/m2/week. Neutropenia was dose-limiting despite the use of GM-CSF.
Assessment of patient and occupational dose in established and new applications of MDCT fluoroscopy.
Joemai, Raoul M S; Zweers, Dirk; Obermann, Wim R; Geleijns, Jacob
2009-04-01
This study aimed to assess patient dose and occupational dose in established and new applications of MDCT fluoroscopy. Electronic personal dosimeters were used to measure occupational dose equivalent. Effective patient dose was derived from the recorded dose-length product. Acquisition parameters that were observed during CT fluoroscopy (CTF) provided the basis for the estimation of an entrance skin dose profile. Two hundred ten CT-guided interventional procedures were included in the study. The median effective patient dose was 10 mSv (range, 0.1-235 mSv; 107 procedures). The median peak entrance skin dose was 0.4 Sv (0.1-2.1 Sv; 27 procedures). From 547 measurements of occupational dose equivalent, a median occupational effective dose of 3 μSv per procedure was derived for the interventional radiologists and 0.4 μSv per procedure for the assisting radiologists and radiology technologists. The estimated maximum occupational effective dose reached 0.4 mSv. The study revealed high effective patient doses, up to 235 mSv, mainly for relatively new applications such as CTF-guided radiofrequency ablations using MDCT, vertebroplasty, and percutaneous ethanol injections of tumors. Entrance doses were occasionally in the range of the warning level for deterministic skin effects but were always below the threshold for serious deterministic effects. The complexity of the procedure, expected benefits of the treatment, and general health state of the patient contribute to the justification of observed high effective patient doses.
Dose-response relationship for breast cancer induction at radiotherapy dose
Gruber Günther
2011-06-01
Purpose: Cancer induction after radiation therapy is known as a severe side effect. It is therefore of interest to predict the probability of second cancer appearance, including breast cancer, for the patient to be treated. Materials and methods: In this work a dose-response relationship for breast cancer is derived based on (i) the analysis of breast cancer induction after Hodgkin's disease, (ii) a cancer risk model developed for high doses including fractionation, based on the linear quadratic model, (iii) the reconstruction of treatment plans for Hodgkin's patients treated with radiotherapy, and (iv) the breast cancer induction of the A-bomb survivor data. Results: The fitted model parameters for an α/β = 3 Gy were α = 0.067 Gy⁻¹ and R = 0.62. According to this model, the risk for breast cancer at small doses is consistent with the findings of the A-bomb survivors; it has a maximum at doses of around 20 Gy and drops off only slightly at larger doses. The predicted EAR for breast cancer after radiotherapy of Hodgkin's disease is 11.7/10,000 PY, which can be compared to the findings of several epidemiological studies, where the EAR for breast cancer varies between 10.5 and 29.4/10,000 PY. The model was used to predict the impact of the reduction of radiation volume on breast cancer risk. It was estimated that mantle field irradiation is associated with a 3.2-fold increased risk compared with mediastinal irradiation alone, which is in agreement with a published value of 2.7. It was also shown that the modelled age dependency of breast cancer risk is in satisfying agreement with published data. Conclusions: The dose-response relationship obtained in this report can be used for the prediction of radiation-induced secondary breast cancer in radiotherapy patients.
Dose assessment in accordance with the measured position of size specific dose estimates
Kim, Jung Su [Dept. of Radio-technology, Health Welfare, Wonkwang Health Science University, Iksan (Korea, Republic of); Hong, Sung Wan [Dept. of Radiology, Inje University Ilsan Paik Hospital, Iksan (Korea, Republic of); Kim, Jung Min [Dept. of Radiological Science, Korea University, Seoul (Korea, Republic of)
2015-12-15
This study investigated size-specific dose estimates for different localizers on pediatric CT images. Seventy-one cases of pediatric abdomen-pelvic CT (M:F = 36:35) were included in this study. Anterior-posterior and lateral diameters were measured on axial CT images. Conversion factors from American Association of Physicists in Medicine (AAPM) Report 204 were obtained for the effective diameter to determine the size-specific dose estimate (SSDE) from the CT dose index volume (CTDIvol) recorded in the dose reports. For the mid-slice localizer, SSDE was 107.63% of CTDIvol; for xiphoid-process slices it was 92.91%. The maximum errors of the iliac-crest, xiphoid-process and femur-head slices relative to the mid-slices were 7.48%, 17.81% and 14.04%, respectively. In conclusion, although the SSDE of different localizers shows large errors, SSDE should be regarded as the primary tool for evaluating patient radiation dose in pediatric CT.
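The SSDE calculation described above multiplies the scanner-reported CTDIvol by a size-dependent conversion factor looked up from the effective diameter. A minimal sketch follows; the exponential-fit coefficients approximate the AAPM Report 204 conversion-factor curve for the 32-cm reference phantom, and the example diameters and CTDIvol are illustrative, not taken from this study.

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Geometric mean of the AP and lateral diameters (AAPM Report 204)."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_mGy, ap_cm, lat_cm, a=3.704369, b=0.03671937):
    """SSDE = f(size) x CTDIvol, with an exponential fit standing in for
    the conversion-factor table (coefficients are illustrative and
    correspond to the 32-cm reference phantom)."""
    d_eff = effective_diameter(ap_cm, lat_cm)
    f = a * math.exp(-b * d_eff)
    return f * ctdi_vol_mGy

# A hypothetical pediatric abdomen: 16 cm AP, 20.25 cm lateral, CTDIvol 4.0 mGy.
d = effective_diameter(16.0, 20.25)
dose = ssde(4.0, 16.0, 20.25)
print(round(d, 1), round(dose, 2))
```

For this small patient the conversion factor is well above 1, so the SSDE is roughly twice the reported CTDIvol, which is the pattern the study's mid-slice result reflects.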
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on ℝ^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole space ℝ^N. Moreover, we prove the existence of solutions for the considered system by an approximation method.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion v conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
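The base operation that the pyramid scheme extends, maximum intensity projection, simply keeps the brightest voxel along the viewing axis; the morphological adjunction-pyramid machinery itself is not sketched here. A minimal illustration:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep the brightest voxel
    along the chosen viewing axis."""
    return np.max(volume, axis=axis)

# A tiny 2x2x2 volume: projecting along axis 0 keeps the larger of the
# two slices voxel-by-voxel.
vol = np.array([[[1, 5], [3, 2]],
                [[4, 0], [2, 7]]])
print(mip(vol))  # [[4 5]
                 #  [3 7]]
```

Because max commutes with the dilation/erosion pair of an adjunction, projections of coarse pyramid levels can be refined progressively toward the full-resolution MIP, which is the property the paper exploits.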
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We introduce the maximum entropy procedure in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the procedure has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
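The core maximum entropy construction can be shown on a toy problem rather than the molecular-simulation application: on a finite support with a fixed mean, the maximum-entropy distribution is exponentially tilted, p_i ∝ exp(λx_i), with λ chosen to match the constraint. This sketch uses Jaynes' classic loaded-die example and finds λ by bisection.

```python
import math

def maxent_given_mean(xs, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution on a finite support subject to a
    fixed mean: p_i ∝ exp(lambda * x_i), lambda found by bisection
    (the constrained mean is monotone increasing in lambda)."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, xs)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

# Die example: the max-entropy distribution on {1..6} with mean 4.5
# tilts probability toward the larger faces.
p = maxent_given_mean([1, 2, 3, 4, 5, 6], 4.5)
print([round(pi, 3) for pi in p])
```

The same logic underlies the simulation application: experimental averages play the role of the mean constraint, and the exponential reweighting is the least-biased correction to the prior ensemble.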
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results.
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
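The screening criterion the study applies at each time step, that all feeder voltages stay within ANSI Range A, can be sketched as a simple per-unit check. The 0.95–1.05 pu limits below follow the usual ANSI C84.1 Range A service-voltage band; the snapshot values are illustrative, not taken from the simulations.

```python
def within_ansi_range_a(voltages_pu, vmin=0.95, vmax=1.05):
    """Steady-state voltage screen: every bus voltage (per unit) must
    stay inside the ANSI C84.1 Range A service-voltage band."""
    return all(vmin <= v <= vmax for v in voltages_pu)

# Feeder snapshots: the second has one bus at 1.06 pu, a Range A violation
# of the kind that caps PV penetration in the study.
print(within_ansi_range_a([0.98, 1.01, 1.04]))   # True
print(within_ansi_range_a([0.98, 1.06, 1.04]))   # False
```

In the study this check (together with the overcurrent limits) is evaluated over a year of hourly power-flow solutions, and the maximum PV penetration is the largest level for which no violation occurs.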
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where, Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ..... data sets that differ considerably in the magnitude.
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik,
2014-01-01
As the name implies, the "Solar Panel Maximum Power Point Tracker for Power Utilities" is a photovoltaic system that uses a photovoltaic array as a source of electrical power supply. Every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, so a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a photovoltaic (PV) array of 121.6 V DC (6 cells, each 20 V, 100 W) and to convert the DC voltage to single-phase 120 V, 50 Hz AC by switch-mode power converters and inverters.
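The abstract does not say which tracking algorithm the thesis uses; the most common MPPT scheme is perturb-and-observe, sketched here against a hypothetical power-voltage curve (not the thesis hardware) purely to show the hill-climbing logic.

```python
def perturb_and_observe(pv_power, v_start=60.0, dv=1.0, steps=60):
    """Perturb-and-observe MPPT sketch: nudge the operating voltage,
    keep the perturbation direction if power rose, reverse it if
    power fell. The operating point climbs to the power peak and
    then oscillates around it."""
    v = v_start
    p_prev = pv_power(v)
    step = dv
    for _ in range(steps):
        v += step
        p = pv_power(v)
        if p < p_prev:        # power dropped: reverse the perturbation
            step = -step
        p_prev = p
    return v

# Toy PV curve with its maximum power point at 97 V (illustrative):
# the tracker settles within a couple of volts of the peak.
mpp = perturb_and_observe(lambda v: -(v - 97.0) ** 2 + 100.0)
print(round(mpp))
```

The residual oscillation around the peak is the well-known cost of perturb-and-observe; smaller perturbation steps reduce it at the price of slower tracking when insolation changes.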
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR filters perform considerably better. This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
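In its usual formulation, the maximum SNR filter is the principal generalized eigenvector of the speech and noise correlation matrices, since it maximizes the Rayleigh quotient w'Rs w / w'Rv w. A minimal numpy sketch with illustrative 2×2 covariances (not data from the paper):

```python
import numpy as np

def max_snr_filter(Rs, Rv):
    """Maximum-SNR filter: the principal eigenvector of inv(Rv) @ Rs,
    which maximizes the output SNR w'Rs w / w'Rv w."""
    evals, evecs = np.linalg.eig(np.linalg.inv(Rv) @ Rs)
    w = np.real(evecs[:, np.argmax(np.real(evals))])
    return w / np.linalg.norm(w)

def output_snr(w, Rs, Rv):
    """SNR at the filter output for speech covariance Rs, noise Rv."""
    return (w @ Rs @ w) / (w @ Rv @ w)

# Illustrative 2-sensor case: speech energy concentrated on the first
# sensor, spatially white noise. The filter points at the high-SNR
# direction and the output SNR equals the largest eigenvalue.
Rs = np.array([[4.0, 0.0], [0.0, 1.0]])
Rv = np.eye(2)
w = max_snr_filter(Rs, Rv)
print(w, output_snr(w, Rs, Rv))
```

Maximizing this quotient says nothing about preserving the speech waveform, which is exactly why the paper finds large speech distortion for the time-domain filters despite the SNR gain.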
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
Finite mixture models are mixture models with finite dimension. These models provide a natural representation of heterogeneity in a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has greatly drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative effect of rubber price on stock market price for Malaysia, Thailand, the Philippines and Indonesia.
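Maximum likelihood fitting of a two-component normal mixture is typically done with the EM algorithm. This is a minimal sketch on synthetic data (the paper's economic series are not reproduced here), alternating responsibilities (E-step) with weighted moment re-estimates (M-step):

```python
import math, random

def em_two_normals(data, iters=200):
    """EM for a two-component univariate normal mixture: the E-step
    computes each component's responsibility for each point, the
    M-step re-estimates weights, means and variances."""
    mu = [min(data), max(data)]        # crude but effective initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component k for each point
        resp = []
        for x in data:
            lik = [pi[k] / math.sqrt(2 * math.pi * var[k])
                   * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = sum(lik)
            resp.append([l / s for l in lik])
        # M-step: weighted moment estimates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return pi, mu, var

# Synthetic data drawn from N(0,1) and N(5,1): EM recovers the two means.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]
pi, mu, var = em_two_normals(data)
print([round(m, 1) for m in sorted(mu)])
```

Each EM iteration is guaranteed not to decrease the likelihood, which is how the maximum likelihood fit the paper relies on is actually computed in practice.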
Hong, Linda X., E-mail: lhong0812@gmail.com [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Department of Radiation Oncology, Albert Einstein College of Medicine, Bronx, NY (United States); Shankar, Viswanathan [Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, NY (United States); Shen, Jin [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Kuo, Hsiang-Chi [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Department of Radiation Oncology, Albert Einstein College of Medicine, Bronx, NY (United States); Mynampati, Dinesh [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Yaparpalvi, Ravindra [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Department of Radiation Oncology, Albert Einstein College of Medicine, Bronx, NY (United States); Goddard, Lee [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Basavatia, Amar; Fox, Jana; Garg, Madhur; Kalnicki, Shalom; Tomé, Wolfgang A. [Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY (United States); Department of Radiation Oncology, Albert Einstein College of Medicine, Bronx, NY (United States)
2015-10-01
We report our experience of establishing planning objectives to achieve dose coverage, conformity, and dose falloff for spine stereotactic body radiation therapy (SBRT) plans. Patients with spine lesions were treated using SBRT in our institution since September 2009. Since September 2011, we established the following planning objectives for our SBRT spine plans in addition to the cord dose constraints: (1) dose coverage—prescription dose (PD) to cover at least 95% planning target volume (PTV) and 90% PD to cover at least 99% PTV; (2) conformity index (CI)—ratio of prescription isodose volume (PIV) to the PTV < 1.2; (3) dose falloff—ratio of 50% PIV to the PTV (R{sub 50%}); and (4) maximum dose in percentage of PD at 2 cm from PTV in any direction (D{sub 2cm}) to follow Radiation Therapy Oncology Group (RTOG) 0915. We have retrospectively reviewed 66 separate spine lesions treated between September 2009 and December 2012 (31 treated before September 2011 [group 1] and 35 treated after [group 2]). The χ{sup 2} test was used to examine the difference in parameters between groups. The PTV V{sub 100%} {sub PD} ≥ 95% objective was met in 29.0% of group 1 vs 91.4% of group 2 (p < 0.01) plans. The PTV V{sub 90%} {sub PD} ≥ 99% objective was met in 38.7% of group 1 vs 88.6% of group 2 (p < 0.01) plans. Overall, 4 plans in group 1 had CI > 1.2 vs none in group 2 (p = 0.04). For D{sub 2cm}, 48.3% plans yielded a minor violation of the objectives and 16.1% a major violation for group 1, whereas 17.1% exhibited a minor violation and 2.9% a major violation for group 2 (p < 0.01). Spine SBRT plans can be improved on dose coverage, conformity, and dose falloff employing a combination of RTOG spine and lung SBRT protocol planning objectives.
Antimalarial and analgesic activities of ethanolic leaf extract of Panicum maximum
Jude E. Okokon; Paul A. Nwafor; Ukeme E. Andrew
2011-01-01
Objective: To evaluate the antiplasmodial and analgesic activities of the ethanolic leaf extract/fractions of Panicum maximum. Methods: The crude leaf extract (47-190 mg/kg) and fractions (chloroform, ethyl acetate, aqueous and methanol; 96 mg/kg) of Panicum maximum were investigated for antiplasmodial activity against chloroquine-sensitive Plasmodium berghei infections in mice and for analgesic activity against chemical and heat-induced pains. The antiplasmodial activity during early and established infections as well as prophylaxis was investigated. Artesunate at 5 mg/kg and pyrimethamine at 1.2 mg/kg were used as positive controls. Analgesic activity of the crude extract/fractions was also evaluated against acetic acid, formalin and heat-induced pains. Results: The extract and its fractions dose-dependently reduced parasitaemia induced by chloroquine-sensitive Plasmodium berghei infection in prophylactic, suppressive and curative models in mice. These reductions were statistically significant (P<0.001). They also improved the mean survival time from 13 to 28 days compared with control (P<0.001). The activities of the extract/fractions were comparable to those of the standard drugs (artesunate and pyrimethamine). On chemically and thermally induced pains, the extract inhibited acetic acid- and formalin-induced inflammation as well as hot plate-induced pain in mice. These inhibitions were statistically significant (P<0.001) and dose-dependent. Conclusions: Panicum maximum leaf extract has antiplasmodial and analgesic activities which may in part be mediated through the chemical constituents of the plant.
Medeiros, R.B.; Murata, C.H.; Moreira, A.C., E-mail: rbitelli2012@gmail.com, E-mail: camila.murata@gmail.com, E-mail: antonio.xray@gmail.com [Universidade Federal de Sao Paulo (UNIFESP), Sao Paulo, SP (Brazil). Escola Pulista de Medicina; Khoury, H.J.; Borras, C., E-mail: hjkhoury@gmail.com, E-mail: cariborras@starpower.net [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Engenharia Nuclear; Silva, M.S.R da, E-mail: msrochas2003@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)
2014-07-01
The radiation doses from interventional procedures are relevant when treating children because of their greater radiosensitivity compared with adults. The purposes of this paper were to estimate the dose received by 18 pediatric patients who underwent cardiac interventional procedures and to correlate the maximum entrance surface air kerma (Ke,max), estimated with radiochromic films, with the cumulative air kerma values displayed at the end of the procedures. The study was performed in children up to 6 years of age, in two hospitals, one located in Recife and the other in São Paulo. The x-ray imaging systems used were a Philips Allura 12 model with an image intensifier and a Philips Allura FD10 flat-panel system. To estimate the Ke,max on the patient's skin, radiochromic films (Gafchromic XR-RV2) were used; these values were estimated from the maximum optical density measured on the film using a calibration curve. The results showed cumulative air kerma values ranging from 78.3 to 500.0 mGy, with a mean value of 242.3 mGy. The resulting Ke,max values ranged from 20.0 to 461.8 mGy, with a mean value of 208.8 mGy. The Ke,max values were correlated with the displayed cumulative air kerma values; the coefficient of determination R² was 0.78, meaning that the value displayed on the equipment's console can be useful for monitoring the skin absorbed dose throughout the procedure. Routine fluoroscopy time records alone cannot alert the physician to the risk of the dose exceeding the threshold of adverse reactions, which can vary from early erythema to serious skin damage. (author)
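The quoted R² between the console's cumulative air kerma and the film-derived Ke,max can be reproduced from paired readings with an ordinary least-squares fit. A minimal sketch, using made-up paired readings rather than the study's data:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical paired readings (mGy): console cumulative air kerma vs film Ke,max
ka = [80.0, 150.0, 240.0, 320.0, 500.0]
ke = [25.0, 110.0, 210.0, 260.0, 450.0]
print(round(r_squared(ka, ke), 3))
```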
Ferruccio, Lauren F; Murray, Cindy; Yee, Karen W; Incekol, Diana; Lee, Roy; Paisley, Emma; Ng, Pamela
2016-08-01
The azacitidine (Vidaza®) product monograph indicates that doses greater than 4 ml should be divided equally into two syringes and injected into different sites. Although 2 ml is a more commonly used maximum volume for subcutaneous injections, there is a lack of evidence to support the use of any given maximum volume with azacitidine. Applying the status quo of 2 ml to azacitidine results in patients receiving 3-4 injections per visit. This prospective study evaluated the frequency and type of injection site reactions when the maximum subcutaneous injection volume was increased from 2 to 3 ml per injection site. Among 30 patients, 309 doses were administered, and injection site reactions were noted in 92.9% of all doses, with the majority (82.2%) being grade 1; only 10.7% of doses resulted in grade 2 reactions, and there were no grade 3 or 4 reactions. There was no increase in frequency or severity of injection site reactions when the maximum volume was increased to 3 ml. The median number of injections that patients received per visit decreased from 3 to 2 after the volume was increased, and there was a statistically significant reduction in the incidence of pain. Decreasing the number of injections also facilitates ease of rotation of injection sites and decreases pharmacy preparation time. This is the first time that injection site reaction data relating to injection volume have been reported for azacitidine. © The Author(s) 2015.
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by large levels of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
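As an illustration of the problem MEM addresses (not of the MEM method itself), the simulation below shows how additive Gaussian noise inflates the spread of the naive 1/(sample mean) estimator of an exponential rate; all parameter values are assumptions for the demo:

```python
import random
import statistics

random.seed(1)
lam = 2.0        # true exponential rate (assumed for the demo)
sigma = 0.5      # std. dev. of the additive Gaussian measurement noise
n, reps = 200, 1000

def estimate(noise_sd):
    # One replicate: the naive estimate 1/(sample mean) computed from n
    # exponential draws corrupted by additive zero-mean Gaussian noise.
    xs = [random.expovariate(lam) + random.gauss(0.0, noise_sd) for _ in range(n)]
    return 1.0 / statistics.fmean(xs)

clean_sd = statistics.stdev([estimate(0.0) for _ in range(reps)])
noisy_sd = statistics.stdev([estimate(sigma) for _ in range(reps)])
print(round(clean_sd, 3), round(noisy_sd, 3))
```

The naive estimator stays roughly centered (the noise has zero mean), but its spread grows markedly with the noise level, which is the degradation a noise-aware method aims to mitigate.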
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of applications of maximum entropy production to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O_{3}) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O_{3} during 2005 to 2009 reveal a distinct, persistent O_{3} maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O_{3} observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O_{3} maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The O_{3} maximum is dominated by O_{3} production driven by lightning nitrogen oxides (NO_{x}) emissions, which account for 62% of the tropospheric column O_{3} in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O_{3} maximum are rather small. O_{3} production in the lightning outflow from Central Africa and South America peaks in May and is directly responsible for the O_{3} maximum over the western ESIO, while the lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O_{3} maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008: the lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
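For the single-complex-tone case, the ML frequency estimate maximizes the periodogram, a multimodal cost that a fine grid search can handle. A minimal sketch with assumed signal parameters (sample count, frequency, noise level are all made up for illustration):

```python
import cmath
import math
import random

random.seed(0)
n = 64
f_true = 0.23    # assumed normalized frequency, cycles/sample

# Single complex tone in additive white Gaussian noise.
x = [cmath.exp(2j * math.pi * f_true * t)
     + complex(random.gauss(0, 0.1), random.gauss(0, 0.1))
     for t in range(n)]

def periodogram(f):
    # ML cost for a single complex tone: |sum_t x[t] e^{-j 2 pi f t}|^2.
    s = sum(x[t] * cmath.exp(-2j * math.pi * f * t) for t in range(n))
    return abs(s) ** 2

# The cost is multimodal in f, so scan a fine grid and take the global peak.
grid = [k / 2048 for k in range(2048)]
f_hat = max(grid, key=periodogram)
print(round(f_hat, 4))
```

The convex-relaxation approach in the abstract is a way to avoid exactly this brute-force search; the sketch only shows why the underlying cost is hard for local optimizers.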
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
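The geometric core of hybrid TOA/AOA location can be sketched without the AML machinery: a TOA range and an AOA bearing from a single base station already fix a position, and fixes from two base stations can be combined. A noise-free toy example with assumed coordinates:

```python
import math

# Hypothetical geometry: two base stations and a mobile at (300, 400) m.
bs = [(0.0, 0.0), (1000.0, 0.0)]
mobile = (300.0, 400.0)
c = 3.0e8  # propagation speed, m/s

def measure(b):
    # Noise-free TOA (range/c) and AOA (bearing) seen from base station b.
    dx, dy = mobile[0] - b[0], mobile[1] - b[1]
    return math.hypot(dx, dy) / c, math.atan2(dy, dx)

def locate(b, toa, aoa):
    # A single TOA/AOA pair already fixes the position.
    d = toa * c
    return b[0] + d * math.cos(aoa), b[1] + d * math.sin(aoa)

fixes = [locate(b, *measure(b)) for b in bs]
x = sum(f[0] for f in fixes) / len(fixes)
y = sum(f[1] for f in fixes) / len(fixes)
print(round(x, 1), round(y, 1))
```

With noisy measurements the per-station fixes disagree, and the AML estimator replaces this simple average with a likelihood-based combination.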
Spectroscopic gamma camera for use in high dose environments
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Kometani, Yutaka; Suzuki, Yasuhiko; Umegaki, Kikuo
2016-06-01
We developed a pinhole gamma camera to measure distributions of radioactive material contaminants and to identify radionuclides in extraordinarily high dose regions (1000 mSv/h). The developed gamma camera is characterized by: (1) tolerance of high-dose-rate environments; (2) high spatial and spectral resolution for identifying unknown contaminating sources; and (3) good usability for being carried on a robot and remotely controlled. These are achieved by using a compact pixelated detector module with CdTe semiconductors, efficient shielding, and a fine-resolution pinhole collimator. The gamma camera weighs less than 100 kg; its field of view is an 8 m square at a distance of 10 m, and its image is divided into 256 (16×16) pixels. From the laboratory test, we found the energy resolution at the 662 keV photopeak was 2.3% FWHM, which is sufficient to identify the radionuclides. We found that the count rate per background dose rate was 220 cps h/mSv and the maximum count rate was 300 kcps, so the maximum dose rate of the environment in which the gamma camera can be operated was calculated as 1400 mSv/h. We investigated the reactor building of Unit 1 at the Fukushima Dai-ichi Nuclear Power Plant using the gamma camera and could identify an unknown contaminating source in a dose rate environment as high as 659 mSv/h.
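The quoted operating limit follows directly from the stated sensitivity and saturation count rate:

```python
# Sensitivity and saturation figures quoted in the abstract.
sensitivity = 220.0          # cps per (mSv/h) of background dose rate
max_count_rate = 300_000.0   # cps (300 kcps saturation)

max_dose_rate = max_count_rate / sensitivity   # mSv/h
print(round(max_dose_rate))   # 1364, which the authors round to ~1400 mSv/h
```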
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum solution of the model was exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the frequency distribution of maximum entropy is equivalent to the Hardy-Weinberg equilibrium distribution at one locus. They further assumed that the frequency distribution of maximum entropy was equivalent to all genetic equilibrium distributions. This is incorrect, however: the maximum entropy distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
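The one-locus claim can be checked numerically: maximizing the entropy of the four ordered genotypes (AA, Aa, aA, aa) at fixed allele frequency p recovers the Hardy-Weinberg heterozygosity 2p(1-p). A grid-search sketch (the ordered-genotype entropy is an assumption of this illustration, not necessarily the cited model's exact formulation):

```python
import math

def entropy_max_heterozygosity(p, steps=200_000):
    """Grid-search the heterozygote frequency h maximizing the entropy of the
    four ordered genotypes (AA, Aa, aA, aa) under allele frequency p."""
    q = 1.0 - p
    hi = 2.0 * min(p, q)               # feasible range for h is [0, hi]
    best_h, best_ent = 0.0, -1.0
    for k in range(1, steps):
        h = hi * k / steps
        fAA, faa = p - h / 2, q - h / 2
        if fAA <= 0.0 or faa <= 0.0:
            continue
        # Ordered-genotype entropy: heterozygotes split evenly into Aa and aA.
        ent = -(fAA * math.log(fAA) + h * math.log(h / 2) + faa * math.log(faa))
        if ent > best_ent:
            best_ent, best_h = ent, h
    return best_h

p = 0.3
print(round(entropy_max_heterozygosity(p), 4), round(2 * p * (1 - p), 4))
```

Setting the derivative of this entropy to zero gives (h/2)² = fAA·faa, which is exactly the Hardy-Weinberg condition.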
Determining the optimal dose in the development of anticancer agents.
Mathijssen, Ron H J; Sparreboom, Alex; Verweij, Jaap
2014-05-01
Identification of the optimal dose remains a key challenge in drug development. For cytotoxic drugs, the standard approach is based on identifying the maximum tolerated dose (MTD) in phase I trials and incorporating it into subsequent trials. However, this strategy does not take into account important aspects of clinical pharmacology. For targeted agents, the dose-effect relationships from preclinical studies are less obvious, and it is important to change the way these agents are developed to avoid recommending drug doses for different populations without evidence of differential antitumour effects in different diseases. The use of expanded cohorts in phase I trials to better define the MTD and refine dose optimization should be further explored, together with a focus on efficacy rather than toxicity-based predictions. Another key consideration in dose optimization is interindividual pharmacokinetic variability. High variability in intra-individual pharmacokinetics has been observed for many orally administered drugs, especially those with low bioavailability, which might complicate identification of dose-effect relationships. End-organ dysfunction, interactions with other prescription drugs, herbal supplements, adherence, and food intake can all influence pharmacokinetics. It is important that these variables be identified during early clinical trials and considered in the development of further phase II and subsequent large-scale phase III studies.
Quantification of Proton Dose Calculation Accuracy in the Lung
Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Daartz, Juliane; Dowdell, Stephen; Ruggieri, Thomas; Sharp, Greg; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States)
2014-06-01
Purpose: To quantify the accuracy of a clinical proton treatment planning system (TPS) as well as Monte Carlo (MC)–based dose calculation through measurements and to assess the clinical impact in a cohort of patients with tumors located in the lung. Methods and Materials: A lung phantom and ion chamber array were used to measure the dose to a plane through a tumor embedded in the lung, and to determine the distal fall-off of the proton beam. Results were compared with TPS and MC calculations. Dose distributions in 19 patients (54 fields total) were simulated using MC and compared to the TPS algorithm. Results: MC increased dose calculation accuracy in lung tissue compared with the TPS and reproduced dose measurements in the target to within ±2%. The average difference between measured and predicted dose in a plane through the center of the target was 5.6% for the TPS and 1.6% for MC. MC recalculations in patients showed a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. For large tumors, MC also predicted consistently higher V5 and V10 to the normal lung, because of a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target could show large deviations, although this effect was highly patient specific. Range measurements showed that MC can reduce range uncertainty by a factor of ∼2: the average (maximum) difference to the measured range was 3.9 mm (7.5 mm) for MC and 7 mm (17 mm) for the TPS in lung tissue. Conclusion: Integration of Monte Carlo dose calculation techniques into the clinic would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. In addition, the ability to confidently reduce range margins would benefit all patients by potentially lowering toxicity.
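Plane-wise agreement between calculated and measured dose is conventionally scored with the gamma index. A minimal 1D sketch with made-up dose profiles (not the study's evaluation code), using a 3%/3 mm criterion:

```python
import math

def gamma_index(ref, ref_pos, eval_dose, eval_pos, dd=0.03, dta=3.0):
    """1D gamma: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over the evaluated curve."""
    gammas = []
    for r, rp in zip(ref, ref_pos):
        g = min(math.sqrt(((e - r) / (dd * max(ref))) ** 2
                          + ((ep - rp) / dta) ** 2)
                for e, ep in zip(eval_dose, eval_pos))
        gammas.append(g)
    return gammas

pos = [float(i) for i in range(11)]                       # position, mm
ref = [100, 100, 100, 98, 90, 70, 40, 15, 5, 2, 1]        # reference dose (%)
ev  = [100, 100, 99, 97, 88, 68, 42, 17, 6, 2, 1]         # evaluated dose (%)
g = gamma_index(ref, pos, ev, pos)
pass_rate = sum(x <= 1.0 for x in g) / len(g)
print(round(pass_rate, 2))
```

Points with gamma ≤ 1 pass the criterion; the pass rate over a plane is the usual summary statistic for TPS-versus-measurement comparisons like the one described above.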
Ng, Pamela; Incekol, Diana; Lee, Roy; Paisley, Emma; Dara, Celina; Brandle, Ian; Kaufman, Marina; Chen, Christine; Trudel, Suzanne; Tiedemann, Rodger; Reece, Donna; Kukreti, Vishal
2015-08-01
Subcutaneous injection is now commonly used as a standard for bortezomib administration. The bortezomib (Velcade®) product monograph recommends that intravenous injections be prepared at a concentration of 1 mg/mL, while subcutaneous injections may be prepared at a concentration of 2.5 mg/mL. Many institutions and subcutaneous administration guidelines use 2 mL as the maximum volume for subcutaneous injection. Using 2 mL as the maximum volume for injection would mean that many patients receiving bortezomib will receive two injections during each visit with common dosing parameters. In this prospective study evaluating a change to subcutaneous administration, bortezomib 1 mg/mL was administered subcutaneously at a higher maximum of 3 mL per injection site. For 57 individual patients, 339 doses were administered. Skin reactions were noted in 42% with all reactions being Grade 1 or 2. Patients tolerated subcutaneous injections well and only four patients were switched back to intravenous route. This is the first time that subcutaneous bortezomib of a volume up to a maximum of 3 mL (bortezomib 3 mg) per injection site has been reported. This higher single dose is well tolerated with limited skin reactions, no significant hypotension and facilitates ease of administration with only 5 patients needing two injections per visit. If the maximum volume for injection was kept at 2 mL, a total of 46 patients would have received two injections per visit. © The Author(s) 2014.
NOTE: Variations in skin dose associated with linac bed material at 6 MV x-ray energy
Butson, Martin J.; Cheung, Tsang; Yu, Peter K. N.; Webb, Belinda
2002-01-01
Treatment with radiotherapy x-rays at 6 MV energy produces a build-up effect whereby a smaller dose is delivered to the patient's skin compared to the tumour dose. With anterior fields, no material is normally placed over the patient's skin, thus providing the maximum skin sparing possible with the beam configuration used. A posterior beam normally passes through the treatment couch top and increases the dose delivered to the patient's skin. Both the Mylar sheeting and the support ribbing material produce a significant increase in skin dose. Measurements at 6 MV have shown that the basal cell layer dose can be increased by up to 51% of maximum dose with a carbon fibre/Mylar couch and by 28% for a tennis string/Mylar couch when compared to anterior beams. These values are associated with the position of the carbon fibre or tennis string ribbing. Dermal layer doses are increased by up to 30 and 24% of maximum dose for carbon fibre and tennis string, respectively. These values include a combination of dose due to the support ribbing and the Mylar sheeting. Due to the variability in patient positioning on the couch top, these increases would be spread out over the skin surface producing an average increase per unit area at the basal layer of up to 32 and 20% of the maximum, respectively, for carbon fibre and tennis string couch tops and 21 and 12% at the dermal layer compared to dose at Dmax.
Organ dose measurement using Optically Stimulated Luminescence Detector (OSLD) during CT examination
Yusuf, Muhammad; Alothmany, Nazeeh; Abdulrahman Kinsara, Abdulraheem
2017-10-01
This study provides detailed information regarding the imaging doses to radiosensitive patient organs from a kilovoltage computed tomography (CT) scan procedure using OSLDs. The study reports discrepancies between the measured dose and the dose calculated with the ImPACT scan tool, as well as a comparison with the dose from a chest X-ray radiography procedure. OSLDs were inserted in several organs, including the brain, eyes, thyroid, lung, heart, spinal cord, breast, spleen, stomach, liver and ovaries, of the RANDO phantom. Standard clinical scanning protocols were used for each individual site, including the brain, thyroid, lung, breast, stomach, liver and ovaries. The measured absorbed doses were then compared with the simulated doses obtained from the ImPACT scan. Additionally, the equivalent doses for each organ were calculated and compared with the dose from a chest X-ray radiography procedure. Absorbed organ doses of up to 17 mGy were measured by OSLD in the RANDO phantom, depending on the organ scanned and the scanning protocols used. A maximum difference of 9.82% was observed between the target organ dose measured by OSLD and the results from the ImPACT scan. The maximum equivalent organ dose measured during this experiment was equal to 99.899 times the equivalent dose from a chest X-ray radiography procedure. The discrepancies between the dose measured with the OSLD and the dose calculated with the ImPACT scan were within 10%. This report recommends the use of OSLDs for measuring absorbed organ doses during CT examination.
Real-time eye lens dose monitoring during cerebral angiography procedures
Safari, M.J.; Wong, J.H.D.; Kadir, K.A.A.; Ng, K.H. [University of Malaya, Department of Biomedical Imaging, Faculty of Medicine, Kuala Lumpur (Malaysia); University of Malaya, University of Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); Thorpe, N.K.; Cutajar, D.L.; Petasecca, M.; Lerch, M.L.F.; Rosenfeld, A.B. [University of Wollongong, Centre for Medical Radiation Physics (CMRP), Wollongong, NSW (Australia)
2016-01-15
To develop a real-time dose-monitoring system to measure the patient's eye lens dose during neuro-interventional procedures. Radiation dose received at left outer canthus (LOC) and left eyelid (LE) were measured using Metal-Oxide-Semiconductor Field-Effect Transistor dosimeters on 35 patients who underwent diagnostic or cerebral embolization procedures. The radiation dose received at the LOC region was significantly higher than the dose received by the LE. The maximum eye lens dose of 1492 mGy was measured at LOC region for an AVM case, followed by 907 mGy for an aneurysm case and 665 mGy for a diagnostic angiography procedure. Strong correlations (shown as R{sup 2}) were observed between kerma-area-product and measured eye doses (LOC: 0.78, LE: 0.68). Lateral and frontal air-kerma showed strong correlations with measured dose at LOC (AK{sub L}: 0.93, AK{sub F}: 0.78) and a weak correlation with measured dose at LE. A moderate correlation was observed between fluoroscopic time and dose measured at LE and LOC regions. The MOSkin dose-monitoring system represents a new tool enabling real-time monitoring of eye lens dose during neuro-interventional procedures. This system can provide interventionalists with information needed to adjust the clinical procedure to control the patient's dose. (orig.)
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds, Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds, Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The testing of explosions of wood dust clouds showed that the maximum pressure, 7.95 bar, was reached at a concentration of 450 g/m³; the fastest rise of pressure, 68 bar/s, was also observed at a concentration of 450 g/m³.
Dose-response meta-analysis of differences in means
Alessio Crippa
2016-08-01
Background: Meta-analytical methods are frequently used to combine dose-response findings expressed in terms of relative risks. However, no methodology has been established for results summarized in terms of differences in means of quantitative outcomes. Methods: We propose a two-stage approach. A flexible dose-response model is estimated within each study (first stage), taking into account the covariance of the data points (mean differences, standardized mean differences). Parameters describing the study-specific curves are then combined using a multivariate random-effects model (second stage) to address heterogeneity across studies. Results: The method is fairly general and can accommodate a variety of parametric functions. Compared to traditional non-linear models (e.g. Emax, logistic), spline models do not assume any pre-specified dose-response curve. Spline models allow inclusion of studies with a small number of dose levels, and almost any shape, even non-monotonic ones, can be estimated using only two parameters. We illustrate the method using dose-response data arising from five clinical trials of an antipsychotic drug, aripiprazole, and improvement in symptoms in schizoaffective patients. Using the Positive and Negative Syndrome Scale (PANSS), pooled results indicated a non-linear association, with the maximum change in mean PANSS score, equal to 10.40 (95% confidence interval 7.48, 13.30), observed at 19.32 mg/day of aripiprazole. No substantial change in PANSS score was observed above this value. An estimated dose of 10.43 mg/day was found to produce 80% of the maximum predicted response. Conclusion: The described approach should be adopted to combine correlated differences in means of quantitative outcomes arising from multiple studies. Sensitivity analysis can be a useful tool to assess the robustness of the overall dose-response curve to different modelling strategies. A user-friendly R package has been developed to facilitate
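A quantity like "the dose producing 80% of the maximum predicted response" can be read off any fitted curve on a dense dose grid. A sketch using a hypothetical Emax curve (the paper itself fits splines; all parameter values here are made up for illustration):

```python
# Hypothetical fitted curve; an Emax form is used here only for illustration
# (e_max and ed50 are assumed values, not the paper's estimates).
def curve(d, e_max=10.5, ed50=3.0):
    return e_max * d / (ed50 + d)

doses = [i / 100 for i in range(0, 3001)]      # 0-30 mg/day grid
responses = [curve(d) for d in doses]
peak = max(responses)                          # maximum predicted response
d80 = next(d for d, r in zip(doses, responses) if r >= 0.8 * peak)
print(round(peak, 2), round(d80, 2))
```

The same grid scan works unchanged for a spline fit, since it only needs the fitted function evaluated pointwise.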
Organ Doses and Effective Doses in Pediatric Radiography: Patient-Dose Survey in Finland
Kiljunen, T.; Tietaevaeinen, A.; Parviainen, T.; Viitala, A.; Kortesniemi, M. (Radiation Practices Regulation, Radiation and Nuclear Safety Authority, Helsinki (Finland))
2009-01-15
Background: Use of the effective dose in diagnostic radiology permits the radiation exposure from diverse diagnostic procedures to be quantified. Fundamental knowledge of patient doses enhances the implementation of the 'as low as reasonably achievable' (ALARA) principle. Purpose: To provide comparative information on pediatric examination protocols and patient doses in skull, sinus, chest, abdominal, and pelvic radiography examinations. Material and Methods: 24 Finnish hospitals were asked to register pediatric examination data, including patient information and examination parameters and specifications. The total number of examinations in the study was 1916 (1426 chest, 228 sinus, 96 abdominal, 94 skull, and 72 pelvic examinations). Entrance surface doses (ESD) and dose-area products (DAP) were calculated retrospectively, or DAP meters were used. Organ doses and effective doses were determined using a Monte Carlo program (PCXMC). Results: There was considerable variation in examination protocols between hospitals, indicating large variations in patient doses. Mean effective doses for the different age groups ranged from 5 μSv to 14 μSv in skull and sinus examinations, from 25 μSv to 483 μSv in abdominal examinations, and from 6 μSv to 48 μSv in chest examinations. Conclusion: In chest and sinus examinations, the amount of data was extensive, allowing national pediatric diagnostic reference levels to be defined. Parameter selection in pediatric examination protocols should be harmonized in order to reduce patient doses and improve optimization.
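Effective dose is the tissue-weighted sum of organ equivalent doses, E = Σ_T w_T H_T. A sketch using a subset of the ICRP 103 tissue weighting factors and made-up organ equivalent doses (the weights are real; the dose values are illustrative only):

```python
# Subset of ICRP 103 tissue weighting factors.
w = {"lung": 0.12, "stomach": 0.12, "breast": 0.12, "thyroid": 0.04,
     "liver": 0.04, "bladder": 0.04, "skin": 0.01}
# Made-up organ equivalent doses (uSv) for illustration.
h_uSv = {"lung": 30.0, "stomach": 5.0, "breast": 25.0, "thyroid": 2.0,
         "liver": 6.0, "bladder": 1.0, "skin": 10.0}

# E = sum over tissues of w_T * H_T.
effective_uSv = sum(w[t] * h_uSv[t] for t in w)
print(round(effective_uSv, 3))
```

A full calculation would include all ICRP 103 tissues (the weights sum to 1); programs like PCXMC perform this summation from simulated organ doses.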
Sparks, R.B.; Stabin, M.G. [Oak Ridge Inst. for Science and Education, TN (United States)
1999-01-01
After administration of I-131 to a female patient, the possibility of radiation exposure of the embryo/fetus exists if the patient becomes pregnant while radioiodine remains in the body. Fetal radiation dose estimates for such cases were calculated. Doses were calculated for various maternal thyroid uptakes and time intervals between administration and conception, including euthyroid and hyperthyroid cases. The maximum fetal dose calculated was about 9.8E-03 mGy/MBq, which occurred with 100% maternal thyroid uptake and a one-week interval between administration and conception. Placental crossover of the small amount of radioiodine remaining 90 days after conception was also considered; such crossover could result in an additional fetal dose of 9.8E-05 mGy/MBq and a maximum fetal thyroid self-dose of 3.5E-04 mGy/MBq.
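The per-activity coefficients above scale linearly with administered activity. For example, under the worst case quoted (9.8E-03 mGy/MBq), a hypothetical therapy activity gives:

```python
# Worst-case coefficient from the abstract: 100% maternal thyroid uptake,
# conception one week after administration.
dose_coeff_mGy_per_MBq = 9.8e-3
administered_MBq = 5550.0   # hypothetical 150 mCi therapy activity (assumed)

fetal_dose_mGy = dose_coeff_mGy_per_MBq * administered_MBq
print(round(fetal_dose_mGy, 1))   # 54.4 mGy
```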
Dose response problems in carcinogenesis.
Crump, K S
1979-03-01
The estimation of risks from exposure to carcinogens is an important problem from the viewpoint of protection of human health. It also poses some very difficult dose-response problems. Two dose-response models may fit experimental data about equally well and yet predict responses that differ by many orders of magnitude at low doses. Mechanisms of carcinogenesis are not sufficiently understood for the shape of the dose-response curve at low doses to be satisfactorily predicted. Mathematical theories of carcinogenesis and statistical procedures can be of use with dose-response problems such as this and, in addition, can lead to a better understanding of the mechanisms of carcinogenesis. In this paper, mathematical dose-response models of carcinogenesis are considered, as well as various proposed procedures for estimating carcinogenic risks at low doses. Areas are suggested in which further work may be useful. These areas include experimental design problems, statistical procedures for use with time-to-occurrence data, and mathematical models that incorporate such biological features as pharmacokinetics of carcinogens, synergistic effects, DNA repair, susceptible subpopulations, and immune reactions.
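The central point, that models fitting equally well at experimental doses can diverge by orders of magnitude at low dose, is easy to reproduce numerically. A sketch with two toy models calibrated to the same 50%-response observation (model forms and dose values are illustrative assumptions):

```python
import math

# Two toy models calibrated to the same observation: 50% response at dose 1.0.
b = math.log(2.0)   # one-hit:    P(d) = 1 - exp(-b d)
c = math.log(2.0)   # quadratic:  P(d) = 1 - exp(-c d^2)

def one_hit(d):
    return 1.0 - math.exp(-b * d)

def quadratic(d):
    return 1.0 - math.exp(-c * d * d)

low = 1e-4          # a low, environmental-scale dose
ratio = one_hit(low) / quadratic(low)
print(f"{one_hit(low):.2e} {quadratic(low):.2e} ratio {ratio:.0f}")
```

Both curves pass through the same experimental point, yet at dose 1e-4 the linear one-hit model predicts roughly four orders of magnitude more risk than the quadratic model, which is why the low-dose extrapolation choice dominates the risk estimate.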
Implementation of spot scanning dose optimization and dose calculation for helium ions in Hyperion
Fuchs, Hermann, E-mail: hermann.fuchs@meduniwien.ac.at [Department of Radiation Oncology, Division of Medical Radiation Physics, Medical University of Vienna/AKH Vienna, Vienna 1090, Austria and Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, Vienna 1090 (Austria); Alber, Markus [Department for Oncology, Aarhus University Hospital, Aarhus 8000 (Denmark); Schreiner, Thomas [PEG MedAustron, Wiener Neustadt 2700 (Austria); Georg, Dietmar [Department of Radiation Oncology, Division of Medical Radiation Physics, Medical University of Vienna/AKH Vienna, Vienna 1090 (Austria); Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, Vienna 1090 (Austria); Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, Vienna 1090 (Austria)
2015-09-15
Purpose: Helium ions ({sup 4}He) may supplement current particle beam therapy strategies as they possess advantages in physical dose distribution over protons. To assess potential clinical advantages, a dose calculation module accounting for relative biological effectiveness (RBE) was developed and integrated into the treatment planning system Hyperion. Methods: Current knowledge on the RBE of {sup 4}He together with linear energy transfer considerations motivated an empirical depth-dependent “zonal” RBE model. In the plateau region, an RBE of 1.0 was assumed, followed by an increasing RBE up to 2.8 at the Bragg-peak region, which was then kept constant over the fragmentation tail. To account for a variable proton RBE, the same model concept was also applied to protons with a maximum RBE of 1.6. Both RBE models were added to a previously developed pencil beam algorithm for physical dose calculation and included in the treatment planning system Hyperion. The implementation was validated against Monte Carlo simulations within a water phantom using γ-index evaluation. The potential benefits of {sup 4}He based treatment plans were explored in a preliminary treatment planning comparison against protons for four treatment sites, i.e., a prostate, a base-of-skull, a pediatric, and a head-and-neck tumor case. Separate treatment plans taking into account physical dose calculation only or using biological modeling were created for protons and {sup 4}He. Results: Comparison of Monte Carlo and Hyperion calculated doses resulted in a γ{sub mean} of 0.3, with 3.4% of the values above 1 and a γ{sub 1%} of 1.5 or better. Treatment plan evaluation showed comparable planning target volume coverage for both particles, with slightly increased coverage for {sup 4}He. Organ at risk (OAR) doses were generally reduced using {sup 4}He, some by more than 30%. Improvements of {sup 4}He over protons were more pronounced for treatment plans taking biological effects into account. All
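A minimal sketch of such a depth-dependent "zonal" RBE is given below. The plateau/peak transition point at 80% of the Bragg-peak depth is a hypothetical choice for illustration; the abstract gives only the zone values (1.0 in the plateau, rising to 2.8 at the Bragg peak, constant over the tail), not the zone boundaries.

```python
def zonal_rbe(depth, bragg_depth, rbe_max=2.8):
    # Empirical "zonal" RBE sketch, per the abstract: 1.0 in the plateau,
    # rising toward the Bragg peak, then constant over the fragmentation tail.
    plateau_end = 0.8 * bragg_depth   # assumed transition point (not from the paper)
    if depth <= plateau_end:
        return 1.0
    if depth <= bragg_depth:
        # linear ramp from 1.0 up to rbe_max across the assumed ramp region
        frac = (depth - plateau_end) / (bragg_depth - plateau_end)
        return 1.0 + frac * (rbe_max - 1.0)
    return rbe_max   # fragmentation tail

# biological dose = physical dose x RBE at each depth
bio_dose = 2.0 * zonal_rbe(100.0, bragg_depth=100.0)   # at the Bragg peak
```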
Yasso, B; Li, Y; Alexander, A; Mel'nikova, N B; Mukhina, I V
2014-01-01
The relative bioavailability and penetration intensity of glucosamine sulfate after oral, injection and topical administration were compared in Sprague-Dawley rats, using the dosage form Hondroxid Maximum, a cream containing a micellar system for transdermal delivery of glucosamine. Based on the pharmacokinetic profiles of glucosamine in rat blood plasma after application of Hondroxid Maximum cream at 400 mg/kg three times daily for 1 week, compared with a single injection of 4% glucosamine sulfate solution at 400 mg/kg, the relative bioavailability was found to be 61.6%. The calculated penetration rate of glucosamine into plasma through rat skin over 4 hours was 26.9 μg/cm2 x h, and 4.12% of a single cream dose penetrated the skin into plasma within 4 hours. Comparative analysis of literature and experimental data, and calculations based on them, suggests that Hondroxid Maximum cream, used in accordance with its instructions, can provide an average glucosamine concentration in the synovial fluid of an inflamed joint in the range of 0.7-1.5 μg/ml, much higher than the concentration of endogenous glucosamine in human synovial joint fluid (0.02-0.07 μg/ml). Theoretical calculations taking the experimental data into account show that Hondroxid Maximum can reach the bioavailability level of modern injection forms and exceed that of modern oral forms of glucosamine by up to 2 times.
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
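The benefit of per-module tracking can be illustrated with a toy model that treats each TEG module as a Thevenin source (V = Voc - I*R) with mismatched parameters; all values below are hypothetical and not from the paper.

```python
# Hypothetical mismatched TEG modules as Thevenin sources: (Voc in V, R_int in ohm)
modules = [(4.0, 1.0), (3.0, 1.2), (2.0, 1.5), (3.5, 0.9)]

# Individual MPPT: each module sits at its own MPP, I_i = Voc_i / (2 R_i),
# delivering P_i = Voc_i^2 / (4 R_i)
p_individual = sum(voc ** 2 / (4.0 * r) for voc, r in modules)

# Series string: one shared current; total P(I) = sum((Voc_i - I R_i) * I),
# maximized at I* = sum(Voc) / (2 sum(R)), so P* = (sum Voc)^2 / (4 sum R)
voc_s = sum(v for v, _ in modules)
r_s = sum(r for _, r in modules)
p_series = voc_s ** 2 / (4.0 * r_s)
```

For these numbers the individually-tracked modules deliver roughly 17% more power than the series string, in line with the magnitude of gains reported in the abstract.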
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic and the diffusive heat transport regions. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences Società Italiana di Fisica Springer-Verlag 2013.
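The quantities involved can be sketched numerically; the reservoir temperatures below are illustrative. Here eta_CA is the Curzon-Ahlborn efficiency at maximum power, and eta_C / (2 - eta_C) is the low-dissipation upper bound proposed by Esposito et al. in the cited reference.

```python
import math

Th, Tc = 500.0, 300.0                    # illustrative reservoir temperatures (K)
eta_C = 1.0 - Tc / Th                    # Carnot efficiency (ultimate limit)
eta_CA = 1.0 - math.sqrt(Tc / Th)        # Curzon-Ahlborn efficiency at maximum power
eta_plus = eta_C / (2.0 - eta_C)         # proposed upper bound at maximum power
```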
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits for estimating species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Ultrafine Particles in Residential Indoors and Doses Deposited in the Human Respiratory System
Maurizio Manigrasso
2015-09-01
Indoor aerosol sources may significantly contribute to the daily dose of particles deposited in the human respiratory system. It is therefore important to characterize the aerosols deriving from operations currently performed in an indoor environment and to estimate the relevant particle respiratory doses. To this aim, aerosols from indoor combustive and non-combustive sources were characterized in terms of aerosol size distributions, and the relevant deposited doses were estimated as a function of time, particle diameter and deposition site in the respiratory system. The estimated doses were almost entirely made up of ultrafine particles. The maximum contribution was due to particles deposited in the alveolar region between the 18th and the 21st airway generation. When cooking operations were performed, respiratory doses per unit time were about ten-fold higher than the relevant indoor background dose. Such doses were even higher than those associated with outdoor traffic aerosol.
Akahane, Keiichi; Yonai, Shunsuke; Fukuda, Shigekazu; Miyahara, Nobuyuki; Yasuda, Hiroshi; Iwaoka, Kazuki; Matsumoto, Masaki; Fukumura, Akifumi; Akashi, Makoto
2013-04-01
The Great East Japan Earthquake and the subsequent tsunamis caused the Fukushima Dai-ichi Nuclear Power Plant (NPP) accident. The National Institute of Radiological Sciences (NIRS) developed an external dose estimation system for Fukushima residents, which is being used in the Fukushima health management survey. The doses are obtained by superimposing the residents' behavior data on dose-rate maps. To grasp the doses before the survey data became available, 18 evacuation patterns of the residents were assumed by considering the actual evacuation information. The doses of residents from the deliberate evacuation area were relatively higher than those from the area within a 20 km radius. The estimated doses varied from around 1 to 6 mSv for residents evacuated from representative places in the deliberate evacuation area. The maximum dose over the 18 evacuation patterns was estimated to be 19 mSv.
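The superposition of behavior data on dose-rate maps reduces to a weighted sum, which can be sketched as follows; all locations, time periods and dose-rate values below are invented for illustration and are not from the survey.

```python
# Hypothetical time-dependent ambient dose-rate map: uSv/h by (location, period)
dose_rate = {
    ("town_A", "week1"): 30.0,
    ("town_A", "week2"): 20.0,
    ("shelter_B", "week2"): 2.0,
}

# Hypothetical behavior record for one resident: (location, period, hours spent)
behavior = [
    ("town_A", "week1", 168),
    ("town_A", "week2", 24),
    ("shelter_B", "week2", 144),
]

# External dose = sum over the movement history of (dose rate x time spent)
total_uSv = sum(dose_rate[(loc, t)] * h for loc, t, h in behavior)
total_mSv = total_uSv / 1000.0
```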
刘雄; 田昌炳; 万英杰; 王方; 徐秋枫
2015-01-01
For fractured tight oil reservoirs, a semi-analytical two-region composite model for deliverability evaluation of volume-fractured vertical wells was established, based on the Laplace transform and the Stehfest numerical inversion algorithm. The inner region contains a finite number of infinite-conductivity hydraulic fractures, with linear flow assumed in each fracture; the outer region is the classical Warren-Root dual-porosity system with radial flow. Sensitivities of parameters such as the diffusivity coefficient ratio α, fracture conductivity ratio β, storativity ratio ω, cross-flow coefficient λ and stimulated radius reD were analyzed, and production decline type curves were plotted. The results show that the production of a volume-fractured vertical well follows an "L"-shaped decline; a larger diffusivity coefficient ratio yields higher early-time production; the fracture conductivity ratio influences productivity over the whole production period, with larger ratios giving higher single-well production together with pronounced composite-boundary flow behavior; the storativity ratio and cross-flow coefficient govern, respectively, the magnitude and the timing of matrix-fracture cross-flow in the dual-porosity system; and more primary hydraulic fractures or higher stimulated permeability in the inner region make the composite-boundary flow more evident and increase productivity, indicating that both the degree and the volume of stimulation matter and should be jointly optimized. A field example verified the applicability and practicality of the model for multiply refractured vertical wells in fractured closed-boundary reservoirs and for volume-fractured vertical wells in fractured tight oil reservoirs.
Radionuclide tumor therapy with ultrasound contrast microbubbles
Wamel, van Annemieke; Bouakaz, Ayache; Bernard, Bert; Cate, ten Folkert; Jong, de Nico
2004-01-01
Radionuclides have been shown to be effective in tumour therapy. However, the side effects determine the maximum deliverable dose. Recently, it has been demonstrated that cells can be permeabilised through sonoporation using ultrasound and contrast microbubbles. The use of sonoporation in treatment of tu
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods based on the precursor technique have been developed and found successful for forecasting solar activity. Taking the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of the annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than those of cycles 21–22. Further, we estimate the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7, and the average of the annual mean sunspot number during the descending phase of cycle 24 to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results for finding balanced Boolean functions with maximum algebraic immunity. By swapping the values of two bits, and then generalizing the result to swapping some pairs of bits, of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. An enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) that achieve maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
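A minimal sketch of the symmetric majority function (the starting point of the construction, known to achieve maximum algebraic immunity for odd n) and of a balance-preserving swap of two output bits follows, for n = 5. Whether a particular swap also preserves maximum algebraic immunity depends on the pair chosen, per the paper; the code only demonstrates the balancedness bookkeeping.

```python
from itertools import product

n = 5
# Symmetric majority function: f(x) = 1 iff the Hamming weight of x is >= ceil(n/2)
f = {x: int(sum(x) >= (n + 1) // 2) for x in product((0, 1), repeat=n)}

# Swapping the outputs on one input with f = 0 and one with f = 1 leaves the
# number of ones unchanged, so the function stays balanced.
x0 = (0, 0, 0, 0, 0)   # f(x0) = 0
x1 = (1, 1, 1, 1, 1)   # f(x1) = 1
f[x0], f[x1] = f[x1], f[x0]

balanced = sum(f.values()) == 2 ** (n - 1)
```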
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2{sup 18} data sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2{sup 18} points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2{sup 10}. It is found that over this region the MEM estimate with 2{sup 16} data samples is in close agreement with the FFT estimate using 2{sup 18} samples.
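A compact implementation of Burg's recursion, a generic textbook version rather than the authors' code, illustrates the sharp line spectra the method yields from short records; the test signal and model order below are illustrative.

```python
import numpy as np

def burg_ar(x, order):
    # Burg's maximum-entropy AR fit: returns the AR polynomial a (a[0] = 1)
    # and the residual power e, minimizing forward + backward prediction error.
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    for m in range(order):
        # reflection coefficient from the forward/backward error cross-power
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        # Levinson update of the AR polynomial
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k * k
        if m < order - 1:
            f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, e

# sketch: resolve a noisy sinusoid; MEM PSD is e / |A(e^{jw})|^2
rng = np.random.default_rng(1)
n = np.arange(256)
x = np.cos(2 * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(256)
a, e = burg_ar(x, order=8)
psd = e / np.abs(np.fft.rfft(a, 1024)) ** 2
f_peak = np.argmax(psd) / 1024.0        # should sit near the true 0.1 cycles/sample
```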
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter that produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem in which the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization that maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when the data are processed with the optimal polarimetric matched filter.
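The eigenvalue formulation can be sketched as follows, with random Hermitian positive-definite matrices standing in for the two class covariances; the maximizer of the Rayleigh-quotient contrast ratio is the dominant eigenvector of the generalized problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_cov(n=3):
    # hypothetical class covariance: Hermitian positive definite by construction
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m @ m.conj().T + n * np.eye(n)

A, B = rand_cov(), rand_cov()   # stand-ins for the two scattering-class covariances

# Contrast ratio r(w) = (w^H A w) / (w^H B w); its maximizer is the eigenvector
# of B^{-1} A with the largest eigenvalue, which equals the maximum contrast.
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
i = int(np.argmax(vals.real))
w_opt, r_max = vecs[:, i], vals.real[i]   # optimal filter and maximum contrast

def contrast(w):
    return ((w.conj() @ A @ w) / (w.conj() @ B @ w)).real
```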
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet analysis in MATLAB, and the changes of the energy distribution curves at different frequency bands were obtained. Finally, the law governing how the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases and the dominant frequency bands of blasting vibration signals tend towards low frequency, and that blasting vibration does not depend on the maximum decking charge.
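The per-band energy computation can be sketched with a hand-rolled wavelet packet tree. The paper used MATLAB's wavelet packet tools; the Haar wavelet is used here only because its orthonormality makes the energy bookkeeping exact, and the test signal is illustrative.

```python
import numpy as np

def haar_step(x):
    # one orthonormal Haar analysis step: half-band lowpass (a) and highpass (d)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def packet_energies(x, levels):
    # full wavelet-packet tree: split every band again at every level, then
    # report the signal energy captured in each terminal frequency band
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_step(b)]
    return np.array([np.sum(b ** 2) for b in bands])

t = np.arange(1024)
x = np.sin(2 * np.pi * 4 * t / 1024)      # low-frequency stand-in "vibration" signal
energies = packet_energies(x, levels=3)    # energy in 8 frequency bands
```

Because the Haar transform is orthonormal, the band energies sum exactly to the total signal energy, and a low-frequency signal concentrates its energy in the lowest band.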
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
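A minimal sketch of forming a hybrid-sum sequence from two maximum length sequences and tabulating subsequence weights follows; the feedback tap positions are chosen as standard primitive examples, and the window length is illustrative.

```python
def lfsr(taps_mask, state, nbits, n):
    # Fibonacci LFSR over GF(2): output the LSB, feed the parity of the tapped
    # bits back into the MSB. A primitive feedback polynomial yields a maximum
    # length sequence of period 2**nbits - 1.
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = bin(state & taps_mask).count("1") & 1   # parity of tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

s1 = lfsr(0b1001, 0b1000, 4, 70)    # taps 0 and 3: primitive, period 15
s2 = lfsr(0b00101, 0b10000, 5, 70)  # taps 0 and 2: primitive, period 31

# hybrid-sum sequence: modulo-two sum of the two m-sequences (period 15 * 31)
hybrid = [a ^ b for a, b in zip(s1, s2)]

# weights of length-8 subsequences, the quantity whose moments the paper studies
L = 8
weights = [sum(hybrid[i:i + L]) for i in range(len(hybrid) - L + 1)]
mean_w = sum(weights) / len(weights)
```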
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques to power wireless sensor networks. The motivation for this technology is the sensors' limited operation time, which results from the finite capacity of batteries, together with the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, algorithms can be developed to find the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
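One of the simplest such techniques, the classic perturb-and-observe tracker, can be sketched against a toy concave power curve; the curve below is not a real photovoltaic model, just an illustrative stand-in with its maximum at 17 V.

```python
def perturb_and_observe(power_at, v0, step=0.1, iters=60):
    # Classic P&O hill-climbing: keep perturbing the operating voltage in the
    # same direction while output power rises; reverse direction when it drops.
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

# toy concave P-V curve with its maximum power point at 17 V (illustrative)
power = lambda v: 100.0 - (v - 17.0) ** 2
v_mpp = perturb_and_observe(power, v0=14.0)
```

As is characteristic of P&O, the operating point converges to, and then oscillates within one step size of, the maximum power point.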
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, such as obstacle avoidance, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could best be applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on the automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from differential geometry, for example the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 outlines some science domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that, of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime has proven difficult. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
In this note, we study the existence of a strong maximum principle for the nonlocal operator $$ \mathcal{M}[u](x) := \int_{G} J(g)\, u(x*g^{-1})\, d\mu(g) - u(x), $$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n,+)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) problem is formulated to solve this. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.
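The multimodal ML cost mentioned here is, for a single complex tone in white Gaussian noise, the periodogram; a brute-force grid search over it (a sketch of the baseline problem, not the paper's semidefinite relaxation — the signal parameters below are invented) looks like:

```python
import numpy as np

def ml_tone_frequency(x, n_grid=4096):
    """Grid search over the periodogram, which is the ML cost for a
    single complex tone in white Gaussian noise.  The cost is
    multimodal in f, hence the global (not local) maximization."""
    N = len(x)
    freqs = np.linspace(-0.5, 0.5, n_grid, endpoint=False)
    n = np.arange(N)
    # Periodogram: |sum_n x[n] exp(-j 2 pi f n)|^2 evaluated on a grid
    P = np.abs(np.exp(-2j * np.pi * np.outer(freqs, n)) @ x) ** 2
    return freqs[np.argmax(P)]

# Synthetic single complex tone at an invented frequency, plus noise
rng = np.random.default_rng(0)
f_true, N = 0.12, 64
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
f_hat = ml_tone_frequency(x)
```

Dense gridding sidesteps the local maxima at the cost of computation, which is exactly the inefficiency the paper's convex relaxation is designed to avoid.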
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
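A cup anemometer's transfer function is linear in pulse frequency, so a small change of coefficients propagates directly into the reported wind speed. A minimal sketch; the slope and offset values below are invented for illustration and are not the Maximum No. 40's published calibrations:

```python
def wind_speed(freq_hz, slope, offset):
    """Linear anemometer transfer function: v = slope * f + offset."""
    return slope * freq_hz + offset

# Two hypothetical transfer functions -- illustrative only, NOT the
# published Maximum No. 40 coefficients -- evaluated at 20 Hz to show
# how a modest coefficient change shifts the reported speed.
v_a = wind_speed(20.0, 0.765, 0.35)   # e.g. one wind-tunnel fit
v_b = wind_speed(20.0, 0.723, 0.27)   # e.g. an alternative default
pct_diff = 100.0 * (v_a - v_b) / v_b
```

With these invented coefficients the two functions disagree by roughly 6% at 20 Hz, the same order as the 4.6-7.6% discrepancies the paper reports between calibration methods and logger defaults.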
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from that of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
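The contrast between maximum-likelihood (ground-state) decoding and finite-temperature maximum-entropy (Boltzmann-marginal) decoding can be sketched on a toy Ising decoding problem small enough to enumerate exactly. The field and coupling values below are invented, not the paper's Josephson-array setup; in this tiny, low-noise example the two decoders agree, whereas the paper's point is that at realistic noise levels the finite-temperature marginals can do slightly better.

```python
import itertools, math

# Toy decoding sketch: recover 3 spins from a noisy observation using
# an Ising cost E(s) = -sum_i h_i s_i - J sum_<ij> s_i s_j.
# All numbers are illustrative.
J = 0.5
h = [0.9, -0.2, 0.7]          # noisy field favouring the true spins
pairs = [(0, 1), (1, 2)]

def energy(s):
    e = -sum(hi * si for hi, si in zip(h, s))
    e -= J * sum(s[i] * s[j] for i, j in pairs)
    return e

states = list(itertools.product([-1, 1], repeat=3))

# Maximum-likelihood decoding: the ground state of the cost (T -> 0).
s_ml = min(states, key=energy)

# Maximum-entropy decoding at finite temperature: the sign of each
# spin's marginal magnetisation under the Boltzmann distribution.
T = 1.0
weights = [math.exp(-energy(s) / T) for s in states]
Z = sum(weights)
mags = [sum(w * s[i] for w, s in zip(weights, states)) / Z
        for i in range(3)]
s_maxent = tuple(1 if m >= 0 else -1 for m in mags)
```

The finite-temperature decoder uses the excited states' weights, which is precisely the extra information the annealer-based experiment exploits.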
Variations in skin dose associated with linac bed material at 6 MV x-ray energy
Butson, Martin J. [Department of Physics and Materials Science, City University of Hong Kong, Kowloon Tong, Hong Kong (China) and Illawarra Cancer Care Centre, Department of Medical Physics, Wollongong, NSW (Australia)]. E-mail: mbutson@guessmail.com; Cheung Tsang; Yu, Peter K.N. [Department of Physics and Materials Science, City University of Hong Kong, Kowloon Tong, Hong Kong (China); Webb, B. [Illawarra Cancer Care Centre, Department of Medical Physics, Wollongong, NSW (Australia)
2002-01-07
Treatment with radiotherapy x-rays at 6 MV energy produces a build-up effect whereby a smaller dose is delivered to the patient's skin compared to the tumour dose. With anterior fields, no material is normally placed over the patient's skin, thus providing the maximum skin sparing possible with the beam configuration used. A posterior beam normally passes through the treatment couch top and increases the dose delivered to the patient's skin. Both the Mylar sheeting and the support ribbing material produce a significant increase in skin dose. Measurements at 6 MV have shown that the basal cell layer dose can be increased by up to 51% of maximum dose with a carbon fibre/Mylar couch and by 28% for a tennis string/Mylar couch when compared to anterior beams. These values are associated with the position of the carbon fibre or tennis string ribbing. Dermal layer doses are increased by up to 30 and 24% of maximum dose for carbon fibre and tennis string, respectively. These values include a combination of dose due to the support ribbing and the Mylar sheeting. Due to the variability in patient positioning on the couch top, these increases would be spread out over the skin surface producing an average increase per unit area at the basal layer of up to 32 and 20% of the maximum, respectively, for carbon fibre and tennis string couch tops and 21 and 12% at the dermal layer compared to dose at D_{max}. (author)
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
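The arbitrage part of the calculation can be illustrated with a toy dynamic program rather than the paper's linear program: maximize buy-low/sell-high revenue over a short price series subject to a state-of-charge constraint. Prices and device sizes below are invented; the paper uses CAISO price data and an LP formulation.

```python
# Arbitrage sketch: a storage device that can buy or sell one unit of
# energy per hour, with capacity C units, starting and ending empty.
# Exhaustive dynamic programming over the state of charge.
def max_arbitrage_revenue(prices, capacity):
    NEG = float("-inf")
    best = [NEG] * (capacity + 1)     # best[soc] = max cash at this SoC
    best[0] = 0.0                     # start empty with zero cash
    for p in prices:
        nxt = list(best)              # option: stay idle
        for soc in range(capacity + 1):
            if best[soc] == NEG:
                continue
            if soc < capacity:        # buy (charge) one unit at price p
                nxt[soc + 1] = max(nxt[soc + 1], best[soc] - p)
            if soc > 0:               # sell (discharge) one unit at price p
                nxt[soc - 1] = max(nxt[soc - 1], best[soc] + p)
        best = nxt
    return best[0]                    # require the device to end empty

# Invented hourly prices ($/MWh): buy at 10, sell at 50, buy at 20, sell at 60
revenue = max_arbitrage_revenue([30.0, 10.0, 50.0, 20.0, 60.0], capacity=1)
```

An LP scales to year-long price series and adds the regulation-market offers; this enumeration is only meant to make the state-of-charge bookkeeping concrete.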
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002-2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently (low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses), models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only at the beginning and end of winter but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
Dose to craniofacial region through portal imaging of pediatric brain tumors.
Hitchen, Christine J; Osa, Etin-Osa; Dewyngaert, J Keith; Chang, Jenghwa; Narayana, Ashwatha
2012-01-05
The purpose of this study was to determine dose to the planning target volume (PTV) and organs at risk (OARs) from portal imaging (PI) of the craniofacial region in pediatric brain tumor patients treated with intensity-modulated radiation therapy (IMRT). Twenty pediatric brain tumor patients were retrospectively studied. Each received portal imaging of treatment fields and orthogonal setup fields in the craniofacial region. The number of PI and monitor units used for PI were documented for each patient. Dose distributions and dose-volume histograms were generated to quantify the maximum, minimum, and mean dose to the PTV, and the mean dose to OARs through PI acquisition. The doses resulting from PI are reported as percentage of prescribed dose. The average maximum, minimum, and mean doses to PTV from PI were 2.9 ± 0.7%, 2.2 ± 1.0%, and 2.5 ± 0.7%, respectively. The mean dose to the OARs from PI were brainstem 2.8 ± 1.1%, optic nerves/chiasm 2.6 ± 0.9%, cochlea 2.6 ± 0.9%, hypothalamus/pituitary 2.4 ± 0.6%, temporal lobes 2.3 ± 0.6%, thyroid 1.6 ± 0.8%, and eyes 2.6 ± 0.9%. The mean number of portal images and the mean number of PI monitor units per patient were 58.8 and 173.3, respectively. The dose from PI while treating pediatric brain tumors using IMRT is significant (2%-3% of the prescribed dose). This may result in exceeding the tolerance limit of many critical structures and lead to unwanted late complications and secondary malignancies. Dose contributions from PI should be considered in the final documented dose. Attempts must be made in PI practices to lower the imaging dose when feasible.
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimates of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
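The estimate discussed here follows from maximum power transfer for a (near-)linear source: P_max ≈ Voc * Isc / 4. A minimal sketch with invented numbers (not datasheet values for the modules named above) shows how a modest difference between the two switch modes' readings maps into a deviation of the estimated maximum power:

```python
def estimated_max_power(v_oc, i_sc):
    """For a linear (Thevenin-like) source, maximum power is delivered
    at half the open-circuit voltage and half the short-circuit
    current, so P_max = Voc * Isc / 4."""
    return v_oc * i_sc / 4.0

# Illustrative readings only -- not measured values for any real module.
p_open_to_short = estimated_max_power(4.2, 1.0)
p_short_to_open = estimated_max_power(4.2, 0.9)   # a 10% lower Isc reading
rel_dev = (p_open_to_short - p_short_to_open) / p_open_to_short
```

A 10% discrepancy in the short-circuit current reading translates directly into a 10% discrepancy in the estimated maximum power, which is the size of switch-mode difference the abstract reports.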
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
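The degree of irreversibility of a stationary Markov chain can be quantified by its entropy production rate, which vanishes exactly when detailed balance holds. A minimal numpy sketch with illustrative chains (not spike-train data):

```python
import numpy as np

def entropy_production_rate(P, tol=1e-12):
    """Entropy production rate of a stationary Markov chain with
    transition matrix P:
        sigma = sum_ij pi_i P_ij ln( pi_i P_ij / (pi_j P_ji) )
    which is zero iff the chain is reversible (detailed balance)."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    sigma = 0.0
    for i in range(len(pi)):
        for j in range(len(pi)):
            a, b = pi[i] * P[i, j], pi[j] * P[j, i]
            if a > tol and b > tol:
                sigma += a * np.log(a / b)
    return sigma

# A reversible chain (symmetric P, uniform pi): sigma should be ~0.
P_rev = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])

# An irreversible chain with a preferred cyclic direction: sigma > 0.
P_cyc = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])
```

The paper's transfer-matrix construction produces exactly such a chain from a spatio-temporal maximum entropy measure, after which this formula applies.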
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
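Stripped of the age-measurement-error component that the paper's semiparametric ML handles, fitting a growth curve such as the von Bertalanffy model L(t) = Linf * (1 - exp(-k t)) is ordinary nonlinear least squares. A naive grid-search sketch on synthetic data (all parameter values invented, not the corvina reina estimates):

```python
import numpy as np

# Synthetic fish-growth data from an invented von Bertalanffy curve.
rng = np.random.default_rng(1)
Linf_true, k_true = 100.0, 0.3
ages = np.linspace(1, 15, 40)
lengths = Linf_true * (1 - np.exp(-k_true * ages)) \
          + rng.normal(0.0, 2.0, ages.size)

# Naive least-squares fit by exhaustive grid search over (Linf, k).
Linf_grid = np.linspace(80, 120, 81)
k_grid = np.linspace(0.1, 0.6, 101)
best = (None, None, np.inf)
for Linf in Linf_grid:
    for k in k_grid:
        rss = np.sum((lengths - Linf * (1 - np.exp(-k * ages))) ** 2)
        if rss < best[2]:
            best = (Linf, k, rss)
Linf_hat, k_hat, _ = best
```

The paper's harder problem is that the ages themselves are imprecise, which turns this into errors-in-variables regression; the sketch only covers the clean-covariate baseline.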
Maximum length scale in density based topology optimization
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...
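Classical density filtering, described above as a low-pass filter on the design parametrization, can be sketched in one dimension as a distance-weighted average of neighbouring element densities; real implementations work on 2D/3D meshes and this toy version is only illustrative:

```python
import numpy as np

def density_filter(rho, radius):
    """Classical density filter: each filtered density is a
    distance-weighted (linear 'hat' weight) average of the element
    densities within `radius` elements -- a 1D low-pass filter."""
    n = rho.size
    out = np.empty(n)
    for i in range(n):
        j = np.arange(max(0, i - radius), min(n, i + radius + 1))
        w = radius + 1 - np.abs(j - i)        # hat weights, peak at i
        out[i] = np.sum(w * rho[j]) / np.sum(w)
    return out

# A single solid element gets smeared over its neighbourhood,
# which is how the filter enforces a minimum feature size.
rho = np.zeros(11)
rho[5] = 1.0
smoothed = density_filter(rho, radius=2)
```

Maximum-length-scale control works in the opposite direction: the paper combines several such filters so that overly wide solid regions are penalized, not just overly thin ones.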
On the Effect of Mortgages of Maximum Amount
Yang Zongping
2005-01-01
Since the enactment of the PRC Guarantee Law, mortgages of maximum amount have won wide application in a variety of business sectors and particularly in banking. Compared with the rich content of the 21-clause statute on mortgages of maximum amount in Japan's Civil Law, the Chinese law has only four principled clauses. Its lack of operability, together with its legislative gaps and defects, has a severe impact on the positive effectiveness of the law. The core issue is the question of effectiveness. Because the principles stipulated in the Law run counter to the diversity of its actual practices,
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
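For intuition about the worst-case objective (though not the paper's maximum entropy solution method), note that with long-only weights summing to one and interval-bounded returns, the worst case is attained at every interval's lower endpoint; maximizing the worst-case return alone therefore concentrates the portfolio on the best lower bound, and a diversifying term such as entropy is what spreads the weights out. A sketch with invented intervals:

```python
# Invented per-asset return intervals [lo, hi]; not data from the paper.
intervals = [(-0.02, 0.10), (0.01, 0.04), (-0.05, 0.20)]

def worst_case_return(weights, intervals):
    """With long-only weights, the adversary picks each return at its
    lower endpoint, so the worst case is sum_i w_i * lo_i."""
    return sum(w * lo for w, (lo, hi) in zip(weights, intervals))

# Maximizing the worst case over the simplex puts all weight on the
# asset with the largest lower bound -- a fully concentrated portfolio.
n = len(intervals)
best_idx = max(range(n), key=lambda i: intervals[i][0])
x_robust = [1.0 if i == best_idx else 0.0 for i in range(n)]
wc = worst_case_return(x_robust, intervals)
```

The concentrated solution beats, for instance, the equal-weight portfolio on worst-case return; adding transaction costs, dividends, and the entropy objective is what makes the paper's full model non-trivial.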