WorldWideScience

Sample records for minimal dead time

  1. Simulating detector dead time

    International Nuclear Information System (INIS)

    Rustom, Ibrahim Farog Ibrahim

    2015-06-01

    Nuclear detectors are used in all aspects of nuclear measurements. All nuclear detectors are characterized by their dead time, i.e. the time needed by a detector to recover from a previous incident. A detector's dead time influences the measurements it takes, especially when measuring high decay rates (> 1/τ, where τ is the detector dead time). Two models are usually used to correct for the dead time effect: the paralyzable and the non-paralyzable models. In the current work we use Monte Carlo simulation techniques to simulate radioactivity and the effect of dead time on the count rate of a detector with a dead time τ = 5×10⁻⁵ s, assuming the non-paralyzable model. The simulation indicates that the non-paralyzable model could be used to correct decay rates measured by a detector, and the reliability of the non-paralyzable model in correcting the measured decay rate could be gauged using the Monte Carlo simulation. (Author)
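
    A minimal sketch of such a simulation (with an illustrative true rate and seed, and the abstract's τ = 5×10⁻⁵ s): a Poisson pulse train is filtered through a non-paralyzable dead time, and the observed rate is compared against the model m = n/(1 + nτ) and its inversion n = m/(1 − mτ).

        import numpy as np

        rng = np.random.default_rng(1)
        true_rate = 1.0e5          # true event rate n (1/s); illustrative
        tau = 5.0e-5               # non-paralyzable dead time from the abstract (s)
        t_total = 2.0              # simulated measurement time (s)

        # Homogeneous Poisson process on [0, t_total]
        n_events = rng.poisson(true_rate * t_total)
        arrivals = np.sort(rng.uniform(0.0, t_total, n_events))

        # Non-paralyzable model: an event is recorded only if it arrives
        # at least tau after the previously *recorded* event
        recorded, last = 0, -np.inf
        for t in arrivals:
            if t - last >= tau:
                recorded += 1
                last = t

        m = recorded / t_total
        print(f"observed rate          m = {m:9.1f}/s")
        print(f"model   n/(1 + n*tau)    = {true_rate / (1 + true_rate * tau):9.1f}/s")
        print(f"corrected m/(1 - m*tau)  = {m / (1 - m * tau):9.1f}/s  (~ true rate)")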

  2. Determination of detection equipment dead time

    International Nuclear Information System (INIS)

    Sacha, J.

    1980-01-01

    A method is described for determining dead time by measurement of a short-lived source. It is based on measuring the sample count rates in different time intervals, where only the dead-time correction changes with the changing number of recorded pulses. The dead time may be determined from the measured values by a numerical-graphical method, which is described. The advantage of the method is the minimization of errors and inaccuracies; the disadvantage is that the half-life of the source used must be known very accurately. (J.P.)
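
    Such a measurement is often cast numerically as follows (a sketch assuming a non-extending dead time; not necessarily the exact numerical-graphical procedure of the paper): with the decay constant λ known from the half-life, the observed rate R obeys 1/R = e^(λt)/n₀ + τ, so a straight-line fit of 1/R against e^(λt) yields the dead time as its intercept.

        import numpy as np

        # Synthetic observed rates from a decaying source (non-extending dead time)
        half_life = 600.0                     # s; must be known accurately
        lam = np.log(2.0) / half_life
        tau_true, n0 = 2.0e-6, 2.0e5          # illustrative values
        t = np.linspace(0.0, 1800.0, 10)      # measurement times (s)
        n = n0 * np.exp(-lam * t)             # true rates
        R = n / (1.0 + n * tau_true)          # observed rates

        # Linearization: 1/R = exp(lam*t)/n0 + tau, so the intercept of a
        # straight-line fit of 1/R against exp(lam*t) is the dead time.
        slope, intercept = np.polyfit(np.exp(lam * t), 1.0 / R, 1)
        print(f"fitted dead time   : {intercept * 1e6:.3f} us (true {tau_true * 1e6:.3f} us)")
        print(f"fitted initial rate: {1.0 / slope:.0f}/s (true {n0:.0f}/s)")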

  3. A system for accurate and automated injection of hyperpolarized substrate with minimal dead time and scalable volumes over a large range

    Science.gov (United States)

    Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N. J.

    2014-02-01

    Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of injected substrate, uses flow control to reduce dead volume within the injection cannula and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step by use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s post-injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump.

  4. Dead time of dual detector tools

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1994-01-01

    A theory of the dead time for a dual detector nuclear tool with analogue signal transmission is given in the paper. At least two different dead times exist in such tools: the dead time of the detectors (assumed identical to each other for the final computation) and the dead time of the signal transmission set-up. A method using two radioactive sources is proposed to measure these two different dead times. When the times used for measuring each count rate needed in the dead-time determination algorithm are taken into account, the statistical accuracy of the dead-time determination can be estimated. These estimations are performed by the computer simulation method. Two codes have been designed: DEADT2D (DEAD Time for 2 Detectors) and DEADT2DS (DEAD Time for 2 Detectors with Statistics). The first code calculates the dead time based on the recorded count rates only; the second performs a simulation and provides information on the statistical distribution of the observed dead times. The theory and the numerical solutions were checked both by simulation calculations and by experiments performed with the ODSN-102 tool (the experiments were performed by T. Zorski). (Author)

  5. Coincidence-counting corrections for accidental coincidences, set dead time and intrinsic dead time

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1998-01-01

    An equation is derived for calculating the radioactivity of a source from the results of coincidence counting, taking into account dead-time losses and accidental coincidences. The corrections allow for the extension of the set dead time in the p channel by the intrinsic dead time. Experimental verification shows improvement over a previous equation. (author)

  6. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

    A new method of the correction of counting losses caused by a non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using the Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
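
    One way an interval-based estimate can work (an illustrative reconstruction under a pure Poisson source and a non-extended dead time, not necessarily the paper's exact algorithm): the output intervals are the dead time plus an exponential gap, so the dead time can be read off the left edge of the interval distribution and the true rate recovered from the mean interval.

        import numpy as np

        rng = np.random.default_rng(2)
        n_true, tau = 5.0e4, 3.0e-6     # illustrative true rate and dead time

        # For a Poisson input through a non-extended dead time, output
        # intervals follow f(t) = n*exp(-n*(t - tau)) for t >= tau
        gaps = rng.exponential(1.0 / n_true, 100000) + tau

        tau_hat = gaps.min()                      # left edge of the distribution
        n_hat = 1.0 / (gaps.mean() - tau_hat)     # E[t] = tau + 1/n
        print(f"estimated tau ~ {tau_hat * 1e6:.2f} us, corrected rate ~ {n_hat:.0f}/s")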

  7. Measurement of the Dead-Time in a Multichannel Analyser

    DEFF Research Database (Denmark)

    Mortensen, L.; Olsen, J.

    1973-01-01

    By means of two simple measurements three different dead-times are determined: the normal dead-time, a dead-time coming from the pile-up, and a dead-time due to the finite width of the timing pulses.

  8. Bibliography on dead-time effects

    International Nuclear Information System (INIS)

    1975-01-01

    A BIPM (Bureau International des Poids et Mesures) Working Party has assembled a bibliography of publications dealing with the measurement of dead times, the evaluation of the corresponding corrections and other closely related subjects. It contains some 350 references, each given with its full title; an author index is added. The search was stopped in August 1975.

  9. Tunnel Diode Discriminator with Fixed Dead Time

    DEFF Research Database (Denmark)

    Diamond, J. M.

    1965-01-01

    A solid state discriminator for the range 0.4 to 10 V is described. Tunnel diodes are used for the discriminator element and in a special fixed dead time circuit. An analysis of temperature stability is presented. The regulated power supplies are described, including a special negative resistance...

  10. Dead Time in the LAr Calorimeter Front-End Readout

    CERN Document Server

    Gingrich, D M

    2002-01-01

    We present readout time, latency, buffering, and dead-time calculations for the switched capacitor array controllers of the LAr calorimeter. The dead time is compared with algorithms for the dead-time generation in the level-1 central trigger processor.

  11. Effect of counting system dead time on thyroid uptake measurements

    International Nuclear Information System (INIS)

    Simpkin, D.J.

    1984-01-01

    Equations are derived, and the results of numerical calculations shown, that illustrate the effect of counting-system dead time on the measured thyroid uptake of radioiodine. It is predicted that the observed uptake is higher than the true uptake due to system dead time. This is shown for both paralyzing and nonparalyzing dead time. Increasing the administered activity is shown to increase the measured uptake, in a manner predicted by the paralyzable and nonparalyzable dead time models.
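
    The two loss models referred to above are commonly written m = n/(1 + nτ) for the nonparalyzable case and m = n·e^(−nτ) for the paralyzable case, with n the true and m the observed rate. The sketch below (illustrative τ and rates) tabulates the fractional loss, which grows with rate; since the counting standard containing the full administered activity counts at a higher rate than the thyroid, its larger loss inflates the measured uptake ratio, consistent with the prediction above.

        import numpy as np

        tau = 2.0e-6                      # illustrative system dead time (s)
        n = np.logspace(3, 6, 7)          # true count rates (1/s)

        m_nonpar = n / (1.0 + n * tau)    # nonparalyzable: m = n/(1 + n*tau)
        m_par = n * np.exp(-n * tau)      # paralyzable:    m = n*exp(-n*tau)

        for ni, mn, mp in zip(n, m_nonpar, m_par):
            print(f"n = {ni:9.0f}/s  loss: nonpar {1 - mn / ni:6.2%}  par {1 - mp / ni:6.2%}")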

  12. Dead time effects in laser Doppler anemometry measurements

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Buchhave, Preben; George, William K.

    2014-01-01

    …frequency range, starting around the cutoff frequency due to the finite size of the MV. Using computer-generated data mimicking the LDA data, these effects have previously been shown to appear due to the effect of dead time, i.e., the finite time during which the system is not able to acquire new measurements. These dead times can be traced back to the fact that the burst-mode LDA cannot measure more than one signal burst at a time. Since the dead time is approximately equal to the residence time for a particle traversing a measurement volume, we are dealing with widely varying dead times, which…

  13. A new G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, S.H.; Gardner, R.P.

    2000-01-01

    A hybrid G-M counter dead time model was derived by combining the idealized paralyzable and non-paralyzable models. The new model involves two parameters, which are the paralyzable and non-paralyzable dead times. The dead times used in the model are very closely related to the physical dead time of the G-M tube and its resolving time. To check the validity of the model, the decaying source method with ⁵⁶Mn was used. The counting rates corrected by the new G-M dead time model were compared with the observed counting rates obtained from the measurement and gave very good agreement, within 5%, up to 7×10⁴ counts/s for a G-M tube with a dead time of about 300 μs.

  14. Cascades of pile-up and dead time

    International Nuclear Information System (INIS)

    Pomme, S.

    2008-01-01

    Count loss through a cascade of pile-up and dead time is studied. Time interval density-distribution functions and throughput factors are presented for counters with a series arrangement of pile-up and extending or non-extending dead time. A counter is considered, where an artificial dead time is imposed on every counted event, in order to control the length and type of dead time. For such a system, it is relatively easy to determine an average count-loss correction factor via a live-time clock gated by the imposed dead-time signal ('live-time mode'), or otherwise to apply a correction factor based on the inversion of the throughput function ('real-time mode'). However, these techniques do not account for additional loss through pulse pile-up. In this work, counting errors associated with neglecting cascade effects are calculated for measurements in live-time and real-time mode
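
    For reference, the 'real-time mode' inversion mentioned above has closed forms for the two idealized dead-time types; the sketch below (illustrative numbers) neglects exactly the cascaded pile-up loss that this paper quantifies.

        import numpy as np
        from scipy.special import lambertw

        tau = 5.0e-6     # imposed dead time (s); illustrative
        m = 6.0e4        # observed count rate in real-time mode (1/s)

        # Non-extending: m = n/(1 + n*tau)  =>  n = m/(1 - m*tau)
        n_nonext = m / (1.0 - m * tau)

        # Extending: m = n*exp(-n*tau)  =>  n = -W0(-m*tau)/tau,
        # valid below the throughput peak, i.e. for m*tau <= 1/e
        n_ext = -lambertw(-m * tau, 0).real / tau

        print(f"non-extending correction: n = {n_nonext:.0f}/s")
        print(f"extending correction    : n = {n_ext:.0f}/s")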

  15. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create high biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections of the sampled counts per second (CPS), based on backward extrapolation to zero of the losses created by increasingly long, artificially imposed dead times on the data. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
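
    A toy version of the extrapolation machinery (run on an ideal Poisson record, whereas the actual method is model-free and was validated on correlated fission-chain data): for a non-extending imposed dead time, 1/CPS is linear in the imposed τ, so a straight-line fit extrapolated back to τ = 0 recovers the underlying rate.

        import numpy as np

        def count_with_dead_time(ts, tau):
            """Count events surviving an imposed non-extending dead time tau."""
            kept, last = 0, -np.inf
            for t in ts:
                if t - last >= tau:
                    kept += 1
                    last = t
            return kept

        rng = np.random.default_rng(0)
        rate, T = 2.0e4, 5.0                         # illustrative rate and duration
        ts = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

        # Impose increasingly long artificial dead times and record the CPS
        taus = np.array([10e-6, 20e-6, 30e-6, 40e-6, 50e-6])
        cps = np.array([count_with_dead_time(ts, tau) / T for tau in taus])

        # For a Poisson record, 1/CPS = 1/n + tau, linear in the imposed tau
        slope, icept = np.polyfit(taus, 1.0 / cps, 1)
        print(f"extrapolated rate: {1.0 / icept:.0f}/s (raw record: {len(ts) / T:.0f}/s)")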

  16. Simulation of Simple Controlled Processes with Dead-Time.

    Science.gov (United States)

    Watson, Keith R.; And Others

    1985-01-01

    The determination of closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of Pade approximation for dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…

  17. Analysis of neutron multiplicity measurements with allowance for dead-time losses between time-correlated detections

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1992-01-01

    An exact solution is found for dead-time losses between detections occurring within a gate interval, with constant dead time and with allowance for time correlation between detections from the same spontaneous initial event. This is used to obtain a close approximation to the losses with a multi-channel detection system, with allowance for dead times bridging the gate opening. This is applied, inversely, to calculate the true detection multiplicity rates from the distribution of the recorded counts within that interval. A suggestion is made for a circuit change to give a major reduction in dead-time effects. The unavoidable statistical errors that would remain are calculated. Their minimization and the limits of such minimization are discussed. (orig.)

  18. Experimental dead-time distortions of Poisson processes

    International Nuclear Information System (INIS)

    Faraci, G.; Pennisi, A.R.; Consiglio Nazionale delle Ricerche, Catania

    1983-01-01

    In order to check the distortions introduced by a non-extended dead time in Poisson statistics, accurate experiments have been made in single-channel counting. At a given measuring time, the dependence on the choice of the time origin and on the width of the dead time has been verified. Excellent agreement has been found between the theoretical expressions and the experimental curves. (orig.)

  19. Dead-Time Generation in Six-Phase Frequency Inverter

    Directory of Open Access Journals (Sweden)

    Aurelijus Pitrėnas

    2016-06-01

    In this paper, control of multi-phase induction drives is discussed. The structure of a six-phase frequency inverter is examined. The article deals with dead-time generation circuits for the transistor control signals in a six-phase frequency inverter. Computer models of the dead-time circuits are created using the LTspice software package. Simulation results are compared with experimental results for the tested dead-time circuits; the parameters obtained in simulation are close to those obtained experimentally.

  20. Optimal linear filtering of Poisson process with dead time

    International Nuclear Information System (INIS)

    Glukhova, E.V.

    1993-01-01

    The paper presents the derivation of an integral equation defining the impulse response of the optimum linear filter for evaluating the intensity of a fluctuating Poisson process, with allowance for the dead time of the transducers.

  1. Accuracy in gamma spectrometry: Pileup, dead time, and fast electronics

    International Nuclear Information System (INIS)

    Lindstrom, R.M.

    1993-01-01

    An important source of inaccuracy in neutron activation analysis is the nonlinear throughput of the counting system, especially at high counting rates. Losses, due to the finite time needed for events to happen, occur in all parts of the spectrometer system: the germanium detector crystal, preamplifier, amplifier, analog-digital converter (ADC), and MCA or computer. The slowest unbuffered units are the ADC and the amplifier, followed by the crystal. Even with modern fast electronics, losses can be important, although compensating circuits can greatly improve accuracy if they are used correctly. The ADC dead time is less of a problem than it was a decade ago. For example, a modern successive-approximation ADC in the author's laboratory takes 6 μs to digitize a gamma ray in the middle of an 8192-channel spectrum, compared with 60 μs for the Wilkinson device that it replaced. Dead-time circuits in MCAs for many years have compensated very well for this dead time. Pulse pileup is as important as ADC dead time. Random coincidence, the accidental arrival of the signals from two unrelated gamma rays at the amplifier in a time short compared to the shaping time, results in a composite pulse that distorts the spectrum. For accurate spectrometry, each such random-sum pulse should be excluded from the spectrum (pileup rejection), and the system dead time must be adjusted to compensate for the time the system is busy analyzing this rejected event (pileup live-time correction)

  2. Multi-Rate Acquisition for Dead Time Reduction in Magnetic Resonance Receivers: Application to Imaging With Zero Echo Time.

    Science.gov (United States)

    Marjanovic, Josip; Weiger, Markus; Reber, Jonas; Brunner, David O; Dietrich, Benjamin E; Wilm, Bertram J; Froidevaux, Romain; Pruessmann, Klaas P

    2018-02-01

    For magnetic resonance imaging of tissues with very short transverse relaxation times, radio-frequency excitation must be immediately followed by data acquisition with fast spatial encoding. In zero-echo-time (ZTE) imaging, excitation is performed while the readout gradient is already on, causing data loss due to an initial dead time. One major dead time contribution is the settling time of the filters involved in signal down-conversion. In this paper, a multi-rate acquisition scheme is proposed to minimize dead time due to filtering. Short filters and high output bandwidth are used initially to minimize settling time. With increasing time since the signal onset, longer filters with better frequency selectivity enable stronger signal decimation. In this way, significant dead time reduction is accomplished at only a slight increase in the overall amount of output data. Multi-rate acquisition was implemented with a two-stage filter cascade in a digital receiver based on a field-programmable gate array. In ZTE imaging in a phantom and in vivo, dead time reduction by multi-rate acquisition is shown to improve image quality and expand the feasible bandwidth while increasing the amount of data collected by only a few percent.
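
    The trade-off can be sketched numerically (a toy two-stage decimating cascade with illustrative rates and filter lengths, not the authors' FPGA design): the short first-stage filter settles within tens of nanoseconds, while the long second-stage filter adds frequency selectivity at the cost of a much longer settling time.

        import numpy as np
        from scipy.signal import firwin, lfilter

        fs = 10.0e6                        # raw sampling rate after the ADC (Hz)

        h1 = firwin(15, 0.45)              # stage 1: short FIR, settles quickly
        h2 = firwin(127, 0.45 / 8)         # stage 2: long FIR, sharp selectivity

        x = np.zeros(4096)
        x[0] = 1.0                         # signal onset right after excitation

        y1 = lfilter(h1, 1.0, x)[::2]      # stage 1 output at fs/2
        y2 = lfilter(h2, 1.0, y1)[::8]     # stage 2 output at fs/16

        # The dead time of each path is set by its group delay
        # (about half the filter length at that stage's input rate)
        print(f"stage-1 delay ~ {0.5 * len(h1) / fs * 1e9:6.0f} ns")
        print(f"cascade delay ~ {(0.5 * len(h1) / fs + 0.5 * len(h2) / (fs / 2)) * 1e6:6.2f} us")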

  3. Minimization of Dead-Periods in MRI Pulse Sequences for Imaging Oblique Planes

    Science.gov (United States)

    Atalar, Ergin; McVeigh, Elliot R.

    2007-01-01

    With the advent of breath-hold MR cardiac imaging techniques, the minimization of TR and TE for oblique planes has become a critical issue. The slew rates and maximum currents of gradient amplifiers limit the minimum possible TR and TE by adding dead-periods to the pulse sequences. We propose a method of designing gradient waveforms that will be applied to the amplifiers instead of the slice, readout, and phase encoding waveforms. Because this method ensures that the gradient amplifiers will always switch at their maximum slew rate, it results in the minimum possible dead-period for given imaging parameters and scan plane position. A GRASS pulse sequence has been designed and ultra-short TR and TE values have been obtained with standard gradient amplifiers and coils. For some oblique slices, we have achieved shorter TR and TE values than those for nonoblique slices. PMID:7869900

  4. Resonant power converter comprising adaptive dead-time control

    DEFF Research Database (Denmark)

    2017-01-01

    The invention relates in a first aspect to a resonant power converter comprising: a first power supply rail for receipt of a positive DC supply voltage and a second power supply rail for receipt of a negative DC supply voltage. The resonant power converter comprises a resonant network with an input terminal for receipt of a resonant input voltage from a driver circuit. The driver circuit is configured for alternatingly pulling the resonant input voltage towards the positive and negative DC supply voltages via first and second semiconductor switches, respectively, separated by intervening dead-time periods in accordance with one or more driver control signals. A dead-time controller is configured to adaptively adjust the dead-time periods based on the resonant input voltage.

  5. Once upon a time... Dead wood in French forests

    International Nuclear Information System (INIS)

    Bartoli, Michel; Geny, Bernard

    2005-01-01

    For many centuries in France, dead wood was an essential source of energy for households. Harvesting dead wood was both authorised - in particular, through allocation of rights of use - and highly regulated. Restrictions on its use were established by texts ranging from a 1515 royal decree to an implementation decree of 1853 that is still applicable today: the owner must have formally released the wood; it must be dry and lying on the ground; it can be broken only by hand, and no means other than human labour can be used to transport it; furthermore, it cannot be the outcome of an act that caused a stem to dry while standing. In the 19th century, the huge number of trials, some of which went as far as the supreme court, shows just how important dead wood was socially, and how much it was coveted by the paupers who were confronted with increasingly repressive forest police. These trials provide an excellent reflection of a society that harvested all the proceeds of felling. From the end of the 18th century to the middle of the 20th, forestry treatises always dealt with removal of dead trees as a priority. Dead wood was for a long time, and up to very recently, abhorred, but has latterly begun to be considered as an important compartment of biodiversity. History shows that it is no surprise that for the time being there is little of it to be found in our forests. (authors)

  6. A method for the measurement of the intrinsic dead time of a counting system

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1989-01-01

    Equations are derived for (a) the determination of the intrinsic dead time of a counting system in the components preceding the paralysis unit which imposes the set dead time, and (b) a more accurate correction of count rates in a single-channel system, taking into account the extension of the set dead time by the intrinsic dead time. (author)

  7. Homer and the cult of the dead in Helladic times

    Directory of Open Access Journals (Sweden)

    Odysseus Tsagarakis

    1980-12-01

    This paper discusses the Homeric bothros (Odyssey X 517 ff.) as a possible source of information for the ritual and function of various bothroi (grave pits), which are considered to be an important archaeological source. It seems that the bothroi were, by their nature, best suited to a cult of the dead and served as altars. The paper also discusses the possible reasons for the existence of the cult and argues against the view that fear of the dead motivated the cult in Helladic times.

  8. Coincidence counting corrections for dead time losses and accidental coincidences

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1987-04-01

    An equation is derived for the calculation of the radioactivity of a source from the results of coincidence counting taking into account the dead-time losses and accidental coincidences. The derivation is an extension of the method of J. Bryant [Int. J. Appl. Radiat. Isot., 14:143, 1963]. The improvement on Bryant's formula has been verified by experiment

  9. Solution of the Markov chain for the dead time problem

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    A method for solving the equation for the Markov chain, describing the effect of a non-extendible dead time on the statistics of time correlated pulses, is discussed. The equation, which was derived in an earlier paper, describes a non-linear process and is not amenable to exact solution. The present method consists of representing the probability generating function as a factorial cumulant expansion and neglecting factorial cumulants beyond the second. This results in a closed set of non-linear equations for the factorial moments. Stationary solutions of these equations, which are of interest for calculating the count rate, are obtained iteratively. The method is applied to the variable dead time counter technique for estimation of system parameters in passive neutron assay of Pu and reactor noise analysis. Comparisons of results by this method with Monte Carlo calculations are presented. (author)

  10. Is it practical to use the gamma camera dead time?

    International Nuclear Information System (INIS)

    Morin, P.P.; Morin, J.F.; Caroff, J.; Lahellec, M.; Savina, A.

    1975-01-01

    The linearity of gamma camera counting is an essential feature for users engaged in quantitative dynamic studies. Instead of defining this quality by the usual dead time, the disadvantages of which are reported, it is proposed to use the experimental count rate giving a 10% loss. It is shown that by proceeding in this way all ambiguity would be removed, as regards both the counting linearity itself and its relation to sensitivity.

  11. Variable dead time counters: 2. A computer simulation

    International Nuclear Information System (INIS)

    Hooton, B.W.; Lees, E.W.

    1980-09-01

    A computer model has been developed to give a pulse train which simulates that generated by a variable dead time counter (VDC) used in safeguards determination of Pu mass. The model is applied to two algorithms generally used for VDC analysis. It is used to determine their limitations at high counting rates and to investigate the effects of random neutrons from (α,n) reactions. Both algorithms are found to be deficient for use with masses of ²⁴⁰Pu greater than 100 g, and one commonly used algorithm is shown, by use of the model and also by theory, to yield a result which is dependent on the random neutron intensity. (author)

  12. Construction schedules slack time minimizing

    Science.gov (United States)

    Krzemiński, Michał

    2017-07-01

    The article presents two original models for minimizing the downtime of construction work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow-shop models is possible and useful for the construction of large objects that can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the value of the work on the newly developed models.

  13. No time for dead time: timing analysis of bright black hole binaries with NuSTAR

    DEFF Research Database (Denmark)

    Bachetti, Matteo; Harrison, Fiona A.; Cook, Rick

    2015-01-01

    Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time … techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339-4, Cyg X-1, and GRS 1915+105.

  14. Correction for intrinsic and set dead-time losses in radioactivity counting

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1992-12-01

    Equations are derived for the determination of the intrinsic dead time of the components which precede the paralysis unit in a counting system for measuring radioactivity. The determination depends on the extension of the set dead time by the intrinsic dead time. Improved formulae are given for the dead-time correction of the count rate of a radioactive source in a single-channel system. A variable in the formulae is the intrinsic dead time which is determined concurrently with the counting of the source. The only extra equipment required in a conventional system is a scaler. 5 refs., 2 tabs., 21 figs

  15. NO TIME FOR DEAD TIME: TIMING ANALYSIS OF BRIGHT BLACK HOLE BINARIES WITH NuSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Bachetti, Matteo; Barret, Didier [Université de Toulouse, UPS-OMP, IRAP, Toulouse F-31400 (France); Harrison, Fiona A.; Cook, Rick; Grefenstette, Brian W.; Fürst, Felix [Cahill Center for Astronomy and Astrophysics, Caltech, Pasadena, CA 91125 (United States); Tomsick, John; Boggs, Steven E.; Craig, William W. [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Schmid, Christian [Dr. Karl-Remeis-Sternwarte and ECAP, Sternwartstrasse 7, D-96049 Bamberg (Germany); Christensen, Finn E. [DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby (Denmark); Fabian, Andrew C.; Kara, Erin [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Gandhi, Poshak [Department of Physics, Durham University, South Road DH1 3LE (United Kingdom); Hailey, Charles J. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Maccarone, Thomas J. [Department of Physics, Texas Tech University, Lubbock, TX 79409 (United States); Miller, Jon M. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109-1042 (United States); Pottschmidt, Katja [CRESST, UMBC, and NASA GSFC, Code 661, Greenbelt, MD 20771 (United States); Stern, Daniel [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Uttley, Phil, E-mail: matteo.bachetti@irap.omp.eu [Anton Pannekoek Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); and others

    2015-02-20

    Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (∼2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339–4, Cyg X-1, and GRS 1915+105.
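
    A minimal sketch of the cospectrum idea (illustrative binned light curves; the Leahy normalization and the Monte Carlo dead-time modelling described above are omitted). FPMA and FPMB are NuSTAR's two independent focal plane modules:

        import numpy as np

        def cospectrum(a, b, dt):
            """Real part of the cross spectrum of two simultaneous light curves.
            Noise that is independent between the two detectors (including each
            detector's own dead-time distortion) averages to zero here, unlike
            in a single detector's power spectrum."""
            fa = np.fft.rfft(a - a.mean())
            fb = np.fft.rfft(b - b.mean())
            return np.fft.rfftfreq(len(a), dt), (fa * np.conj(fb)).real

        # Toy example: a common 40 Hz signal seen by both modules
        rng = np.random.default_rng(0)
        nbins, dt = 4096, 1.0e-3
        s = 20.0 * np.sin(2 * np.pi * 40.0 * np.arange(nbins) * dt)
        lc_a = rng.poisson(100.0 + s)          # FPMA counts per bin
        lc_b = rng.poisson(100.0 + s)          # FPMB counts per bin
        f, co = cospectrum(lc_a, lc_b, dt)
        print(f"cospectrum peaks at {f[1:][np.argmax(co[1:])]:.2f} Hz")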

  16. Digital instrumentation and dead-time processing for radionuclide metrology

    International Nuclear Information System (INIS)

    Censier, B.; Bobin, Ch.; Bouchard, J.

    2010-01-01

    Most of the acquisition chains used in radionuclide metrology are based on NIM modules. These analogue setups have been thoroughly tested for decades now, becoming a reference in the field. Nevertheless, the renewal of ageing modules and the need for extra features both call for the development of new acquisition schemes based on digital processing. In this article, several technologies usable for instrumentation are first presented. A review of past and present projects is made in the second part, highlighting the fundamental role of dead-time management. The last part is dedicated to the description of two digital systems developed at LNE-LNHB. The first one has been designed for the instrumentation of a NaI(Tl) well-type crystal set-up, while the second one is used for the management of three photomultipliers in the framework of the TDCR method and as a part of the development of a digital platform for coincidence counting. (authors)

  17. Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.

    Directory of Open Access Journals (Sweden)

    Shaun Davidson

    This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd), an important but difficult to measure physiological parameter. Vd represents a subject- and condition-dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time-varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume) and its end-systolic equivalent (stroke volume vs end-systolic volume), developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the volume axis (where stroke volume is 0) to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 on average for pre- and post-endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated intercepts (Vd) for each subject (no more than 7.8% variation). Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to…
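
    The extrapolation itself reduces to two straight-line fits; a minimal sketch with invented (EDV, ESV) pairs, purely to show the mechanics:

        import numpy as np

        # Paired (EDV, ESV) measurements across a recruitment manoeuvre (ml);
        # illustrative values only. Stroke volume SV = EDV - ESV.
        edv = np.array([110.0, 120.0, 130.0, 140.0, 150.0])
        esv = np.array([ 72.0,  76.0,  81.0,  85.0,  90.0])
        sv = edv - esv

        # Fit SV against EDV and against ESV, then extrapolate each line to
        # SV = 0; both volume-axis intercepts estimate the dead space Vd.
        for name, vol in (("EDV", edv), ("ESV", esv)):
            slope, icept = np.polyfit(vol, sv, 1)
            print(f"Vd from SV-{name} line: {-icept / slope:.1f} ml")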

  18. Interconnection blocks with minimal dead volumes permitting planar interconnection to thin microfluidic devices

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Martin

    2010-01-01

    We have previously described 'Interconnection Blocks', which are re-usable, non-integrated PDMS blocks allowing multiple, aligned and planar microfluidic interconnections. Here, we describe Interconnection Block versions with zero dead volumes that allow fluidic interfacing to flat or thin s…

  19. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    Science.gov (United States)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm, to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, ²⁵²Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  20. Non-Poisson counting statistics of a hybrid G-M counter dead time model

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P.

    2007-01-01

    The counting statistics of a G-M counter with a considerable dead time event rate deviate from Poisson statistics. Important characteristics such as observed counting rates as a function of true counting rate, variances and interval distributions were analyzed for three dead time models, non-paralyzable, paralyzable and hybrid, with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreement with the models in observed counting rates and variances. It was found through GMSIM simulations that the interval distribution for the hybrid model shows three distinctive regions: a complete cutoff region for the duration of the total dead time, a degraded exponential region, and an enhanced exponential region. By measuring the cutoff and the duration of the degraded exponential from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model.

  1. Instrumental dead-time and its relationship with matrix corrections in X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Thomas, I.L.; Haukka, M.T.; Anderson, D.H.

    1979-01-01

    The relationship between instrumental dead-time and the self-absorption coefficients α_ii in XRF matrix correction by means of influence coefficients is not generally recognized but has important analytical consequences. Systematic errors of the order of 1% (relative) for any analyte result from experimental uncertainties in instrumental dead-time. Such errors are applied unevenly across a given range of concentration because the error depends on the calibration standards and on the instrumental conditions used. Refinement of the instrumental dead-time value and other calibration parameters to conform with influence coefficients determined elsewhere assumes exact knowledge of the dead-time of the instrument used originally, and quite similar excitation conditions and spectrometer geometry for the two instruments. Though these qualifications may not be met, adjustment of any of the parameters (dead-time, reference concentration, background concentration, self-absorption and other influence coefficients) can easily be achieved. (Auth.)

  2. Simple dead-time corrections for discrete time series of non-Poisson data

    International Nuclear Information System (INIS)

    Larsen, Michael L; Kostinski, Alexander B

    2009-01-01

    The problem of dead time (instrumental insensitivity to detectable events due to electronic or mechanical reset time) is considered. Most existing algorithms to correct for event count errors due to dead time implicitly rely on Poisson counting statistics of the underlying phenomena. However, when the events to be measured are clustered in time, the Poisson statistics assumption results in underestimating both the true event count and any statistics associated with count variability; the 'busiest' part of the signal is partially missed. Using the formalism associated with the pair-correlation function, we develop first-order correction expressions for the general case of arbitrary counting statistics. The results are verified through simulation of a realistic clustering scenario

  3. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand

    2014-01-01

    (Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that both ensure worst-case time bounds and provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, and exhibits several orders of magnitude improvements w…

  4. Experimental dead time corrections for a linear position-sensitive proportional counter

    International Nuclear Information System (INIS)

    Yelon, W.B.; Tompson, C.W.; Mildner, D.F.R.; Berliner, R.; Missouri Univ., Columbia

    1984-01-01

    Two simple counters included in the charge-digitization circuitry of a position-sensitive proportional counter using the charge division method for position encoding have enabled us to determine the dead time losses for the system. An interesting positional dependence of the dead time τ is observed, which agrees with a simple model. The system enables us to correct the experimental data for dead time and to be indifferent to the relatively slow analog-to-digital converters used in the system. (orig.)

  5. Dynamic optimum dead time in piezoelectric transformer-based switch-mode power supplies

    DEFF Research Database (Denmark)

    Ekhtiari, Marzieh; Andersen, Thomas; Andersen, Michael A. E.

    2016-01-01

    Soft switching is required to attain high efficiency in high-frequency power converters. Piezoelectric transformer-based converters can benefit from soft switching in terms of significantly diminished switching losses and stresses. Adequate dead time is needed in order to deliver sufficient energy to charge and discharge the input capacitance of piezoelectric transformers in order to achieve zero-voltage switching. This paper proposes a method for detecting the optimum dead time in piezoelectric transformer-based switch-mode power supplies. The provision of sufficient dead time in every cycle…

  6. Smith predictor-based multiple periodic disturbance compensation for long dead-time processes

    Science.gov (United States)

    Tan, Fang; Li, Han-Xiong; Shen, Ping

    2018-05-01

    Many disturbance rejection methods have been proposed for processes with dead-time, while these existing methods may not work well under multiple periodic disturbances. In this paper, a multiple periodic disturbance rejection is proposed under the Smith predictor configuration for processes with long dead-time. One feedback loop is added to compensate periodic disturbance while retaining the advantage of the Smith predictor. With information of the disturbance spectrum, the added feedback loop can remove multiple periodic disturbances effectively. The robust stability can be easily maintained through the rigorous analysis. Finally, simulation examples demonstrate the effectiveness and robustness of the proposed method for processes with long dead-time.
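
    The Smith predictor configuration that the proposed scheme builds on can be sketched in discrete time as follows (a minimal sketch with an illustrative FOPDT plant and PI gains; the paper's added periodic-disturbance feedback loop is not shown). With a perfect internal model, the controller effectively sees the delay-free plant:

        import numpy as np

        # Discrete-time Smith predictor for a long-dead-time FOPDT plant
        # y(s)/u(s) = K*exp(-L*s)/(T*s + 1); all parameters are illustrative.
        K, T, L, dt = 1.0, 10.0, 8.0, 0.1
        d = int(round(L / dt))               # dead time in samples
        a = np.exp(-dt / T)
        b = K * (1.0 - a)                    # exact zero-order-hold step

        kp, ki = 1.0, 0.1                    # PI gains tuned on the delay-free model
        r = 1.0                              # set-point
        y = ym = integ = 0.0
        udelay = [0.0] * d                   # transport delay on the plant input
        ymdelay = [0.0] * d                  # model output delayed by L

        for _ in range(3000):
            fb = ym + (y - ymdelay[0])       # Smith feedback: model + mismatch term
            e = r - fb
            integ += ki * e * dt
            u = kp * e + integ

            ym = a * ym + b * u              # delay-free internal model
            ymdelay = ymdelay[1:] + [ym]

            u_old = udelay[0]                # plant sees the input delayed by L
            udelay = udelay[1:] + [u]
            y = a * y + b * u_old

        print(f"output after {3000 * dt:.0f} s: y = {y:.3f} (set-point {r})")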

  7. Dead time effects from linear amplifiers and discriminators in single detector systems

    International Nuclear Information System (INIS)

    Funck, E.

    1986-01-01

    The dead-time losses originating from a linear amplifier combined with a discriminator for pulse-height selection are investigated. Measurements are carried out to determine the type of dead time represented by the amplifier-discriminator combination. The corrections involved in feeding the discriminator output pulses into an electronic module producing a blocking time are discussed, and practical hints are given to reduce them. (orig.)

  8. MTR2: a discriminator and dead-time module used in counting systems

    International Nuclear Information System (INIS)

    Bouchard, J.

    2000-01-01

    In the field of radioactivity measurement, there is a constant need for highly specialized electronic modules such as ADCs, amplifiers, discriminators, dead-time modules, etc., but it is sometimes almost impossible to find on the market modules whose performance corresponds to our needs. The purpose of the module presented here, called MTR2 (Module de Temps-mort Reconductible), is to process, in terms of pulse-height discrimination and dead-time corrections, the pulses delivered by the detectors used in counting systems. This dead time, of the extendible type, is triggered by both the positive and negative parts of the incoming pulse, and the dead-time corrections are made according to the live-time method. This module, which has been developed and tested at LPRI, can be used alone in simple counting channels or in more complex systems such as coincidence systems. The philosophy governing the choice and implementation of this type of dead time, as well as the system used for the dead-time corrections, is presented. The electronic scheme and the performances are also presented. This module is available in the NIM standard.

  9. Detector dead-time effects and paralyzability in high-speed quantum key distribution

    International Nuclear Information System (INIS)

    Rogers, Daniel J; Bienfang, Joshua C; Nakassis, Anastase; Xu Hai; Clark, Charles W

    2007-01-01

    Recent advances in quantum key distribution (QKD) have given rise to systems that operate at transmission periods significantly shorter than the dead times of their component single-photon detectors. As systems continue to increase in transmission rate, security concerns associated with detector dead times can limit the production rate of sifted bits. We present a model of high-speed QKD in this limit that identifies an optimum transmission rate for a system with given link loss and detector response characteristics

  10. Dead-time corrections on long-interval measurements of short-lived activities

    International Nuclear Information System (INIS)

    Irfan, M.

    1977-01-01

    A method has been proposed to make correction for counting losses due to dead time where the counting interval is comparable to or larger than the half-life of the activity under investigation. Counts due to background and any long-lived activity present in the source have been taken into consideration. The method is, under certain circumstances, capable of providing a valuable check on the accuracy of the dead time of the counting system. (Auth.)

  11. Correction of the counted number for dead time in detector systems for radiographic images

    International Nuclear Information System (INIS)

    Cerdeira E, A.; Cicuttin, A.; Cerdeira, A.; Estrada, M.; Luca, A. de

    2002-01-01

    The effect of dead time in a particle-counting detection system, and the contribution of this error to the final image resolution, is analysed. A statistical criterion is given for the optimization of electronic parameters, such as the dead time and the counting memory, which helps in implementing these systems with the minimum characteristics necessary to satisfy the resolution requirements. (Author)

  12. Simple circuit for precise measurement of live, dead or clock time in gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Hammer, W.; Sterlinski, S.

    1976-01-01

    The basic design features and characteristics of the circuit are described in the paper. The circuit, coupled to a multichannel analyser (MCA), enables one of the times, live (T_l), dead (T_d) or clock (T_c), to be measured precisely; the second is measured by the built-in timer of the MCA. Having the T_c/T_l ratio and utilizing suitable mathematical formulas, one can make corrections for both main effects (dead time and pile-up) which yield counting losses in gamma-ray spectrometry at high and/or variable activities. Two examples of the dead-time and pile-up corrections using the new circuit are presented in this paper. (author)

  13. Relationship between γ detection dead-time and count correction factor

    International Nuclear Information System (INIS)

    Wu Huailong; Zhang Jianhua; Chu Chengsheng; Hu Guangchun; Zhang Changfan; Hu Gen; Gong Jian; Tian Dongfeng

    2015-01-01

    The relationship between dead-time and the count correction factor was investigated by using an interference source, for the purpose of high-activity γ measurement. The count rates were maintained at several 10 s⁻¹ with γ energies of 0.3-1.3 MeV for a 10⁴-10⁵ Bq radioactive source. It is proved that the relationship between count loss and dead-time is unaffected by γ energy and count intensity, so the same correction formula can be used for any nuclide measurement. (authors)

  14. Dead wood biomass and turnover time, measured by radiocarbon, along a subalpine elevation gradient.

    Science.gov (United States)

    Kueppers, Lara M; Southon, John; Baer, Paul; Harte, John

    2004-12-01

    Dead wood biomass can be a substantial fraction of stored carbon in forest ecosystems, and coarse woody debris (CWD) decay rates may be sensitive to climate warming. We used an elevation gradient in Colorado Rocky Mountain subalpine forest to examine climate and species effects on dead wood biomass, and on CWD decay rate. Using a new radiocarbon approach, we determined that the turnover time of lodgepole pine CWD (340 ± 130 years) was roughly half as long in a site with 2.5-3 °C warmer air temperature as that of pine (630 ± 400 years) or Engelmann spruce CWD (800 ± 960 and 650 ± 410 years) in cooler sites. Across all sites and both species, CWD age ranged from 2 to 600 years, and turnover time was 580 ± 180 years. Total standing and fallen dead wood biomass ranged from 4.7 ± 0.2 to 54 ± 1 Mg ha⁻¹, and from 2.8 to 60% of aboveground live tree biomass. Dead wood biomass increased 75 kg ha⁻¹ per meter gain in elevation and decreased 13 Mg ha⁻¹ for every degree C increase in mean air temperature. Differences in biomass and decay rates along the elevation gradient suggest that climate warming will lead to a loss of dead wood carbon from subalpine forest.

  15. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

    An on-line iterative method of system identification for a kind of nonlinear FOPDT system is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in that its dead time depends on the input signal and the other parameters are time dependent…

  16. Corrections for the combined effects of decay and dead time in live-timed counting of short-lived radionuclides

    International Nuclear Information System (INIS)

    Fitzgerald, R.

    2016-01-01

    Studies and calibrations of short-lived radionuclides, for example ¹⁵O, are of particular interest in nuclear medicine. Yet counting experiments on such species are vulnerable to an error due to the combined effect of decay and dead time; separate decay corrections and dead-time corrections do not account for this issue. Usually counting data are decay-corrected to the start time of the count period, or else, instead of correcting the count rate, the mid-time of the measurement is used as the reference time. Correction factors are derived for both of those methods, considering both extending and non-extending dead time. Series approximations are derived here and the accuracy of those approximations is discussed. - Highlights: • Derived combined effects of decay and dead time. • Derived for counting systems with extending or non-extending dead times. • Derived series expansions for both midpoint and decay-to-start-time methods. • Useful for counting experiments with short-lived radionuclides. • Examples given for ¹⁵O, used in PET scanning.
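
    The size of the combined effect is easy to demonstrate numerically (a sketch with an illustrative rate and dead time, using the ¹⁵O half-life; the paper's series corrections are not reproduced here): correcting for dead time and decay separately leaves a bias of a few percent at high initial rates, because the losses are concentrated early in the interval when the rate is highest.

        import numpy as np
        from scipy.integrate import quad

        # Toy demonstration for O-15 (T1/2 = 122 s), non-extending dead time
        lam = np.log(2.0) / 122.0
        n0, tau, T = 5.0e4, 5.0e-6, 300.0   # start rate, dead time, count time

        m = lambda t: n0 * np.exp(-lam * t) / (1.0 + n0 * np.exp(-lam * t) * tau)
        N_rec, _ = quad(m, 0.0, T)          # counts actually recorded

        # Separate corrections: dead time via the mean observed rate, then
        # the usual decay correction back to the start of the count.
        R_mean = N_rec / T
        n0_naive = (N_rec / (1.0 - R_mean * tau)) * lam / (1.0 - np.exp(-lam * T))
        print(f"separately corrected n0: {n0_naive:,.0f}/s (true {n0:,.0f}/s)")
        # -> biased low by roughly 2% for these values; hence the need for
        #    a combined decay/dead-time correction factor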

  17. Dead time and recovery time investigations on grinding plants with the aid of radioisotopes

    Energy Technology Data Exchange (ETDEWEB)

    Roetzer, H; Hagspiel, W

    1977-02-01

    With the aid of the radionuclides ⁵⁶Mn and ²⁴Na as tracers, the following characteristics were investigated for a roller mill and for a tandem air-swept grinding plant, respectively: the time of passage of the mill feed material, the retention time distribution in the grinding system (most frequent and mean retention time), and the time required for, respectively, 50%, 90% and 95% of the material to pass, in some instances separately for the three raw material components limestone, clay and sand. In addition, in the case of the air-swept grinding plant, the time required for conveying the raw meal sample from the sampling point to the X-ray fluorescence analysis apparatus was determined. The mean retention time of the material in the two roller mills IV and V was 2.93 and 2.55 minutes respectively; for the tandem air-swept grinding plant it was 8 minutes. The time taken for 90% of the mill feed to pass through the mill was 5.5 and 5 minutes for the roller mills respectively, and was about 18.3 minutes for the air-swept plant. The dead time for conveying the raw meal sample to the X-ray fluorescence apparatus, including further grinding of the sample in a vibratory mill, was 12 minutes.

  18. A method for the determination of detector channel dead time for a neutron time-of-flight spectrometer

    International Nuclear Information System (INIS)

    Adib, M.; Salama, M.; Abd-Kawi, A.; Sadek, S.; Hamouda, I.

    1975-01-01

    A new method is developed to measure the dead time of a detector channel for a neutron time-of-flight spectrometer. The method is based on the simultaneous use of two identical BF₃ detectors with two different efficiencies, due to their different enrichment in ¹⁰B. The measurements were performed using the T.O.F. spectrometer installed at channel No. 6 of the ET-RR-1 reactor. The main contribution to the dead time was found to be due to the time analyser and the neutron detector used. The analyser dead time was determined using a square-wave pulse generator with a frequency of 1 Mc/s. For channel widths of 24.4 μs, 48.8 μs and 97.6 μs, the weighted dead times for a statistical pulse distribution were found to be 3.25 μs, 1.87 μs respectively. The dead time of the detector contributes most to the counting losses and its value was found to be (33 ± 3) μs.

  19. Geiger-Mueller haloid counter dead time dependence on counting rate

    International Nuclear Information System (INIS)

    Onishchenko, A.M.; Tsvetkov, A.A.

    1980-01-01

    The experimental dependences of the dead time of Geiger counters (SBM-19, SBM-20, SBM-21 and SGM-19) on the count rate are presented. The two-source method has been used to determine the dead time of the counters with increased stability. The counters are connected in the commonly used discrete-counting circuit with a load resistance of 50 MOhm and a separating capacitance of 10 pF. Voltage pulses are fed to a counting device with a resolution time of 100 ns, a discrimination threshold of 3 V, an input resistance of 3.6 Ω and an input capacitance of 15 pF. The time constant of the counter RC-circuit is 50 μs
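
    The two-source method referred to above reduces to a short calculation. The sketch below implements the standard formula found in counting-statistics texts (e.g., Knoll); the count rates in the example are illustrative, not measured values:

```python
from math import sqrt

def two_source_dead_time(n1, n2, n12, nb=0.0):
    """Estimate a non-extending dead time (s) from the two-source method.

    n1, n2 : observed rates (1/s) with source 1 alone and source 2 alone
    n12    : observed rate with both sources in place
    nb     : background rate
    All rates are observed (dead-time-distorted) rates.
    """
    X = n1*n2 - nb*n12
    Y = n1*n2*(n12 + nb) - nb*n12*(n1 + n2)
    Z = Y*(n1 + n2 - n12 - nb) / X**2
    return X*(1.0 - sqrt(1.0 - Z)) / Y

# Illustrative rates (counts per second):
tau = two_source_dead_time(n1=10000.0, n2=12000.0, n12=21500.0, nb=50.0)
print(f"estimated dead time: {tau*1e6:.2f} us")   # ~2 us for these numbers
```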

  20. A low dead time vernier delay line TDC implemented in an Actel flash-based FPGA

    International Nuclear Information System (INIS)

    Qin Xi; Feng Changqing; Zhang Deliang; Zhao Lei; Liu Shubin; An Qi

    2013-01-01

    In this paper, a high-precision vernier delay line (VDL) TDC (Time-to-Digital Converter) implemented in an Actel flash-based FPGA (A3PE1500) is presented, achieving a resolution of 16.4 ps root mean square, or a 42 ps average bin size. The TDC has a dead time of about 200 ns, while the dynamic range is 655.36 μs. A double delay line method is employed to cut the dead time in half and improve performance. As the bin size of the TDC depends on temperature, a compensation algorithm is adopted to correct for temperature drift, and the TDC shows satisfactory performance over a temperature range from -5℃ to +55℃. (authors)

  1. Simple formulae for interpretation of the dead time α (first moment) method of reactor noise

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1999-01-01

    The Markov chain approach for solving problems related to the presence of a non-extending dead time in a particle counting circuit with time-correlated pulses was developed in an earlier paper. The formalism was applied to, among others, the dead time α (first moment) method of reactor noise. For this problem, however, the solution obtained was largely numerical in character and had a tendency to break down for systems close to criticality. In the present paper, simple analytical expressions are derived for the count rate and L ex , the quantities of interest in this method. Comparisons with Monte Carlo simulations show that these formulae are accurate in the range of system parameters of practical interest

  2. Development of the software dead time methodology for the 4πβ-γ software coincidence system analysis program

    International Nuclear Information System (INIS)

    Toledo, Fabio de; Brancaccio, Franco; Dias, Mauro da Silva

    2009-01-01

    The Laboratorio de Metrologia Nuclear - LMN (Nuclear Metrology Laboratory) at IPEN-CNEN/SP, Sao Paulo, Brazil, developed a new Software Coincidence System (SCS) for 4πβ-γ radioisotope standardization. SCS is composed of the data acquisition hardware, which records the coincidence data, and the coincidence data analysis program that calculates the radioactive activity of the target sample. Due to the intrinsic signal-sampling characteristics of the hardware, a single saturated pulse can produce multiple undesired data recordings, and pulse pile-up also leads to bad recordings. Because the 4π geometry gives high beta detection efficiencies, beta counting rates are much greater than gamma ones, so multiple recordings inflate the beta counts significantly, resulting in a corresponding increase in the calculated activity value. In order to minimize the effect of such bad recordings, a software dead time was introduced into the coincidence analysis program, under development at LMN, to discard multiple recordings due to pulse pile-up or saturation. This work presents the methodology developed to determine the optimal software dead time value for better accuracy, and discusses the results, pointing to possibilities for software improvement. (author)
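
    The essence of such a software dead time is a filter over the recorded time stamps: any pulse arriving within a window τ of the last accepted pulse is discarded as a pile-up or multiple-recording artifact. A minimal sketch (the time-stamp format and the value of τ are assumptions, not the LMN implementation):

```python
def apply_software_dead_time(timestamps, tau):
    """Keep only events separated by at least tau from the last *accepted*
    event (non-extending dead-time behaviour). timestamps must be sorted."""
    accepted, last = [], None
    for t in timestamps:
        if last is None or (t - last) >= tau:
            accepted.append(t)
            last = t
    return accepted

# Example: three fast repeats of one saturated pulse collapse to one event.
events = [0.0, 1e-7, 2e-7, 5e-6, 1.2e-5]           # seconds
print(apply_software_dead_time(events, tau=1e-6))  # -> [0.0, 5e-06, 1.2e-05]
```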

  3. Sensorless Control Technology for PMSG based on the Dead-time Compensation voltage

    Directory of Open Access Journals (Sweden)

    Yang Li-yong

    2015-01-01

    In order to improve the low-speed performance of a speed-sensorless control system for a permanent-magnet synchronous generator (PMSG), this paper introduces a novel dead-time compensation control method. A mathematical model is established for the influence of the dead zone on the output voltage of the voltage-source inverter. Based on this model, the reference output voltage of the current regulator is corrected, and the compensated stator voltage is then applied to flux estimation, which improves the performance of the flux estimation. Finally, the rotor position and speed are estimated from the back-electromotive force, an approach with a simple algorithm and good robustness. Experiments with the new control method confirm the correctness and feasibility of the theoretical analysis.

  4. Adaptive Neural Tracking Control for Discrete-Time Switched Nonlinear Systems with Dead Zone Inputs

    Directory of Open Access Journals (Sweden)

    Jidong Wang

    2017-01-01

    In this paper, adaptive neural controllers of the subsystems are proposed for a class of discrete-time switched nonlinear systems with dead-zone inputs under arbitrary switching signals. The complicated structure of discrete-time switched nonlinear systems and the existence of the dead zone make such systems difficult to control. Radial basis function neural networks are employed to approximate the unknown terms of each subsystem. Switched update laws are designed such that each parameter estimate is held constant until its corresponding subsystem is active. The closed-loop system is then shown to be stable with all signals bounded. Finally, an example is given to illustrate the effectiveness of the proposed method.

  5. Minimal Time Problem with Impulsive Controls

    Energy Technology Data Exchange (ETDEWEB)

    Kunisch, Karl, E-mail: karl.kunisch@uni-graz.at [University of Graz, Institute for Mathematics and Scientific Computing (Austria); Rao, Zhiping, E-mail: zhiping.rao@ricam.oeaw.ac.at [Austrian Academy of Sciences, Radon Institute of Computational and Applied Mathematics (Austria)

    2017-02-15

    Time optimal control problems for systems with impulsive controls are investigated. Sufficient conditions for the existence of time optimal controls are given. A dynamic programming principle is derived and Lipschitz continuity of an appropriately defined value functional is established. The value functional satisfies a Hamilton–Jacobi–Bellman equation in the viscosity sense. A numerical example for a rider-swing system is presented, and it is shown that allowing impulsive controls enlarges the reachable set compared to nonimpulsive controls.

  6. Minimizing guard ring dead space in silicon detectors with an n-type guard ring at the edge of the detector

    International Nuclear Information System (INIS)

    Palviainen, Tanja; Tuuva, Tuure; Leinonen, Kari

    2007-01-01

    Detectors with n-type silicon with an n+-type guard ring were investigated. In the present work, a new p+/n/n+ detector structure with an n+ guard ring is described. The guard ring is placed at the edge of the detector. The detector depletion region also extends sideways, allowing for signal collection very close to the n-guard ring. In this kind of detector structure, the dead space of the detector is minimized to only the region below the guard ring. This is proved by simulations done using Silvaco/ATLAS software

  7. Minimizing guard ring dead space in silicon detectors with an n-type guard ring at the edge of the detector

    Energy Technology Data Exchange (ETDEWEB)

    Palviainen, Tanja [Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta (Finland)]. E-mail: tanja.palviainen@lut.fi; Tuuva, Tuure [Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta (Finland); Leinonen, Kari [Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta (Finland)

    2007-04-01

    Detectors with n-type silicon with an n{sup +}-type guard ring were investigated. In the present work, a new p{sup +}/n/n{sup +} detector structure with an n{sup +} guard ring is described. The guard ring is placed at the edge of the detector. The detector depletion region extends also sideways, allowing for signal collection very close to the n-guard ring. In this kind of detector structure, the dead space of the detector is minimized to be only below the guard ring. This is proved by simulations done using Silvaco/ATLAS software.

  8. Improving the counting efficiency in time-correlated single photon counting experiments by dead-time optimization

    Energy Technology Data Exchange (ETDEWEB)

    Peronio, P.; Acconcia, G.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2015-11-15

    Time-Correlated Single Photon Counting (TCSPC) has long been recognized as the most sensitive method for fluorescence lifetime measurements, but it often requires long data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue: splitting the light onto several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rates. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.

  9. Absolute dose calibration of an X-ray system and dead time investigations of photon-counting techniques

    CERN Document Server

    Carpentieri, C; Ludwig, J; Ashfaq, A; Fiederle, M

    2002-01-01

    High precision in the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was carried out with dedicated dose calibration equipment (a dosimeter) as a function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by varying the rate. Two different types of dead time, paralysable and non-paralysable, are discussed. The dead time depends on the settings of the front-end electronics and is a function of signal height, which can lead to systematic errors. Dead time losses in excess of 30% have been found for the PCC at 200 kHz of absorbed photons per pixel.
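
    The two models are conventionally written m = n·exp(−nτ) for the paralysable case and m = n/(1 + nτ) for the non-paralysable case, where n is the true rate, m the observed rate, and τ the dead time. A small sketch comparing the predicted losses (τ and the rates below are illustrative, not the Medipix1 values):

```python
import numpy as np

tau = 2.0e-6  # dead time (s), illustrative

def observed_paralyzable(n, tau):
    return n * np.exp(-n * tau)

def observed_nonparalyzable(n, tau):
    return n / (1.0 + n * tau)

for n in [1e4, 1e5, 2e5]:  # true rates (1/s)
    loss_p = 1.0 - observed_paralyzable(n, tau) / n
    loss_n = 1.0 - observed_nonparalyzable(n, tau) / n
    print(f"n = {n:8.0f}/s  paralysable loss = {100*loss_p:5.1f}%  "
          f"non-paralysable loss = {100*loss_n:5.1f}%")
# At rates of a few hundred kHz per pixel and a dead time of the order
# of microseconds, both models predict losses of tens of percent,
# consistent in magnitude with the >30% quoted above.
```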

  10. Analysis and Compensation of Dead-Time Effect of a ZVT PWM Inverter Considering the Rise- and Fall-Times

    Directory of Open Access Journals (Sweden)

    Hailin Zhang

    2016-11-01

    The dead-time effect, an intrinsic problem of converters based on the half-bridge unit, leads to distortions in the converter output. Although several dead-time effect compensation or elimination methods have been proposed, they cannot fully remove the blanking delay error, because the output current polarity is difficult to detect accurately. This paper utilizes the zero-voltage-transition (ZVT) technique to eliminate the blanking delay error, which is the main drawback of the hard-switching inverter, although the technique was originally aimed at improving efficiency. A typical ZVT inverter, the auxiliary resonant snubber inverter (ARSI), is analyzed. The blanking delay error is completely eliminated in the ARSI. Another error source, caused by the finite rise- and fall-times of the voltage and not considered for the hard-switching inverter, is analyzed. A compensation method based on voltage error estimation is proposed to compensate for the rise- and fall-time error. A prototype was developed to verify the effectiveness of the proposed control. Both simulation and experimental results demonstrate that the quality of the output current and voltage of the ARSI is better than that of the hard-switching inverter, owing to the elimination of the blanking delay error. The total harmonic distortion (THD) of the output is further reduced by using the proposed compensation method in the ARSI.

  11. Minimization of Retrieval Time During Software Reuse | Salami ...

    African Journals Online (AJOL)

    Minimization of Retrieval Time During Software Reuse. ... Retrieval of relevant software from the repository during software reuse can be time consuming if the repository contains many ...

  12. Dead time of different neutron detectors associated with a pulsed electronics with current collection

    International Nuclear Information System (INIS)

    Bacconnet, Eugene; Duchene, Jean; Duquesne, Henry; Schmitt, Andre

    1968-01-01

    After outlining that the development of fast neutron reactor physics, notably kinetics, requires highly efficient neutron detectors and pulse measurement chains able to cope with high counting rates, the authors report the measurement of the dead time of various neutron detectors used in the experimental study of fast neutron reactors. They present the SAITB 1 electronic measurement set, its components, its general characteristics, and the protected connection between the detector and the electronics. They then describe the experiment: general considerations on detector location and measurements, the detectors studied (fission chambers, boron counters), and the exploitation of the obtained results (principle, data, high-threshold counting gain) [fr]

  13. Variable dead time counters. 1 - theoretical responses and the effects of neutron multiplication

    International Nuclear Information System (INIS)

    Lees, E.W.; Hooton, B.W.

    1978-10-01

    A theoretical expression is derived for calculating the response of any variable dead time counter (VDC) used in the passive assay of plutonium by neutron counting of the natural spontaneous fission activity. The effects of neutron multiplication in the sample, arising from interactions of the original spontaneous fission neutrons, are shown to modify the linear relationship between the VDC signal and Pu mass. Numerical examples are shown for the Euratom VDC, and a systematic investigation of the various factors affecting neutron multiplication is reported. Limited comparisons between the calculations and experimental data indicate provisional validity of the calculations. (author)

  14. A nondispersive X-ray spectrometer with dead time correction of great accuracy

    International Nuclear Information System (INIS)

    Guillon, H.; Friant, A.

    1976-01-01

    Processing the analog signals from an energy-dispersive X-ray spectrometer requires a great number of functions to be assembled. Instead of using function modules, it was decided to build a unit that works out digital input data for the mini-computer from the signals delivered by the Si(Li) detector. The unit contains six cards for the following functions: main amplifier, threshold-level stabilizer and pile-up detector, amplitude encoder, pulse generator and fast amplifier, and chronometer with dead time correction and high voltage polarization [fr]

  15. Active cancellation - A means to zero dead-time pulse EPR.

    Science.gov (United States)

    Franck, John M; Barnes, Ryan P; Keller, Timothy J; Kaufmann, Thomas; Han, Songi

    2015-12-01

    The necessary resonator employed in pulse electron paramagnetic resonance (EPR) rings after the excitation pulse and creates a finite detector dead-time that ultimately prevents the detection of signal from fast relaxing spin systems, hindering the application of pulse EPR to room temperature measurements of interesting chemical or biological systems. We employ a recently available high bandwidth arbitrary waveform generator (AWG) to produce a cancellation pulse that precisely destructively interferes with the resonant cavity ring-down. We find that we can faithfully detect EPR signal at all times immediately after, as well as during, the excitation pulse. This is a proof of concept study showcasing the capability of AWG pulses to precisely cancel out the resonator ring-down, and allow for the detection of EPR signal during the pulse itself, as well as the dead-time of the resonator. However, the applicability of this approach to conventional EPR experiments is not immediate, as it hinges on either (1) the availability of low-noise microwave sources and amplifiers to produce the necessary power for pulse EPR experiment or (2) the availability of very high conversion factor micro coil resonators that allow for pulse EPR experiments at modest microwave power. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. A Unified Approach to Adaptive Neural Control for Nonlinear Discrete-Time Systems With Nonlinear Dead-Zone Input.

    Science.gov (United States)

    Liu, Yan-Jun; Gao, Ying; Tong, Shaocheng; Chen, C L Philip

    2016-01-01

    In this paper, an effective adaptive control approach is constructed to stabilize a class of nonlinear discrete-time systems that contain unknown functions, an unknown dead-zone input, and an unknown control direction. Unlike a linear dead zone, the dead zone considered in this paper is nonlinear. To overcome the noncausal problem, which would render the control scheme infeasible, the systems are transformed into an m-step-ahead predictor. Due to the nonlinear dead zone, the transformed predictor still contains a nonaffine function. In addition, the gain function of the dead-zone input and the control direction are assumed to be unknown. These conditions complicate the controller design. Thus, the implicit function theorem is applied to deal with the nonaffine dead zone, the problem caused by the unknown control direction is resolved by applying the discrete Nussbaum gain, and neural networks are used to approximate the unknown function. Based on Lyapunov theory, all the signals of the resulting closed-loop system are proved to be semiglobally uniformly ultimately bounded. Moreover, the tracking error is proved to be regulated to a small neighborhood around zero. The feasibility of the proposed approach is demonstrated by a simulation example.

  17. Dead-time free pixel readout architecture for ATLAS front-end IC

    CERN Document Server

    Einsweiler, Kevin F; Kleinfelder, S A; Luo, L; Marchesini, R; Milgrome, O; Pengg, F X

    1999-01-01

    A low power sparse scan readout architecture has been developed for the ATLAS pixel front-end IC. The architecture supports a dual discriminator and extracts the time over threshold (TOT) information along with a 2-D spatial address of the hits, associating them with a unique 7-bit beam crossing number. The IC implements level-1 trigger filtering along with event building (grouping together all hits in a beam crossing) in the end of column (EOC) buffer. The events are transmitted over a 40 MHz serial data link with the protocol supporting buffer overflow handling by appending error flags to events. This mixed-mode full custom IC is implemented in a 0.8 μm HP process to meet the requirements for the pixel readout in the ATLAS inner detector. The circuits have been tested and the IC provides dead-time-less, ambiguity-free readout at a 40 MHz data rate.

  18. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  19. Radiocardiography of minimal transit times: a useful diagnostic procedure

    International Nuclear Information System (INIS)

    Schicha, H.; Vyska, K.; Becker, V.; Feinendegen, L.E.; Duesseldorf Univ., F.R. Germany)

    1975-01-01

    Contrary to mean transit times, minimal transit times are the differences between arrival times of an indicator. Arrival times in various cardiac compartments can be easily measured with radioisotopes and fast gamma cameras permitting data processing. This paper summarizes data selected from more than 1500 measurements made so far on normal individuals and patients with valvular heart disease, myocardial insufficiency, digitalis effect, atrial fibrillation, hypothyroidism, hyperthyroidism, effort-syndrome and coronary artery disease. (author)

  20. Design and analysis of modified Smith predictors for self-regulating and non-self regulating processes with dead time

    CERN Document Server

    Saravanakumar, G; Nayak, C G

    2007-01-01

    A modification of the Smith predictor for controlling higher order processes with integral action and long dead-time is proposed in this paper. The controller used in this Smith predictor is an Integral-Proportional Derivative controller, where the integrator is in the forward path and the proportional and derivative actions are in the feedback path, acting on the feedback signal. The main objective of this paper is to design a Dead Time Compensator (DTC) which has few tuning parameters, simple controller tuning, and robust tuning formulae, and which yields a critically damped system that is as fast as possible in its setpoint tracking and load disturbance rejection. The controller in this paper is tuned by an adaptive method. This paper also presents a survey of various dead time compensators and an analysis of their performance.
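
    The structure of a Smith predictor is easy to demonstrate for a first-order-plus-dead-time process: the undelayed output of an internal model is fed back, so that, with a perfect model, the controller effectively sees a delay-free plant. The discrete-time simulation below is a generic sketch with an ordinary PI controller and illustrative parameters; it is not the I-PD structure or the tuning method of this paper:

```python
import numpy as np

# First-order-plus-dead-time process: G(s) = K*exp(-L*s)/(T*s + 1)
K, T, L = 2.0, 5.0, 3.0        # illustrative gain, time constant, dead time
Ts = 0.1                       # sample period (s)
a = np.exp(-Ts / T)
b = K * (1.0 - a)
d = int(round(L / Ts))         # dead time in samples

Kp, Ki = 0.4, 0.08             # PI gains (illustrative)
y, ym, integ = 0.0, 0.0, 0.0   # plant output, model output, integrator
model_delay = [0.0] * d        # delay line for the model output
plant_delay = [0.0] * d        # delay line for the plant input
r = 1.0                        # unit step setpoint

for k in range(600):
    # Smith-predictor feedback: y + (undelayed model) - (delayed model);
    # with a perfect model the delay cancels out of the loop.
    e = r - (y + ym - model_delay[0])
    integ += Ki * e * Ts
    u = Kp * e + integ

    ym = a * ym + b * u        # advance the internal model (no delay)
    model_delay.append(ym)
    model_delay.pop(0)

    plant_delay.append(u)      # the real plant sees the input d samples late
    y = a * y + b * plant_delay.pop(0)

print(f"output after {600*Ts:.0f} s: {y:.3f} (setpoint {r})")
```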

  1. Investigation of the behaviour of both dead time and observed counting rates of He-3 gas filled neutron detector

    Energy Technology Data Exchange (ETDEWEB)

    Adib, M.; Eid, Y.; Abdel Kawy, A.; Maayouf, R.M.A.; Shuriet, G.M.; Hamouda, I.

    1981-01-01

    The behaviour of the dead time of a He-3 detector, operating in both the proportional and the corona discharge regions, is investigated as a function of the neutron reaction rate inside the detector. The experimental method applied makes use of the deviations, due to the detector dead time, of the fluctuations of the observed counting rates from a Poisson distribution. In order to check the validity of the experimental method used in the present work, the dead times of BF/sub 3/ neutron detectors with different efficiencies (due to different enrichment in B-10) were determined. It is shown that the observed counting rate from the He-3 detector operating in the proportional region decreases with increasing neutron reaction rate, for reaction rates ranging from 8 x 10/sup 3/ to 2.5 x 10/sup 4/ reactions/sec. Such behaviour was not observed when operating the He-3 detector in the corona discharge region.

  2. Investigation of the behaviour of both dead time and observed counting rates of He-3 gas filled neutron detector

    International Nuclear Information System (INIS)

    Adib, M.; Eid, Y.; Abdel Kawy, A.; Maayouf, R.M.A.; Shuriet, G.M.; Hamouda, I.

    1981-01-01

    The behaviour of the dead time of a He-3 detector, operating in both the proportional and the corona discharge regions, is investigated as a function of the neutron reaction rate inside the detector. The experimental method applied makes use of the deviations, due to the detector dead time, of the fluctuations of the observed counting rates from a Poisson distribution. In order to check the validity of the experimental method used in the present work, the dead times of BF 3 neutron detectors with different efficiencies (due to different enrichment in B-10) were determined. It is shown that the observed counting rate from the He-3 detector operating in the proportional region decreases with increasing neutron reaction rate, for reaction rates ranging from 8 x 10 3 to 2.5 x 10 4 reactions/sec. Such behaviour was not observed when operating the He-3 detector in the corona discharge region. (orig.) [de]
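
    The idea behind the fluctuation method is that dead time suppresses the variance of the counts more strongly than their mean: for a non-extending dead time τ the variance-to-mean ratio of counts accumulated over a long interval is approximately (1 − mτ)², where m is the observed rate, so τ can be recovered from the observed statistics alone. A Monte Carlo sketch of this idea (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_true = 2.0e4       # true event rate (1/s), illustrative
tau = 3.0e-5         # non-extending dead time (s)
T_int = 0.1          # counting interval (s)
n_intervals = 2000

counts = np.empty(n_intervals)
for i in range(n_intervals):
    # Poisson arrivals over one counting interval
    t = np.sort(rng.uniform(0.0, T_int, rng.poisson(n_true * T_int)))
    accepted, last = 0, -np.inf
    for ti in t:                 # impose a non-extending dead time
        if ti - last >= tau:
            accepted += 1
            last = ti
    counts[i] = accepted

m = counts.mean() / T_int              # observed rate
fano = counts.var() / counts.mean()    # variance-to-mean ratio
tau_est = (1.0 - np.sqrt(fano)) / m    # invert F ~ (1 - m*tau)^2

print(f"observed rate {m:.0f}/s, Fano factor {fano:.3f}, "
      f"estimated dead time {tau_est*1e6:.1f} us (true {tau*1e6:.1f} us)")
```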

  3. Discrimination of shot-noise-driven Poisson processes by external dead time - Application to radioluminescence from glass

    Science.gov (United States)

    Saleh, B. E. A.; Tavolacci, J. T.; Teich, M. C.

    1981-01-01

    Ways in which dead time can be used to constructively enhance or diminish the effects of point processes that display bunching in the shot-noise-driven doubly stochastic Poisson point process (SNDP) are discussed. Interrelations between photocount bunching arising in the SNDP and the antibunching character arising from dead-time effects are investigated. It is demonstrated that the dead-time-modified count mean and variance for an arbitrary doubly stochastic Poisson point process can be obtained from the Laplace transform of the single-fold and joint-moment-generating functions for the driving rate process. The theory is in good agreement with experimental values for radioluminescence radiation in fused silica, quartz, and glass, and the process has many applications in pulse, particle, and photon detection.

  4. Analysis and Mitigation of Dead Time Harmonics in the Single-Phase Full-Bridge PWM Converters with Repetitive Controllers

    DEFF Research Database (Denmark)

    Yang, Yongheng; Zhou, Keliang; Wang, Huai

    2018-01-01

    In order to prevent the power switching devices (e.g., the Insulated-Gate Bipolar Transistor, IGBT) from shoot-through in voltage source converters during a switching period, a dead time is added, either in the hardware driver circuits of the IGBTs or implemented in software in the Pulse-Width Modulation (PWM) scheme.

  5. Free time minimizers for the three-body problem

    Science.gov (United States)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics, Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 2014), which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude asymptoticity to Euler's central configurations by a second variation argument. Central configurations correspond to rest points of the McGehee blown-up dynamics. The large open set of mass ratios consists of those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  6. Switching Device Dead Time Optimization of Resonant Double-Sided LCC Wireless Charging System for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Xi Zhang

    2017-11-01

    To reduce the influence of the dead time setting on the power level and efficiency of the inverter in a double-sided LCC resonant wireless power transfer (WPT) system, a dead-time soft-switching optimization method for metal–oxide–semiconductor field-effect transistors (MOSFETs) is proposed. First, a mathematical description of the double-sided LCC resonant wireless charging system is established and its operating modes are analyzed, showing quantitatively that the secondary-side compensation capacitor C2 can be adjusted to ensure that the circuit is inductive. A dead-time optimization design method is then proposed that achieves zero-voltage switching (ZVS) of the inverter, which is closely related to the performance of the WPT system. Finally, a prototype is built. The experimental results verify that a dead time calculated by this optimization method ensures soft switching of the inverter MOSFETs and improves the power and efficiency of the WPT system.

  7. Correction of dynamic time-activity curves for gamma-camera dead time, radiotracer delivery, and radioactive decay: special considerations with ultrashort-lived radioisotopes

    International Nuclear Information System (INIS)

    Kuruc, A.; Zimmerman, R.E.; Treves, S.

    1985-01-01

    Time-vs.-activity curves obtained using ultrashort-lived radioisotopes often need to be corrected for the effects of gamma-camera dead time and physical decay. Count loss due to gamma-camera dead time can be monitored by using an electronic oscillator incorporated into the gamma camera. Two algorithms that use this information to correct time-activity curves are discussed. It is also shown that the effect of physical decay on a time-activity curve depends on the time course of delivery of the radioisotope to the organ of interest. A mathematical technique that corrects for physical decay is described
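
    A common form of the oscillator technique scales each frame of the time-activity curve by the fraction of injected reference pulses actually recorded in that frame, and then applies the decay correction. The sketch below is a generic illustration, not either of the paper's two algorithms; the oscillator frequency, frame format, and the use of the frame mid-time as decay reference are all assumptions (and, as the abstract notes, a fixed mid-time reference is only approximate when tracer delivery is spread in time):

```python
import numpy as np

lam = np.log(2) / 122.24   # decay constant of an ultrashort-lived tracer (1/s)
f_osc = 1.0e4              # injected oscillator pulse frequency (1/s), assumed

def correct_frame(counts, osc_counts, t_mid, frame_len):
    """Dead-time and decay correction for one frame of a dynamic study.

    counts     : organ counts recorded in the frame
    osc_counts : oscillator pulses recorded in the same frame
    t_mid      : frame mid-time relative to the decay reference (s)
    frame_len  : frame duration (s)
    """
    live_fraction = osc_counts / (f_osc * frame_len)   # fraction of pulses kept
    corrected = counts / live_fraction                 # dead-time correction
    return corrected * np.exp(lam * t_mid)             # decay correction

# Example frame: 10% of the oscillator pulses were lost to dead time.
print(correct_frame(counts=8.0e4, osc_counts=9.0e3, t_mid=30.0, frame_len=1.0))
```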

  8. G.M. counter and pre-determined dead time; Compteur G.M. et temps mort impose

    Energy Technology Data Exchange (ETDEWEB)

    Lamotte, R; Le Baud, P [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1959-07-01

    This paper is divided into two main parts. - The first part recalls the principle on which a G.M. counter works and examines the factors which lead to inaccuracies in counting. The concept of dead time, simple when associated with the counter alone, becomes complicated as soon as an electronic dead time is introduced to meet the demands of a measurement or an experiment. The resulting dead time, due to the coexistence of these dead times arising from a single cause, follows certain laws of probability. From the analysis of the various possible combinations, the conditions which must be fulfilled by a system with a pre-determined dead time can be established. This leads to a method for measuring the dead time of a G.M. counter, and to the possibility of studying it under the foreseen conditions of use. - In the second part the principle, construction and characteristics of two systems with pre-determined dead time are discussed. To conclude, a comparison of several experimental results justifies an extension of the possibilities of a G.M. counter used in conjunction with such a system. (author) [fr]

  9. SU-E-I-88: The Effect of System Dead Time On Real-Time Plastic and GOS Based Fiber-Optic Dosimetry Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hoerner, M; Hintenlang, D [Univ Florida, Gainesville, FL (United States)

    2015-06-15

    Purpose: A methodology is presented to correct for measurement inaccuracies at high detector count rates using plastic and GOS scintillation fibers coupled to a photomultiplier tube with digital readout. This system allows temporal acquisition and manipulation of measured data. Methods: The detection system used a plastic scintillator and a separate gadolinium scintillator, both (0.5 diameter), coupled to an optical fiber with a Hamamatsu photon counter with a built-in microcontroller and digital interface. Count rate performance of the system was evaluated using the nonparalyzable detector model. Detector response was investigated across multiple radiation sources, including an orthovoltage x-ray system, cobalt-60 gamma rays, a proton therapy beam, and a diagnostic radiography x-ray tube. The dead time parameter was calculated by measuring the count rate of the system at different exposure rates using a reference detector. Results: The system dead time was evaluated for the following sources of radiation used clinically: diagnostic-energy x-rays, cobalt-60 gamma rays, orthovoltage x-rays, a proton particle accelerator, and megavoltage x-rays. It was found that dead time increased significantly when exposing the detector to sources capable of generating Cerenkov radiation (all of the sources except the diagnostic x-rays), with increasing prominence at higher photon energies. Percent depth dose curves generated by a dedicated ionization chamber and compared to the detection system demonstrated that correcting for dead time improves accuracy. For most sources, the nonparalyzable model provided an improved fit to the system response. Conclusion: Overall, the system dead time varied across the investigated radiation particles and energies. It was demonstrated that the accuracy of the system response was greatly improved by correcting for dead time effects. Cerenkov radiation plays a significant role in the increase in the system dead time through transient absorption effects attributed to

  10. A 65 nm CMOS analog processor with zero dead time for future pixel detectors

    Energy Technology Data Exchange (ETDEWEB)

    Gaioni, L., E-mail: luigi.gaioni@unibg.it [Università di Bergamo, I-24044 Dalmine (Italy); INFN, Sezione di Pavia, I-27100 Pavia (Italy); Braga, D.; Christian, D.C.; Deptuch, G.; Fahim, F. [Fermi National Accelerator Laboratory, Batavia IL (United States); Nodari, B. [Università di Bergamo, I-24044 Dalmine (Italy); INFN, Sezione di Pavia, I-27100 Pavia (Italy); Centre National de Recherche Scientifique, APC/IN2P3, Paris (France); Ratti, L. [Università di Pavia, I-27100 Pavia (Italy); INFN, Sezione di Pavia, I-27100 Pavia (Italy); Re, V. [Università di Bergamo, I-24044 Dalmine (Italy); INFN, Sezione di Pavia, I-27100 Pavia (Italy); Zimmerman, T. [Fermi National Accelerator Laboratory, Batavia IL (United States)

    2017-02-11

    Next generation pixel chips at the High-Luminosity (HL) LHC will be exposed to extremely high levels of radiation and particle rates. In the so-called Phase II upgrade, ATLAS and CMS will need a completely new tracker detector, complying with the very demanding operating conditions and the delivered luminosity (up to 5×10{sup 34} cm{sup −2} s{sup −1} in the next decade). This work is concerned with the design of a synchronous analog processor with zero dead time developed in a 65 nm CMOS technology, conceived for pixel detectors at the HL-LHC experiment upgrades. It includes a low noise, fast charge sensitive amplifier featuring a detector leakage compensation circuit, and a compact, single ended comparator that guarantees very good performance in terms of channel-to-channel dispersion of threshold without needing any pixel-level trimming. A flash ADC is exploited for digital conversion immediately after the charge amplifier. A thorough discussion on the design of the charge amplifier and the comparator is provided along with an exhaustive set of simulation results.

  11. Some relations between asymptotic results for dead-time-distorted processes. Part I. The expectation values

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

    The purpose of the present study is twofold. On the one hand, it should provide us with a deeper insight into the mechanism of these processes. On the other hand, we shall arrive at some new forms of asymptotic results not commonly known, in particular those pertaining to an extended dead time. In addition, the novel approach permits independent checking of earlier results (some of which had been at variance with previous claims). In view of the usually quite cumbersome arithmetic involved, such controls are certainly most welcome. In this first part all the relations concerning the asymptotic expectation values will be discussed; the second part will do the same for the variances. A more elegant treatment of these problems, based on some general asymptotic results for renewal processes of the type first derived by Smith, must be postponed for the moment since the corresponding formulae for a modified process are not yet readily available. We hope to be able to fill this gap in the near future

  12. Adaptive NN tracking control of uncertain nonlinear discrete-time systems with nonaffine dead-zone input.

    Science.gov (United States)

    Liu, Yan-Jun; Tong, Shaocheng

    2015-03-01

    In this paper, an adaptive tracking control design is studied for a class of nonlinear discrete-time systems with dead-zone input. The considered systems are of the nonaffine pure-feedback form, and the dead-zone input appears nonlinearly in the systems. The contributions of the paper are as follows: 1) the control problem for this class of discrete-time systems with dead zone is investigated for the first time; 2) major difficulties arise in stabilizing such systems, and to overcome them the systems are transformed into an n-step-ahead predictor, in which a nonaffine function is still present; and 3) an adaptive compensative term is constructed to compensate for the parameters of the dead zone. Neural networks are used to approximate the unknown functions in the transformed systems. Based on Lyapunov theory, it is proven that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded and that the tracking error converges to a small neighborhood of zero. Two simulation examples are provided to verify the effectiveness of the control approach.

  13. Two-degree-of-freedom fractional order-PID controllers design for fractional order processes with dead-time.

    Science.gov (United States)

    Li, Mingjie; Zhou, Ping; Zhao, Zhicheng; Zhang, Jinggang

    2016-03-01

    Recently, fractional order (FO) processes with dead-time have attracted more and more attention from researchers in the control field, but the FO-PID controller design techniques available for FO processes with dead-time lack direct systematic approaches. In this paper, a simple design and parameter tuning approach for a two-degree-of-freedom (2-DOF) FO-PID controller based on internal model control (IMC) is proposed for FO processes with dead-time. Conventional one-degree-of-freedom control exhibits the shortcoming that robustness and dynamic response performance are coupled; 2-DOF control overcomes this weakness by decoupling robustness and dynamic performance from each other. The adjustable parameter η2 of the FO-PID controller is directly related to the robustness of the closed-loop system, and an analytical expression is given relating the maximum sensitivity specification Ms to the parameter η2. In addition, the parameter η1 can easily be selected according to the dynamic performance requirement of the practical system. By approximating the dead-time term of the process model with a first-order Padé approximation or Taylor series, expressions for the 2-DOF FO-PID controller parameters are derived for three classes of FO processes with dead-time. Moreover, compared with other methods, the proposed method is simple and easy to implement. Finally, simulation results are given to illustrate the effectiveness of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Effects of injection timing, before and after top dead center on the propulsion and power in a diesel engine

    Directory of Open Access Journals (Sweden)

    Nader Raeie

    2014-06-01

    It is well known that injection strategies, including injection timing and pressure, play the most important role in determining engine performance, especially pollutant emissions. However, how injection timing and pressure quantitatively affect the performance of a turbocharged diesel engine is not well understood. In this paper, the FIRE computational fluid dynamics (CFD) code with an improved spray model has been used to simulate the spray and combustion processes of diesel fuel with early and late injection timings and six different injection pressures (from 275 bar to 1000 bar). It is concluded that early injection provides lower soot and higher NOx emissions than late injection. In this study, the fuel injection time was varied over two stages, before top dead center (BTDC) and after top dead center (ATDC), in order to achieve optimum emissions and power at a specific operating point.

  15. Effective sensitivity in 3D PET: The impact of detector dead time on 3D system performance

    International Nuclear Information System (INIS)

    Bailey, D.L.; Jones, T.; Meikle, S.R.

    1996-01-01

    3D PET has higher sensitivity than 2D PET. Sensitivity is determined by two components: the geometric solid angle for detection, and the fractional dead time, i.e., the time for which the detector is unavailable for accepting events. The loss in overall sensitivity as a function of radioactivity concentration due to these factors for 3D PET has been characterized by a parameter, the effective sensitivity, which combines absolute sensitivity and noise equivalent count rates. This parameter includes scatter, system sensitivity, dead time, and random coincidence rates, and permits comparisons between different tomographs as well as the same tomograph under different conditions. Effective sensitivity decreases most rapidly for larger, open 3D tomographs. The loss in effective sensitivity with increasing count rate suggests that new faster scintillation detectors will be needed to realize the sensitivity gain of 3D PET over a wide dynamic range of radioactivity concentrations
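
    The noise-equivalent count (NEC) rate underlying this notion of effective sensitivity is commonly written NEC = T²/(T + S + kR), where T, S and R are the true, scattered and random coincidence rates and k is 1 or 2 depending on how the randoms are estimated. A minimal sketch (rates illustrative) showing why effective sensitivity peaks and then falls as activity, and hence the randoms rate, grows:

```python
def nec_rate(trues, scatter, randoms, k=2):
    """Noise-equivalent count rate (1/s); k=2 for delayed-window randoms
    subtraction, k=1 for a noiseless randoms estimate."""
    return trues**2 / (trues + scatter + k * randoms)

# Illustrative 3D-mode coincidence rates (1/s): randoms grow roughly with
# the square of activity, so NEC eventually decreases at high activity.
for trues, scatter, randoms in [(2e5, 8e4, 5e4),
                                (4e5, 1.6e5, 3.0e5),
                                (5e5, 2.0e5, 8.0e5)]:
    print(f"T = {trues:.0e}/s  NEC = {nec_rate(trues, scatter, randoms):.3e}/s")
```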

  16. Computer program for the determination of minimal cardiac transit times

    International Nuclear Information System (INIS)

    Bosiljanoff, P.; Herzog, H.; Schmid, A.; Sommer, D.; Vyska, K.; Feinendegen, L.E.

    1982-10-01

    An Anger-type gamma camera is used to register the first pass of a radioactive blood-flow tracer through the heart. The acquired data are processed by a suitable computer program yielding time-activity curves for sequential heart segments, which are selected by the region-of-interest technique. The program prints the minimal cardiac transit times, in terms of total transit times as well as segmental transit times for the right atrium, right ventricle, lung, left atrium and left ventricle. The measured values are normalized to a heart rate of 80/min and compared with normal mean values. The deviation from the normal mean values is characterized by a coefficient F. Moreover, these findings are rated qualitatively. (orig./MG)

  17. Affordable CZT SPECT with dose-time minimization (Conference Presentation)

    Science.gov (United States)

    Hugg, James W.; Harris, Brian W.; Radley, Ian

    2017-03-01

    PURPOSE: Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS: Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because CZT was historically expensive, the first clinical applications were limited to small FOV, radiation doses were initially high, and exam times long. Advances have significantly improved the efficiency of CZT-based molecular imaging systems, and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS: Compared to NaI scintillator gamma cameras, intrinsic spatial resolution improved from 3.8 mm to 2.0 mm, and energy resolution improved from 9.8% to …; these improvements, together with reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.

  18. Congestion relief by travel time minimization in near real time : Detroit area I-75 corridor study.

    Science.gov (United States)

    2008-12-01

    "This document summarizes the activities concerning the project: Congestion Relief by : Travel Time Minimization in Near Real Time -- Detroit Area I-75 Corridor Study since : the inception of the project (Nov. 22, 2006 through September 30, 2008). : ...

  19. Standardization of {sup 67}Ga, {sup 51}Cr and {sup 55}Fe by live-timed anti-coincidence counting with extending dead time

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Carlos J. da [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI), Instituto de Radioprotecao e Dosimetria (IRD), Comissao Nacional de Energia Nuclear - CNEN, Av. Salvador Allende, s/n0-Recreio, CEP 22780-160 Rio de Janeiro (Brazil) and Laboratorio de Instrumentacao Nuclear (LIN/PEN/COPPE/UFRJ), Caixa Postal 68590, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: Carlos@ird.gov.br; Iwahara, A.; Poledna, R.; Bernardes, E.M. de O; Prinzio, M.A.R.R. de [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI), Instituto de Radioprotecao e Dosimetria (IRD), Comissao Nacional de Energia Nuclear - CNEN, Av. Salvador Allende, s/n0-Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Lopes, Ricardo T. [Laboratorio de Instrumentacao Nuclear (LIN/PEN/COPPE/UFRJ), Caixa Postal 68590, CEP 21945-970 Rio de Janeiro (Brazil)

    2008-02-15

    In this work, the activity standardization of {sup 51}Cr, {sup 55}Fe and {sup 67}Ga by live-timed anti-coincidence counting with extending dead time is described. The difficulties of the method, the uncertainties of the measurement results, and the comparison of these results with other measurement methods are discussed.

  20. A High-Precision Control for a ZVT PWM Soft-Switching Inverter to Eliminate the Dead-Time Effect

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2016-07-01

    Owing to their high efficiency, low electromagnetic interference (EMI) noise, and close resemblance to their pulse-width-modulation (PWM) hard-switching counterparts, zero-voltage-transition (ZVT) PWM soft-switching inverters are very suitable for high-performance applications. However, the conventional control algorithms intended for high efficiency generally result in voltage distortion. Thus, this paper, for the first time, proposes a high-precision control method that eliminates the dead-time effect by controlling the auxiliary current in the auxiliary resonant snubber inverter (ARSI), which is a typical ZVT PWM inverter. The dead-time effect of the ARSI is analyzed and shown to be distinct from that of hard-switching inverters. The proposed high-precision control is introduced based on this investigation of the dead-time effect. A prototype was developed to verify the effectiveness of the proposed control. The experimental results show that the total harmonic distortion (THD) of the output current of the ARSI is reduced compared with that of the hard-switching inverter, because the blanking delay error is eliminated. The quality of the output current and voltage can be further improved by utilizing the proposed control method.

  1. The Markov chain method for solving dead time problems in the space dependent model of reactor noise

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    The discrete-time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of the space-energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point-model equations with suitably defined parameters. (author)

  2. Siting Samplers to Minimize Expected Time to Detection

    Energy Technology Data Exchange (ETDEWEB)

    Walter, Travis; Lorenzetti, David M.; Sohn, Michael D.

    2012-05-02

    We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier paper, Sohn and Lorenzetti (1) developed a proof-of-concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal detect-to-treat architectures, which maximize the probability of detecting a release. This paper develops a more general approach, and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal detect-to-warn architectures, which minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement.

  3. International comparison of methods to test the validity of dead-time and pile-up corrections for high-precision gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Houtermans, H.; Schaerf, K.; Reichel, F. (International Atomic Energy Agency, Vienna (Austria)); Debertin, K. (Physikalisch-Technische Bundesanstalt, Braunschweig (Germany, F.R.))

    1983-02-01

    The International Atomic Energy Agency organized an international comparison of methods applied in high-precision γ-ray spectrometry for the correction of dead-time and pile-up losses. Results of this comparison are reported and discussed.

  4. Measurement of the front-end dead-time of the LHCb muon detector and evaluation of its contribution to the muon detection inefficiency

    CERN Document Server

    INSPIRE-00357120; Archilli, F.; Auriemma, G.; Baldini, W.; Bencivenni, G.; Bizzeti, A.; Bocci, V.; Bondar, N.; Bonivento, W.; Bochin, B.; Bozzi, C.; Brundu, D.; Cadeddu, S.; Campana, P.; Carboni, G.; Cardini, A.; Carletti, M.; Casu, L.; Chubykin, A.; Ciambrone, P.; Dané, E.; De Simone, P.; Falabella, A.; Felici, G.; Fiore, M.; Fontana, M.; Fresch, P.; Furfaro, E.; Graziani, G.; Kashchuk, A.; Kotriakhova, S.; Lai, A.; Lanfranchi, G.; Loi, A.; Maev, O.; Manca, G.; Martellotti, G.; Neustroev, P.; Oldeman, R.G.C.; Palutan, M.; Passaleva, G.; Penso, G.; Pinci, D.; Polycarpo, E.; Saitta, B.; Santacesaria, R.; Santimaria, M.; Santovetti, E.; Saputi, A.; Sarti, A.; Satriano, C.; Satta, A.; Schmidt, B.; Schneider, T.; Sciascia, B.; Sciubba, A.; Siddi, B.G.; Tellarini, G.; Vacca, C.; Vazquez-Gomez, R.; Vecchi, S.; Veltri, M.; Vorobyev, A.

    2016-04-06

    A method is described which allows the dead-time of the front-end electronics of the LHCb muon detector to be deduced from a series of measurements performed at different luminosities at a bunch-crossing rate of 20 MHz. The measured values of the dead-time range from 70 ns to 100 ns. These results allow the performance of the muon detector to be estimated at the future bunch-crossing rate of 40 MHz and at higher luminosity.

  5. Design and Test of a 65nm CMOS Front-End with Zero Dead Time for Next Generation Pixel Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Gaioni, L. [INFN, Pavia; Braga, D. [Fermilab; Christian, D. [Fermilab; Deptuch, G. [Fermilab; Fahim, F. [Fermilab; Nodari, B. [Lyon, IPN; Ratti, L. [INFN, Pavia; Re, V. [INFN, Pavia; Zimmerman, T. [Fermilab

    2017-09-01

    This work is concerned with the experimental characterization of a synchronous analog processor with zero dead time developed in a 65 nm CMOS technology, conceived for pixel detectors at the HL-LHC experiment upgrades. It includes a low noise, fast charge sensitive amplifier with detector leakage compensation circuit, and a compact, single ended comparator able to correctly process hits belonging to two consecutive bunch crossing periods. A 2-bit Flash ADC is exploited for digital conversion immediately after the preamplifier. A description of the circuits integrated in the front-end processor and the initial characterization results are provided

  6. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    International Nuclear Information System (INIS)

    Croft, Stephen; Favalli, Andrea

    2017-01-01

    Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates (singles, doubles and triples) are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, and even in cases where the next-higher-order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  7. Stabilization and analytical tuning rule of double-loop control scheme for unstable dead-time process

    Science.gov (United States)

    Ugon, B.; Nandong, J.; Zang, Z.

    2017-06-01

    The presence of unstable dead-time systems in process plants often poses a daunting challenge for the design of standard PID controllers, which are intended not only to provide closed-loop stability but also to give good overall performance and robustness. In this paper, we conduct a stability analysis of a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this double-loop control scheme, which employs two P/PID controllers, to control first-order or second-order unstable dead-time processes typically found in the process industries. Based on the necessary and sufficient Routh-Hurwitz stability criteria, we establish several stability regions enclosing the P/PID parameter values that guarantee closed-loop stability of the double-loop control scheme. A systematic tuning rule is developed to obtain the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples, and the results are compared with some well-established tuning methods reported in the literature.

  8. Neutron coincidence counting based on time interval analysis with dead time corrected one and two dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    International Nuclear Information System (INIS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-03-01

    The report describes a new neutron multiplicity counting method based on Rossi-alpha distributions, and gives the necessary dead time correction formulas for the multiplicity counting method. The method was tested numerically using a Monte Carlo simulation of pulse trains. The use of this multiplicity method in the field of waste assay is explained: it can be used to determine the amount of fissile material in a waste drum without prior knowledge of the actual detection efficiency.
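
    For readers who want to experiment, a minimal sketch of how a one-dimensional Rossi-alpha distribution is accumulated from a list of pulse time stamps (toy data here; this is our own simplification, not the report's implementation). A purely random train gives a flat histogram, while correlated fission chains would add a decaying exponential component on top of it.

```python
import numpy as np

# Build a 1-D Rossi-alpha distribution: histogram of time differences
# from each trigger pulse to all later pulses within a fixed window.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 5000))   # pulse time stamps [s], toy data

window = 1e-3                               # inspection window [s]
bins = np.linspace(0.0, window, 65)
hist = np.zeros(len(bins) - 1)

for i, t0 in enumerate(t):
    j = np.searchsorted(t, t0 + window, side="right")
    dt = t[i + 1:j] - t0                    # intervals to later pulses
    hist += np.histogram(dt, bins=bins)[0]

print(hist[:8])                             # flat for a Poisson train
```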

  9. A GIS-driven integrated real-time surveillance pilot system for national West Nile virus dead bird surveillance in Canada

    Directory of Open Access Journals (Sweden)

    Aramini Jeff

    2006-04-01

    Full Text Available Abstract Background An extensive West Nile virus surveillance program of dead birds, mosquitoes, horses, and human infection has been launched as a result of West Nile virus first being reported in Canada in 2001. Some desktop and web GIS have been applied to West Nile virus dead bird surveillance. There has been an urgent need for comprehensive GIS services and real-time surveillance. Results A pilot system was developed to integrate real-time surveillance, real-time GIS, and Open GIS technology in order to enhance West Nile virus dead bird surveillance in Canada. Driven and linked by the newly developed real-time web GIS technology, this integrated real-time surveillance system includes conventional real-time web-based surveillance components, integrated real-time GIS components, and integrated Open GIS components. The pilot system identified the major GIS functions and capacities that may be important to public health surveillance. The six web GIS clients provide a wide range of GIS tools for public health surveillance. The pilot system has been serving Canadian national West Nile virus dead bird surveillance since 2005 and is adaptable to serve other disease surveillance. Conclusion This pilot system has streamlined, enriched, and enhanced national West Nile virus dead bird surveillance in Canada, improved productivity, and reduced operating costs. Its real-time GIS technology, static map technology, WMS integration, and its integration with a non-GIS real-time surveillance system make this pilot system unique in surveillance and public health GIS.

  10. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    Science.gov (United States)

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides.
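
    The principle is easy to reproduce in a few lines. The toy sketch below (our own simplification, not the paper's formalism) applies an extending dead time to a simulated Poisson train, accumulates the dead time from the stored intervals, and recovers the true rate as counts divided by live time.

```python
import numpy as np

# Live-timed counting sketch: with an extending dead time tau, every
# event (recorded or not) opens/extends a dead interval [t_i, t_i + tau];
# an event is recorded only if it arrives in live time, and the corrected
# rate is counts / accumulated live time.
rng = np.random.default_rng(1)
true_rate, tau, T = 2.0e4, 5.0e-6, 2.0       # [1/s], [s], [s]
t = np.cumsum(rng.exponential(1.0 / true_rate, int(true_rate * T * 2)))
t = t[t < T]

recorded, dead_until, dead_time = 0, -1.0, 0.0
for ti in t:
    if ti >= dead_until:                     # live: event is recorded
        recorded += 1
        dead_time += tau
    else:                                    # dead: interval is extended
        dead_time += ti + tau - dead_until
    dead_until = ti + tau
dead_time -= max(0.0, dead_until - T)        # clip the last interval at T

live_time = T - dead_time
print(f"observed rate : {recorded / T:10.1f} 1/s")
print(f"corrected rate: {recorded / live_time:10.1f} 1/s (true {true_rate:.0f})")
```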

  11. Controllers with Minimal Observation Power (Application to Timed Systems)

    DEFF Research Database (Denmark)

    Bulychev, Petr; Cassez, Franck; David, Alexandre

    2012-01-01

    We consider the problem of controller synthesis under imperfect information in a setting where there is a set of available observable predicates equipped with a cost function. The problem that we address is the computation of a subset of predicates sufficient for control and whose cost is minimal...

  12. Exciting times: Towards a totally minimally invasive paediatric urology service

    OpenAIRE

    Lazarus, John

    2011-01-01

    Following on from the first paediatric laparoscopic nephrectomy in 1992, the growth of minimally invasive ablative and reconstructive procedures in paediatric urology has been dramatic. This article reviews the literature related to laparoscopic dismembered pyeloplasty, optimising posterior urethral valve ablation and intravesical laparoscopic ureteric reimplantation.

  13. Digital instrumentation and management of dead time: first results on a NaI well-type detector setup.

    Science.gov (United States)

    Censier, B; Bobin, C; Bouchard, J; Aubineau-Lanièce, I

    2010-01-01

    The LNE-LNHB is engaged in a development program on digital instrumentation, the first step being the instrumentation of a NaI well-type detector set-up. The prototype acquisition card and its technical specifications are presented together with the first comparison with the classical NIM-based acquisition chain, for counting rates up to 100 kcps. The digital instrumentation is shown to be counting-loss free in this range. This validates the main option adopted in this project, namely the implementation of an extending dead time with live-time measurement already successfully used in the MTR2 NIM module developed at LNE-LNHB.

  14. A Novel Choice Procedure of Magnetic Component Values for Phase Shifted Full Bridge Converters with a Variable Dead-Time Control Method

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2015-09-01

    Full Text Available Magnetic components are important parts of the phase shifted full bridge (PSFB) converter. During the dead-time of switches located in the same leg, the converter can achieve zero-voltage switching (ZVS) by using the energies stored in magnetic components to discharge or charge the output capacitances of the switches. Dead-time is usually calculated under a given set of pre-defined load conditions, with the result that the available energies are insufficient and ZVS capability is lost at light loads. In this paper, the PSFB converter is controlled by a variable dead-time method, so that full advantage can be taken of the energies stored in magnetic components. Considering that dead-time has a great effect on ZVS, the relationship between the available energies and the magnetic component values is formulated by analyzing the equivalent circuits during the dead-time intervals. Magnetic component values are chosen based on this relationship. The proposed choice procedure can make the available energies greater than the energies required for ZVS operation over a wide range of load conditions. Moreover, burst mode control is adopted in order to reduce the standby power loss. Experimental results coincide with the theoretical analysis. The proposed method is a simple and practical solution to extend the ZVS range.
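
    A back-of-envelope version of the underlying energy balance (hypothetical component values; the lumped-capacitance inequality and the quarter-resonant-period dead-time estimate are textbook simplifications, not the paper's exact design rule):

```python
import math

# ZVS check for one PSFB leg transition, using the simplified balance
#   0.5 * L_r * I**2  >=  C_eq * V_in**2
# and the quarter-resonant-period dead-time estimate
#   t_dead ~ (pi/2) * sqrt(L_r * C_eq).
V_in = 400.0           # input voltage [V]
L_r  = 12e-6           # resonant (leakage + external) inductance [H]
C_eq = 2 * 250e-12     # two switch output capacitances in the leg [F]

def zvs_ok(i_load):
    return 0.5 * L_r * i_load**2 >= C_eq * V_in**2

t_dead = (math.pi / 2) * math.sqrt(L_r * C_eq)
print(f"suggested dead time ~ {t_dead*1e9:.0f} ns")
for i in (0.5, 1.0, 2.0, 4.0):
    # Light loads fail the inequality: exactly the ZVS loss the paper's
    # variable dead-time control is meant to mitigate.
    print(f"I = {i:3.1f} A -> ZVS {'achieved' if zvs_ok(i) else 'lost'}")
```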

  15. Standardization of {sup 241}Am, {sup 124}Sb and {sup 131}I by live-timed anti-coincidence counting with extending dead time

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Carlos J. da [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI), Instituto de Radioprotecao e Dosimetria (IRD), Comissao Nacional de Energia Nuclear - CNEN, Av. Salvador Allende, s/n-Recreio, CEP 22780-160 Rio de Janeiro (Brazil) and Laboratorio de Instrumentacao Nuclear (LIN/PEN/COPPE/UFRJ), Caixa Postal 68590, CEP 21945-970 Rio de Janeiro (Brazil)], E-mail: Carlos@ird.gov.br; Iwahara, A.; Poledna, R.; Oliveira, E.M. de; Prinzio, M.A.R.R. de; Delgado, Jose U. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI), Instituto de Radioprotecao e Dosimetria (IRD), Comissao Nacional de Energia Nuclear - CNEN, Av. Salvador Allende, s/n-Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Lopes, Ricardo T. [Laboratorio de Instrumentacao Nuclear (LIN/PEN/COPPE/UFRJ), Caixa Postal 68590, CEP 21945-970 Rio de Janeiro (Brazil)

    2008-06-15

    The National Metrology Laboratory for Ionizing Radiation (LNMRI)/Brazil has implemented a live-timed anti-coincidence system with extending dead time to complement the existing systems in its Radionuclide Laboratory for activity measurements of radioactive sources. In this new system, the proportional counter has been replaced by a liquid-scintillation-counter for alpha and beta detection. In order to test the performance of the new system, radioactive solutions of {sup 131}I, {sup 124}Sb and {sup 241}Am have been standardized. In this work the measurement method, the results and the associated uncertainties are described and discussed.

  16. A Markov Chain approach for deriving the statistics of time-correlated pulses in the presence of non-extendible dead time

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1996-01-01

    The problem of deriving the statistics of time-correlated detector pulses in the presence of a non-extendible dead time is studied by constructing a Markov Chain to describe the process. Expressions for the transition matrix are derived for problems in the passive neutron assay of Pu and (zero-power) reactor noise. Perturbative and numerical solutions of the master equations are discussed for a simple problem in the passive neutron assay of Pu. Expressions for the mean count rate and variance in a given interval are derived. (Author)
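
    The simplest point of contact with such derivations is the uncorrelated limit: for a Poisson train passed through a non-extendible (non-paralyzable) dead time, the mean recorded rate is m = n/(1 + n*tau). A toy Monte Carlo check of that limit (our sketch, not the paper's Markov-chain machinery):

```python
import numpy as np

# Compare a simulated Poisson pulse train, filtered by a non-extendible
# dead time, with the classical result m = n / (1 + n*tau).
rng = np.random.default_rng(2)
n, tau, T = 5.0e4, 5.0e-6, 2.0          # true rate [1/s], dead time [s], duration [s]
t = np.cumsum(rng.exponential(1.0 / n, int(n * T * 2)))
t = t[t < T]

counted, dead_until = 0, -1.0
for ti in t:
    if ti >= dead_until:                # counter is live: pulse recorded
        counted += 1
        dead_until = ti + tau           # only recorded pulses open dead time

print(f"simulated rate: {counted / T:9.1f} 1/s")
print(f"analytic  rate: {n / (1 + n * tau):9.1f} 1/s")
```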

  17. Success and failure of dead-time models as applied to hybrid pixel detectors in high-flux applications

    International Nuclear Information System (INIS)

    Sobott, B. A.; Broennimann, Ch.; Schmitt, B.; Trueb, P.; Schneebeli, M.; Lee, V.; Peake, D. J.; Elbracht-Leong, S.; Schubert, A.; Kirby, N.; Boland, M. J.; Chantler, C. T.; Barnea, Z.; Rassool, R. P.

    2013-01-01

    Detector response functionals are found to have useful but also limited application to synchrotron studies where bunched fills are becoming common. By matching the detector response function to the source temporal structure, substantial improvements in efficiency, count rate and linearity are possible. The performance of a single-photon-counting hybrid pixel detector has been investigated at the Australian Synchrotron. Results are compared with the body of accepted analytical models previously validated with other detectors. Detector functionals are valuable for empirical calibration. It is shown that the matching of the detector dead-time with the temporal synchrotron source structure leads to substantial improvements in count rate and linearity of response. Standard implementations are linear up to ∼0.36 MHz pixel⁻¹; the optimized linearity in this configuration has an extended range up to ∼0.71 MHz pixel⁻¹; these are further correctable with a transfer function to ∼1.77 MHz pixel⁻¹. This new approach has wide application both in high-accuracy fundamental experiments and in standard crystallographic X-ray fluorescence and other X-ray measurements. The explicit use of data variance (rather than N^(1/2) noise) and direct measures of goodness-of-fit (χ_r^2) are introduced, raising issues not encountered in previous literature for any detector, and suggesting that these inadequacies of models may apply to most detector types. Specifically, parametrization of models with non-physical values can lead to remarkable agreement for a range of count-rate, pulse-frequency and temporal structure. However, especially when the dead-time is near resonant with the temporal structure, limitations of these classical models become apparent. Further, a lack of agreement at extreme count rates was evident.
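
    For reference, the transfer-function correction mentioned above reduces, in the ideal non-paralyzable case, to inverting m = n/(1 + n*tau). The sketch below applies that inversion with an effective, empirically fitted tau (a hypothetical value, and only a first approximation to the bunched-fill behaviour the paper analyzes):

```python
# Ideal non-paralyzable linearization: invert m = n/(1 + n*tau) to get
# n = m/(1 - m*tau).  tau here is an *effective* fitted number, not a
# physical constant of the detector.
tau = 120e-9                            # hypothetical effective dead time [s]

def true_rate(m):
    if m * tau >= 1.0:
        raise ValueError("observed rate at or beyond saturation")
    return m / (1.0 - m * tau)

for m in (0.2e6, 0.5e6, 1.0e6):         # observed rates [1/s]
    print(f"observed {m/1e6:4.1f} MHz -> corrected {true_rate(m)/1e6:5.2f} MHz")
```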

  18. Dead Man or Dead Hand?

    DEFF Research Database (Denmark)

    Ringe, Wolf-Georg

    and potential takeover bids. Recent Delaware case-law suggests that the most extreme, ‘dead hand’ version of such clauses might violate directors’ fiduciary duties. This short article develops some initial thoughts on the phenomenon and evaluates how the new poison pills would be handled under European takeover...

  19. Fit of experimental points to the sum of two (or one) exponentials with background. Program for ODRA 1305 computer. Part 2: for time analysers with constant dead time after each registered pulse (AC-256 type)

    International Nuclear Information System (INIS)

    Drozdowicz, K.; Krynicka-Drozdowicz, E.

    1979-01-01

    The LAMA program (in FORTRAN 1900 language), which fits a set of decaying experimental values to the sum of two (or one) exponentials with background, is described. The method of calculation, its accuracy, and the interpretation of the program results are given. The changes and extensions of the calculation, which take the dead time effect into account for time analysers having a constant dead time after each registered pulse, are described. (author)
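
    A modern equivalent of the fitting task is a few lines of SciPy (a re-implementation sketch with synthetic data and our own parameter names, not the LAMA code; a constant dead-time correction would be applied to the count data before fitting):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit y(t) = A1*exp(-l1*t) + A2*exp(-l2*t) + B to Poisson-distributed data.
def model(t, a1, l1, a2, l2, b):
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t) + b

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 200)
y = rng.poisson(model(t, 900.0, 1.5, 300.0, 0.3, 20.0)).astype(float)

# A constant dead time tau per registered pulse would be corrected first,
# e.g. with the non-paralyzable relation m -> m/(1 - m*tau), before fitting.
popt, pcov = curve_fit(model, t, y, p0=(800.0, 1.0, 200.0, 0.2, 10.0),
                       sigma=np.sqrt(np.maximum(y, 1.0)))
print("fitted decay constants:", popt[1], popt[3])
```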

  20. Developments in gamma-ray spectrometry: systems, software, and methods-II. 5. Performance of the Zero-Dead-Time Mode of the DSPEC Plus

    International Nuclear Information System (INIS)

    Keyser, Ronald M.; Twomey, Timothy R.; Bingham, Russell D.

    2001-01-01

    The processing of each input pulse causes dead time in all nuclear spectroscopy systems, even the new units based on digital signal processing. System dead time results from the fact that during the time period that the system is processing one pulse, it cannot process any subsequent pulses. All the components (amplifier, digitization, storage) of the signal-processing chain contribute to the total dead time. Previously, several techniques were developed to compensate for the dead-time losses in a spectroscopy system. The two most common types are live-time extension and loss-free counting. The live-time extension technique records the time that the system is dead and extends the counting time to collect more counts to compensate for the loss of counts when the system was dead. The live-time extension technique gives accurate results when measuring samples where the activity remains approximately constant during the measuring process (i.e., the dead time does not change significantly during a single measurement period). In this way, the counts collected in the extended time are representative of the counts lost during the previous counting time. The loss-free counting method of correcting for dead-time losses, as introduced by Harms and improved by Westphal, gives better results than live-time extension techniques in cases where the counting rate changes significantly during the measurement. It makes a loss-free spectrum by estimating the number of counts lost during a dead-time increment and adding this number to the channel corresponding to the energy of the just-processed pulse instead of the normal one count. However, this current method of adding counts in loss-free counting systems produces spectra where the data no longer obey Poisson statistics; that is, the uncertainty in a channel with N counts is not N^(1/2). Because of this, the calculation of the uncertainty associated with the spectral counts is not easy to determine. It may not be possible, in general
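
    Conceptually, the loss-free scheme can be sketched as follows (a simplification in the spirit of the Harms/Westphal description above, not the DSPEC Plus implementation): each processed event deposits a weight 1/(1 - f), where f is the instantaneous dead-time fraction, so channel contents become sums of weights rather than integer counts.

```python
import numpy as np

# Loss-free counting sketch: each event adds its own estimated losses.
def accumulate(channels, dead_fractions, n_channels=1024):
    spectrum = np.zeros(n_channels)
    for ch, f in zip(channels, dead_fractions):
        spectrum[ch] += 1.0 / (1.0 - f)   # weight > 1: event + lost companions
    return spectrum

spec = accumulate([100, 100, 505], [0.2, 0.5, 0.1])
print(spec[100], spec[505])               # 3.25, ~1.11: no longer integers
# Variance per channel is now sum(w_i**2), not the channel content itself,
# which is exactly the Poisson-statistics issue the abstract raises.
```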

  1. Minimizing total weighted completion time in a proportionate flow shop

    NARCIS (Netherlands)

    Shakhlevich, N.V.; Hoogeveen, J.A.; Pinedo, M.L.

    1998-01-01

    We study the special case of the m machine flow shop problem in which the processing time of each operation of job j is equal to pj; this variant of the flow shop problem is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance

  2. Measurable Disturbances Compensation: Analysis and Tuning of Feedforward Techniques for Dead-Time Processes

    Directory of Open Access Journals (Sweden)

    Andrzej Pawlowski

    2016-04-01

    Full Text Available In this paper, measurable disturbance compensation techniques are analyzed, focusing on the input-output and disturbance-output time delays. The feedforward compensation method is evaluated for the common structures that appear between the disturbance and process dynamics. Due to the presence of time delays, the study includes the causality and instability phenomena that can arise when a classical approach for disturbance compensation is used. Different feedforward configurations are analyzed for two feedback control techniques, PID (Proportional-Integral-Derivative) and MPC (Model Predictive Control), that are widely used for industrial process-control applications. A tuning methodology specific to the analyzed process structure is used to obtain improved disturbance rejection performance relative to classical approaches. The evaluation of the introduced disturbance rejection schemes is performed through simulation, considering process constraints in order to highlight the advantages and drawbacks in common scenarios. The performance of the analyzed structures is expressed with different indexes that allow direct comparisons. The obtained results show that proper design and tuning of the feedforward action helps to significantly improve the overall control performance in process control tasks.
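
    The causality issue is worth making concrete. For first-order-plus-dead-time process and disturbance models, the ideal compensator FF(s) = -G_d(s)/G_p(s) is a lead-lag with delay L_d - L_p, which must be truncated to zero when negative (a generic textbook construction with made-up parameter values, not the paper's tuning rules):

```python
# G_p(s) = Kp*e^(-Lp*s)/(Tp*s + 1),  G_d(s) = Kd*e^(-Ld*s)/(Td*s + 1)
# FF(s)  = -G_d/G_p = -(Kd/Kp) * (Tp*s + 1)/(Td*s + 1) * e^(-(Ld-Lp)*s)
def feedforward(Kp, Tp, Lp, Kd, Td, Ld):
    delay = Ld - Lp
    return {
        "gain": -Kd / Kp,
        "lead": Tp,                    # numerator time constant
        "lag": Td,                     # denominator time constant
        "delay": max(delay, 0.0),      # negative ideal delay is non-causal:
        "causal": delay >= 0.0,        # it must be truncated to zero
    }

print(feedforward(Kp=2.0, Tp=10.0, Lp=3.0, Kd=1.0, Td=8.0, Ld=1.0))
```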

  3. Time’s Deadly Arrow: Time and Temporality in Narratives of Immaterial Labor

    Directory of Open Access Journals (Sweden)

    Sabine von Dirke

    2016-01-01

    Full Text Available The article investigates the discourse on time and temporality in non-fictional and fictional accounts of paid, white collar labor, or, in the broader terminology of Maurizio Lazzarato, immaterial labor since the last quarter of the twentieth century. More specifically, it brings the critique of neoliberal capitalism by two influential social philosophers, Richard Sennett and Oskar Negt, in dialogue with fictional narratives of white collar labor: Rainer Merkel's novel Das Jahr der Wunder (The Year of Miracles, 2001), W.E. Richartz's Büroroman (Office Novel, 1976) and Wilhelm Genazino's Abschaffel trilogy (1977-1979). Sennett and Negt's non-fictional accounts contrast living and working conditions under the current hyper-flexible, neoliberal market economy to an earlier mode of a socially responsible capitalism. The latter is often nostalgically depicted as a golden age of a state-regulated labor market designed to protect the majority of the working population from exploitation and economic hardship. Yet, narratives on the working world published during the 1970s call into question the assumption that state-regulated work hours offer better conditions for lived time and the constitution of subjecthood.

  4. Teaching at the Bedside. Maximal Impact in Minimal Time.

    Science.gov (United States)

    Carlos, William G; Kritek, Patricia A; Clay, Alison S; Luks, Andrew M; Thomson, Carey C

    2016-04-01

    Academic physicians encounter many demands on their time including patient care, quality and performance requirements, research, and education. In an era when patient volume is prioritized and competition for research funding is intense, there is a risk that medical education will become marginalized. Bedside teaching, a responsibility of academic physicians regardless of professional track, is challenged in particular out of concern that it generates inefficiency and distractions from direct patient care and can distort physician-patient relationships. At the same time, the bedside is a powerful location for teaching as learners more easily engage with educational content when they can directly see its practical relevance for patient care. Also, bedside teaching enables patients and family members to engage directly in the educational process. Successful bedside teaching can be aided by consideration of four factors: climate, attention, reasoning, and evaluation. Creating a safe environment for learning and patient care is essential. We recommend that educators set expectations about use of medical jargon and engagement of the patient and family before they enter the patient room with trainees. Keep learners focused by asking relevant questions of all members of the team and by maintaining a collective leadership style. Assess and model clinical reasoning through a hypothesis-driven approach that explores the rationale for clinical decisions. Focused, specific, real-time feedback is essential for the learner to modify behaviors for future patient encounters. Together, these strategies may alleviate challenges associated with bedside teaching and ensure it remains a part of physician practice in academic medicine.

  5. Spatial and time variations of radon-222 concentration in the atmosphere of a dead-end horizontal tunnel

    International Nuclear Information System (INIS)

    Richon, Patrick; Perrier, Frederic; Sabroux, Jean-Christophe; Trique, Michaeel; Ferry, Cecile; Voisin, Vincent; Pili, Eric

    2004-01-01

    The concentration of radon-222 has been monitored since 1995 in the atmosphere of a 2 m transverse dimension, 128 m long, dead-end horizontal tunnel located in the French Alps, at an altitude of 1600 m. Most of the time, the radon concentration is stable, with an average value ranging from 200 Bq m⁻³ near the entrance to about 1000 Bq m⁻³ in the most confined section, with an equilibrium factor between radon and its short-lived decay products varying from 0.61 to 0.78. However, radon bursts are repeatedly observed, with amplitudes reaching up to 36 × 10³ Bq m⁻³ and durations varying from one to several weeks, with similar spatial variations along the tunnel as the background concentration. These spatial variations are qualitatively interpreted in terms of natural ventilation. Comparing the radon background concentration with the measured radon exhalation flux at the wall yields an estimate of 8 ± 2 × 10⁻⁶ s⁻¹ (0.03 ± 0.007 h⁻¹) for the ventilation rate. The hypothesis that the bursts could be due to transient changes in ventilation can be ruled out. Thus, the bursts are the result of transiently increased radon exhalation at the walls, which could be due to meteorological effects or possibly combined hydrological and mechanical forcing associated with the water level variations of the nearby Roselend reservoir lake. Such studies are of interest for radiation protection in poorly ventilated underground settings, and, ultimately, for a better understanding of radon exhalation associated with tectonic or volcanic processes.

  6. Landsat time series and lidar as predictors of live and dead basal area across five bark beetle-affected forests

    Science.gov (United States)

    Benjamin C. Bright; Andrew T. Hudak; Robert E. Kennedy; Arjan J. H. Meddens

    2014-01-01

    Bark beetle-caused tree mortality affects important forest ecosystem processes. Remote sensing methodologies that quantify live and dead basal area (BA) in bark beetle-affected forests can provide valuable information to forest managers and researchers. We compared the utility of light detection and ranging (lidar) and the Landsat-based detection of trends in...

  7. Time Management in the Operating Room: An Analysis of the Dedicated Minimally Invasive Surgery Suite

    Science.gov (United States)

    Hsiao, Kenneth C.; Machaidze, Zurab

    2004-01-01

    Background: Dedicated minimally invasive surgery suites are available that contain specialized equipment to facilitate endoscopic surgery. Laparoscopy performed in a general operating room is hampered by the multitude of additional equipment that must be transported into the room. The objective of this study was to compare the preparation times between procedures performed in traditional operating rooms versus dedicated minimally invasive surgery suites to see whether operating room efficiency is improved in the specialized room. Methods: The records of 50 patients who underwent laparoscopic procedures between September 2000 and April 2002 were retrospectively reviewed. Twenty-three patients underwent surgery in a general operating room and 18 patients in a minimally invasive surgery suite. Nine patients were excluded because of cystoscopic procedures undergone prior to laparoscopy. Various time points were recorded from which various time intervals were derived, such as preanesthesia time, anesthesia induction time, and total preparation time. A 2-tailed, unpaired Student t test was used for statistical analysis. Results: The mean preanesthesia time was significantly faster in the minimally invasive surgery suite (12.2 minutes) compared with that in the traditional operating room (17.8 minutes) (P=0.013). Mean anesthesia induction time in the minimally invasive surgery suite (47.5 minutes) was similar to time in the traditional operating room (45.7 minutes) (P=0.734). The average total preparation time for the minimally invasive surgery suite (59.6 minutes) was not significantly faster than that in the general operating room (63.5 minutes) (P=0.481). Conclusion: The amount of time that elapses between the patient entering the room and anesthesia induction is statistically shorter in a dedicated minimally invasive surgery suite. Laparoscopic surgery is performed more efficiently in a dedicated minimally invasive surgery suite versus a traditional operating room.

  8. Deadly progress

    International Nuclear Information System (INIS)

    Nader, R.; Abbotts, J.

    1978-01-01

    Nuclear power plants are safe, they help to get through the future bottle-neck in the field of energy, they provide cheap electrical power and support economic growth - these are the sedative formulae which have been used for years to close the population's eyes to the real problems. In this book, the American lawyer Ralph Nader and the nuclear chemist John Abbotts not only oppose this myth of atomic safety, they also refute it with numerous technical, economic, and political details. Having realized that the development of atomic energy can no longer be prevented by warnings from independent experts, but only by massive protest from the population - that is, protest by informed persons - they give an understandable introduction to the techniques of atomic energy, the construction of nuclear power plants, radioactive radiation, safety, etc. Furthermore, they inform about the social, political, and economic background of the push for nuclear power. Nader and Abbotts show the uncertainty of the science, present secret documents about failures that have already occurred, and point out the catastrophic consequences of possible defects. The result of the thorough study: a 'technological Vietnam' threatens the USA and all other nuclear power countries if the population does not struggle against this dead-end programme of the governments. (orig./HP) [de]

  9. Dead zone characteristics of a gas counter

    International Nuclear Information System (INIS)

    Nohtomi, Akihiro; Sakae, Takeji; Matoba, Masaru; Koori, Norihiko.

    1990-01-01

    The dead zone was recently defined as the product of dead length and dead time in order to describe the characteristics of the self-quenching streamer (SQS) mode of a gas counter. Investigation of the dead zone characteristics has been extended to the proportional and GM modes, and the measured dead zones have been compared with that of the SQS mode. Accurate values for the dead zone could be determined by means of a newly developed method using a pulse-interval time-to-amplitude converter. Each operation mode exhibits distinct dead zone characteristics. The properties of gas counters at high counting rates may be improved on the basis of measurements of the dead zone. (author)

  10. Love the dead, fear the dead

    DEFF Research Database (Denmark)

    Seebach, Sophie Hooge

    2017-01-01

    The dead are everywhere in the landscape in Acholi, northern Uganda. In the homes, the dead are present through their gravesites, situated next to houses and huts, and as spiritual presences in their family’s daily lives. In the bush, the dead are present as a constant potentiality, in the form...

  11. The Dead Sea

    Science.gov (United States)

    2006-01-01

    The Dead Sea is the lowest point on Earth at 418 meters below sea level, and also one of the saltiest bodies of water on Earth with a salinity of about 300 parts-per-thousand (nine times greater than ocean salinity). It is located on the border between Jordan and Israel, and is fed by the Jordan River. The Dead Sea is located in the Dead Sea Rift, formed as a result of the Arabian tectonic plate moving northward away from the African Plate. The mineral content of the Dead Sea is significantly different from that of ocean water, consisting of approximately 53% magnesium chloride, 37% potassium chloride and 8% sodium chloride. In the early part of the 20th century, the Dead Sea began to attract interest from chemists who deduced that the Sea was a natural deposit of potash and bromine. From the Dead Sea brine, Israel and Jordan produce 3.8 million tons of potash, 200,000 tons of elemental bromine, 45,000 tons of caustic soda, 25,000 tons of magnesium metal, and sodium chloride. Both countries use extensive salt evaporation pans that have essentially diked the entire southern end of the Dead Sea. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. The broad spectral coverage and high spectral resolution of ASTER provide scientists in numerous disciplines with critical information for surface mapping, and monitoring of dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining

  12. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems for non-identical machines with low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solver method. We use the fixed delivery time as the main constraint and different processing times to process a job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as constraints.

  13. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    Science.gov (United States)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; turning is one of the processes that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters to minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize the processing time and environmental impact in the CNC turning process, which yields optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
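
    As an illustration only (toy formulas and an arbitrary placeholder impact score, not the paper's model, data, or Eco-indicator 99 values), a weighted-sum search over cutting speed and feed rate might look like this, using the standard single-pass turning-time formula T = pi*D*L/(1000*v*f):

```python
import numpy as np

# Grid search over a toy time/impact trade-off.  The eco model below is
# a made-up placeholder, NOT Eco-indicator 99.
D, L = 50.0, 200.0                    # workpiece diameter and length [mm]
v = np.linspace(100.0, 300.0, 50)     # cutting speed [m/min]
f = np.linspace(0.1, 0.4, 50)         # feed rate [mm/rev]
V, F = np.meshgrid(v, f)

T = np.pi * D * L / (1000.0 * V * F)  # machining time per pass [min]
eco = 1e-5 * V**2 + 2.0 * F           # placeholder impact score
w = 0.7                               # weight on time vs. impact
score = w * T / T.max() + (1.0 - w) * eco / eco.max()
i, j = np.unravel_index(score.argmin(), score.shape)
print(f"v = {V[i, j]:.0f} m/min, f = {F[i, j]:.2f} mm/rev, "
      f"T = {T[i, j]:.2f} min, eco = {eco[i, j]:.2f}")
```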

  14. One-machine job-scheduling with non-constant capacity - Minimizing weighted completion times

    NARCIS (Netherlands)

    Amaddeo, H.F.; Amaddeo, H.F.; Nawijn, W.M.; van Harten, Aart

    1997-01-01

    In this paper an n-job one-machine scheduling problem is considered, in which the machine capacity is time-dependent and jobs are characterized by their work content. The objective is to minimize the sum of weighted completion times. A necessary optimality condition is presented and we discuss some

  15. Who's Counting Dead Wood ?

    OpenAIRE

    Woodall, C. W.; Verkerk, H.; Rondeux, Jacques; Ståhl, G.

    2009-01-01

    Dead wood in forests is a critical component of biodiversity, carbon and nutrient cycles, stand structure, and fuel loadings. Until recently, very few countries have conducted systematic inventories of dead wood resources across their forest lands. This may be changing as an increasing number of countries implement dead wood inventories. A recent survey looks at the status and attributes of forest dead wood inventories in over 60 countries. About 13 percent of countries inventory dead wood gl...

  16. Foraging site selection of two subspecies of Bar-tailed Godwit Limosa lapponica: time minimizers accept greater predation danger than energy minimizers

    NARCIS (Netherlands)

    Duijns, S.; Dijk, van J.G.B.; Spaans, B.; Jukema, J.; Boer, de W.F.; Piersma, Th.

    2009-01-01

    Different spatial distributions of food abundance and predators may urge birds to make a trade-off between food intake and danger. Such a trade-off might be solved in different ways in migrant birds that either follow a time-minimizing or energy-minimizing strategy; these strategies have been

  17. Foraging site selection of two subspecies of Bar-tailed Godwit Limosa lapponica : time minimizers accept greater predation danger than energy minimizers

    NARCIS (Netherlands)

    Duijns, Sjoerd; van Dijk, Jacintha G. B.; Spaans, Bernard; Jukema, Joop; de Boer, Willem F.; Piersma, Theunis

    2009-01-01

    Different spatial distributions of food abundance and predators may urge birds to make a trade-off between food intake and danger. Such a trade-off might be solved in different ways in migrant birds that either follow a time-minimizing or energy-minimizing strategy; these strategies have been

  18. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    Science.gov (United States)

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with a real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement, etc. In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance, for each of the controller structures to handle the three different types of processes.
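
    The fractional-order operators at the core of such controllers are easy to prototype with the Grunwald-Letnikov discretization (a standard textbook scheme, not the authors' code; the GA/NSGA-II tuning sits on top of primitives like this):

```python
import numpy as np

# Grunwald-Letnikov fractional derivative of order alpha on a uniform grid:
#   y[k] ~ dt**(-alpha) * sum_j w_j * x[k-j],
# with w_0 = 1 and the binomial recursion w_j = w_{j-1}*(1 - (alpha+1)/j).
def gl_fractional_diff(x, alpha, dt):
    w = np.ones(len(x))
    for j in range(1, len(x)):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    y = np.array([np.dot(w[:k + 1][::-1], x[:k + 1]) for k in range(len(x))])
    return y / dt**alpha

t = np.linspace(0.0, 1.0, 101)
# Half-derivative of f(t) = t at t = 1 is 1/Gamma(1.5) ~ 1.128:
print(gl_fractional_diff(t, 0.5, t[1] - t[0])[-1])
```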

  19. Feynman-α technique for measurement of detector dead time using a 30 kW tank-in-pool research reactor

    International Nuclear Information System (INIS)

    Akaho, E.H.K.; Intsiful, J.D.K.; Maakuu, B.T.; Anim-Sampong, S.; Nyarko, B.J.B.

    2002-01-01

    Reactor noise analysis was carried out for Ghana Research Reactor-1 GHARR-1, a tank-in-pool type reactor using the Feynman-α technique (variance-to-mean method). Measurements made at different detector positions and under subcritical conditions showed that the technique could not be used to determine the prompt decay constant for the reactor which is Be reflected with photo-neutron background. However, for very low dwell times the technique was used to measure the dead time of the detector which compares favourably with the value obtained using the α-conventional method

  20. Feynman-alpha technique for measurement of detector dead time using a 30 kW tank-in-pool research reactor

    CERN Document Server

    Akaho, E H K; Intsiful, J D K; Maakuu, B T; Nyarko, B J B

    2002-01-01

    Reactor noise analysis was carried out for Ghana Research Reactor-1 GHARR-1, a tank-in-pool type reactor using the Feynman-alpha technique (variance-to-mean method). Measurements made at different detector positions and under subcritical conditions showed that the technique could not be used to determine the prompt decay constant for the reactor which is Be reflected with photo-neutron background. However, for very low dwell times the technique was used to measure the dead time of the detector which compares favourably with the value obtained using the alpha-conventional method.

  1. An optimization based method for line planning to minimize travel time

    DEFF Research Database (Denmark)

    Bull, Simon Henry; Lusby, Richard Martin; Larsen, Jesper

    2015-01-01

    The line planning problem is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including...

  2. Analysis of labor employment assessment on production machine to minimize time production

    Science.gov (United States)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, both in services and in manufacturing, tries to improve the efficiency of its resource use. One resource with an important role is labor, and workers have different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, a special case of linear programming. In this research, the analysis of labor assignment on production machines to minimize production time at PT PDM is done using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results show that the existing assignment of labor is not suitable because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yields a time saving of 16%.
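
    The assignment step itself is standard; with SciPy's Hungarian-method solver it reduces to a few lines (the cost matrix below is illustrative, not PT PDM's data):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Entry [i, j] is the completion time of worker i on machine j [hours].
times = np.array([[14.0,  9.0, 12.0],
                  [11.0, 10.0, 13.0],
                  [16.0, 12.0,  9.0]])

rows, cols = linear_sum_assignment(times)    # minimizes the summed time
for w, m in zip(rows, cols):
    print(f"worker {w} -> machine {m} ({times[w, m]:.0f} h)")
print("total time:", times[rows, cols].sum())
```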

  3. Identifying Time Periods of Minimal Thermal Gradient for Temperature-Driven Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    John Reilly

    2018-03-01

    Full Text Available Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature–deformation–displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.

  4. Identifying Time Periods of Minimal Thermal Gradient for Temperature-Driven Structural Health Monitoring.

    Science.gov (United States)

    Reilly, John; Glisic, Branko

    2018-03-01

    Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature-deformation-displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.
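
    The first metric is straightforward to compute. A sketch with synthetic sensor data (our own variable names and tolerance value, not the Streicker Bridge dataset): flag the time stamps at which the range of temperatures across all sensors stays below a tolerance, i.e. periods of minimal thermal gradient.

```python
import numpy as np

# Synthetic data: 12 sensors over 3 days at 15-minute steps, with a daily
# cycle plus a sensor-dependent component that mimics thermal gradients.
rng = np.random.default_rng(4)
hours = np.arange(0.0, 72.0, 0.25)
base = 15.0 + 8.0 * np.sin(2 * np.pi * hours / 24.0)
sensors = base[:, None] + rng.normal(0.0, 0.3, (len(hours), 12)) \
          + np.linspace(0.0, 2.0, 12) * np.sin(2 * np.pi * hours / 24.0)[:, None]

tolerance = 1.0                                      # [deg C], tunable
temp_range = sensors.max(axis=1) - sensors.min(axis=1)
quiet = temp_range < tolerance                       # metric (i) of the paper
print(f"{quiet.mean():.0%} of time stamps below a {tolerance} degC range")
```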

  5. Power Minimization for Parallel Real-Time Systems with Malleable Jobs and Homogeneous Frequencies

    OpenAIRE

    Paolillo, Antonio; Goossens, Joël; Hettiarachchi, Pradeep M.; Fisher, Nathan

    2014-01-01

    In this work, we investigate the potential benefit of parallelization for both meeting real-time constraints and minimizing power consumption. We consider malleable Gang scheduling of implicit-deadline sporadic tasks upon multiprocessors. By extending schedulability criteria for malleable jobs to DVFS-enabled multiprocessor platforms, we are able to derive an offline polynomial-time optimal processor/frequency-selection algorithm. Simulations of our algorithm on randomly generated task system...

  6. Implementation of lean construction techniques for minimizing the risks effect on project construction time

    Directory of Open Access Journals (Sweden)

    Usama Hamed Issa

    2013-12-01

    Full Text Available Construction projects involve various risk factors which have various impacts on the time objective and may lead to time-overrun. This study suggests and applies a new technique for minimizing the effect of risk factors on time using lean construction principles. Lean construction is implemented in this study through the last planner system during the execution of an industrial project in Egypt. The effect of using the new tool is evaluated in terms of two measurements: Percent Expected Time-overrun (PET) and Percent Plan Completed (PPC). The most important risk factors are identified and assessed, while PET is quantified at the project start and during project execution using a model for time-overrun quantification. The results showed that total project time was reduced by 15.57% due to decreasing PET values, while PPC values improved. This is due to minimizing and mitigating the effect of most of the risk factors in this project by implementing lean construction techniques. The results proved that the quantification model is suitable for evaluating the effect of using lean construction techniques. In addition, the results showed that the average value of PET due to factors affected by lean techniques represents 67% of the PET values due to all minimized risk factors.

  7. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    Full Text Available It is known that the single machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which could enable us to reduce the range of the enumeration algorithm.
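
    For intuition, the deadline-free relaxation of this problem is solved exactly by the Shortest Remaining Processing Time (SRPT) rule; the deadline constraints are what make the general problem NP-hard. A compact SRPT sketch:

```python
import heapq

# SRPT for 1 | r_j, pmtn | sum C_j (release dates, preemption, no deadlines):
# always run the job with the least remaining work among released jobs.
def srpt(jobs):                       # jobs: list of (release, processing)
    jobs = sorted(jobs)               # by release date
    t, i, done, heap, total = 0.0, 0, 0, [], 0.0
    while done < len(jobs):
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, [jobs[i][1], i]); i += 1
        if not heap:                  # idle until the next release
            t = jobs[i][0]; continue
        nxt = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(heap[0][0], nxt - t)   # run until finished or next release
        t += run; heap[0][0] -= run
        if heap[0][0] <= 1e-12:
            heapq.heappop(heap); done += 1; total += t
    return total

print(srpt([(0, 5), (1, 2), (2, 1), (10, 3)]))   # total completion time: 28.0
```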

  8. Minimizing the effect of process mismatch in a neuromorphic system using spike-timing-dependent adaptation.

    Science.gov (United States)

    Cameron, Katherine; Murray, Alan

    2008-05-01

    This paper investigates whether spike-timing-dependent plasticity (STDP) can minimize the effect of mismatch within the context of a depth-from-motion algorithm. To improve noise rejection, this algorithm contains a spike prediction element, whose performance is degraded by analog very large scale integration (VLSI) mismatch. The error between the actual spike arrival time and the prediction is used as the input to an STDP circuit, to improve future predictions. Before STDP adaptation, the error reflects the degree of mismatch within the prediction circuitry. After STDP adaptation, the error indicates to what extent the adaptive circuitry can minimize the effect of transistor mismatch. The circuitry is tested with static and varying prediction times and chip results are presented. The effect of noisy spikes is also investigated. Under all conditions the STDP adaptation is shown to improve performance.
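
    For readers unfamiliar with STDP, the generic pair-based update (textbook form; the chip's analog circuit differs) nudges a weight by an amount that decays exponentially with the timing error, here the error between predicted and actual spike arrival:

```python
import math

# Pair-based STDP window with hypothetical amplitudes and time constant.
A_PLUS, A_MINUS = 0.02, 0.021
TAU = 20e-3                                   # [s]

def stdp_dw(dt):
    """dt = t_post - t_pre; positive dt potentiates, negative depresses."""
    return A_PLUS * math.exp(-dt / TAU) if dt >= 0.0 else \
           -A_MINUS * math.exp(dt / TAU)

for dt in (-40e-3, -5e-3, 5e-3, 40e-3):
    print(f"dt = {dt*1e3:+5.1f} ms -> dw = {stdp_dw(dt):+.4f}")
```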

  9. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    Science.gov (United States)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile-time that all timing requirements will be satisfied at run-time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic pre-emptions compared with run-time methods like deadline monotonic scheduling.

  10. Optimal post-warranty maintenance policy with repair time threshold for minimal repair

    International Nuclear Information System (INIS)

    Park, Minjae; Mun Jung, Ki; Park, Dong Ho

    2013-01-01

    In this paper, we consider a renewable minimal repair–replacement warranty policy and propose an optimal maintenance model for after the warranty expires. Such a model adopts the repair time threshold during the warranty period, followed by a certain type of system maintenance policy during the post-warranty period. As the criterion for optimality, we utilize the expected cost rate per unit time during the life cycle of the system, which has been frequently used in many existing maintenance models. Based on the cost structure defined for each failure of the system, we formulate the expected cost rate during the life cycle of the system, assuming that a renewable minimal repair–replacement warranty policy with the repair time threshold is provided to the user during the warranty period. Once the warranty expires, the maintenance of the system is the user's sole responsibility. The life cycle of the system is defined from the perspective of the user, and the expected cost rate per unit time is derived in this context. We obtain the optimal maintenance policy during the maintenance period following the expiration of the warranty period by minimizing such a cost rate. Numerical examples using actual failure data are presented to exemplify the applicability of the methodologies proposed in this paper.

  11. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Science.gov (United States)

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295

  12. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Directory of Open Access Journals (Sweden)

    Shih-Wei Lin

    2014-01-01

    Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
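
    The IG scheme is compact enough to sketch in full. The toy below applies destruction (remove d random ships) and greedy best-position reconstruction to a single-berth model where service time is waiting plus handling (our simplification for illustration, not the authors' discrete DBAP code):

```python
import random

def total_service_time(order, arrive, handle):
    t, total = 0.0, 0.0
    for s in order:
        t = max(t, arrive[s]) + handle[s]    # berth frees, then ship is served
        total += t - arrive[s]               # waiting + handling
    return total

def ig(arrive, handle, iters=2000, d=2, seed=0):
    rng = random.Random(seed)
    best = sorted(range(len(arrive)), key=lambda s: arrive[s])
    for _ in range(iters):
        cand = best[:]
        removed = [cand.pop(rng.randrange(len(cand))) for _ in range(d)]
        for s in removed:                    # greedy best-position re-insertion
            pos = min(range(len(cand) + 1),
                      key=lambda p: total_service_time(
                          cand[:p] + [s] + cand[p:], arrive, handle))
            cand.insert(pos, s)
        if total_service_time(cand, arrive, handle) <= \
           total_service_time(best, arrive, handle):
            best = cand                      # accept equal-or-better solutions
    return best

arrive = [0, 1, 2, 3, 8]; handle = [5, 2, 6, 1, 3]
order = ig(arrive, handle)
print(order, total_service_time(order, arrive, handle))
```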

  13. Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2016-01-01

    Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
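
    The core trade-off can be seen in a naive heuristic (not the authors' exact algorithm, and with a single window per port rather than multiple): since daily fuel consumption grows roughly with the cube of speed, fuel over a leg of length d sailed at speed v scales like k*d*v², so each leg should be sailed as slowly as its window allows. With several legs this greedy rule is generally suboptimal, which is why an exact algorithm is needed.

```python
# Sail each leg at the slowest speed that still reaches the next port
# before its window closes.  k and v_max are made-up ship constants.
def greedy_speeds(legs, windows, v_max=25.0, k=0.01):
    t, fuel, speeds = 0.0, 0.0, []
    for d, (w_open, w_close) in zip(legs, windows):
        slack = w_close - t
        if slack <= 0.0 or d / slack > v_max:
            raise ValueError("time window not reachable")
        v = d / slack                 # arrive exactly as the window closes
        t = w_close
        fuel += k * d * v**2          # cubic daily burn => k*d*v^2 per leg
        speeds.append(v)
    return speeds, fuel

legs = [240.0, 360.0]                   # leg lengths [nmi]
windows = [(20.0, 30.0), (50.0, 60.0)]  # [hours from departure]
print(greedy_speeds(legs, windows))
```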

  14. Minimal time spiking in various ChR2-controlled neuron models.

    Science.gov (United States)

    Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel

    2018-02-01

    We use conductance based neuron models, and the mathematical modeling of optogenetics to define controlled neuron models and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.

  15. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    Science.gov (United States)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and total normal processing time of jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.

  16. Assessing and minimizing contamination in time of flight based validation data

    Science.gov (United States)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
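
    The kinematics behind the interval selection are standard (the statistical interval-optimization model is the paper's contribution and is not reproduced here): over a flight path d, gammas arrive at d/c while neutrons arrive later by an energy-dependent amount, so any neutron acceptance interval must start safely after the gamma flash.

```python
import math

C = 2.998e8            # speed of light [m/s]
MN = 939.565e6         # neutron rest mass energy [eV]

def neutron_tof(d, e_ev):
    # Relativistic time of flight for a neutron of kinetic energy e_ev.
    gamma = 1.0 + e_ev / MN
    v = C * math.sqrt(1.0 - 1.0 / gamma**2)
    return d / v

d = 2.0                                       # flight path [m], illustrative
print(f"gamma arrival     : {d / C * 1e9:7.1f} ns")
for e in (0.5e6, 2e6, 8e6):                   # neutron energies [eV]
    print(f"n({e/1e6:.1f} MeV) arrival : {neutron_tof(d, e)*1e9:7.1f} ns")
```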

  17. The continuous reaction times method for diagnosing, grading, and monitoring minimal/covert hepatic encephalopathy

    DEFF Research Database (Denmark)

    Lauridsen, Mette Enok Munk; Thiele, Maja; Kimer, N

    2013-01-01

    Abstract Existing tests for minimal/covert hepatic encephalopathy (m/cHE) are time- and expertise-consuming and primarily usable for research purposes. An easy-to-use, fast, and reliable diagnostic and grading tool is needed. We here report on the background, experience, and ongoing research......-10) percentile) as a parameter of reaction time variability. The index is a measure of alertness stability and is used to assess attention and cognition deficits. The CRTindex identifies half of the patients in a Danish cohort with chronic liver disease as having m/cHE; a normal value safely precludes HE; it has...

  18. Time-to-administration in postoperative chemotherapy for colorectal cancer: does minimally-invasive surgery help?

    Science.gov (United States)

    Amore Bonapasta, Stefano; Checcacci, Paolo; Guerra, Francesco; Mirasolo, Vita M; Moraldi, Luca; Ferrara, Angelo; Annecchiarico, Mario; Coratti, Andrea

    2016-06-01

    The optimal delay in the start of chemotherapy following rectal cancer surgery has not yet been identified. However, postponed adjuvant therapy has been shown to be associated with a significant survival detriment. We aimed to investigate whether the time to initiation of adjuvant treatment can be influenced by the application of minimally invasive surgery rather than traditional open surgery. By comprehensively evaluating the available literature, several factors appear to be associated with delayed postoperative chemotherapy. Some of them are strictly related to short-term surgical outcomes. Laparoscopy results in a shortened length of hospital stay, reduced surgical morbidity and a lower rate of wound infection compared to conventional surgery. Probably due to such advantages, the application of minimally invasive surgery to treat rectal malignancies seems to favorably impact the possibility of starting adjuvant chemotherapy within an adequate timeframe following surgical resection, with potential improvement in patient survival.

  19. What are the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty?

    Science.gov (United States)

    Harato, Kengo; Maeno, Shinichi; Tanikawa, Hidenori; Kaneda, Kazuya; Morishige, Yutaro; Nomoto, So; Niki, Yasuo

    2016-08-01

    It was hypothesized that the surgical time of beginners would be much longer than that of experts. Our purpose was to investigate and clarify the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty (TKA) in a multicentre study. A total of 300 knees in 248 patients (average age 74.6 years) were enrolled. All TKAs were done using the same instruments and the same measured resection technique at 14 facilities by 25 orthopaedic surgeons. Surgeons were divided into three groups (four experts, nine medium-volume surgeons and 12 beginners). The surgical technique was divided into five phases. Detailed surgical time and the ratio of the time in each phase to overall surgical time were recorded and compared among the groups in each phase. A total of 62, 119, and 119 TKAs were done by beginners, medium-volume surgeons, and experts, respectively. Significant differences in surgical time among the groups were seen in each phase. Concerning the ratio of the time, experts and medium-volume surgeons seemed cautious in fixation of the permanent component compared to other phases. Interestingly, even in ratio, beginners and medium-volume surgeons took more time in exposure of soft tissue compared to experts (0.14 in beginners, 0.13 in medium-volume surgeons, 0.11 in experts). Beginners and medium-volume surgeons also took more time in exposure and closure of soft tissue compared to experts. Improvement in basic technique is essential to minimize surgical time among beginners. First of all, surgical instructors should teach basic techniques in primary TKA to beginners. Therapeutic studies, Level IV.

  20. A pseudorandom pulser technique for the correction of dead-time and pile-up losses in γ-ray spectrometry

    International Nuclear Information System (INIS)

    Doerfel, G.; Kluge, W.; Kubsch, M.

    1983-01-01

    A pseudorandom pulser and its application in high precision gamma-ray spectrometry is described. A pulse train suitable for modelling dead-time and pile-up effects is obtained by an AND-connection of delayed pulse sequences delivered by a maximum length shift register generator. The heart of the problem consists in finding the appropriate delay indices and in implementing these indices by 'add and shift'. The related conditions and rules are described. These conditions ensure the occurrence of multiple pulses according to the binomial and Poisson distribution, respectively, within a predetermined range of multiplicity as well as ensuring other statistical properties. Results are given in a form comparable with the description of the results of a well-known international test. (orig.)
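
    As background, a minimal Python sketch of the underlying idea (the tap positions and delay values are arbitrary illustrations, not the paper's delay indices): a maximal-length linear feedback shift register produces the pseudorandom bit stream, and an AND-connection of delayed copies thins it into a sparser pulse train.

    def lfsr_bits(n, taps=(16, 15, 13, 4), seed=0xACE1):
        # Fibonacci LFSR; these taps correspond to a maximal-length polynomial
        state, width = seed, max(taps)
        for _ in range(n):
            bit = 0
            for t in taps:
                bit ^= (state >> (t - 1)) & 1   # feedback = XOR of tap bits
            state = ((state << 1) | bit) & ((1 << width) - 1)
            yield state & 1

    def pulse_train(n, delays=(0, 3, 7)):
        bits = list(lfsr_bits(n + max(delays)))
        # AND-connection of delayed pulse sequences reduces the pulse rate
        return [all(bits[i + d] for d in delays) for i in range(n)]

    train = pulse_train(1000)
    print(sum(train), "pulses in", len(train), "clock ticks")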

  1. Digital instrumentation and dead-time processing for radionuclide metrology; Instrumentation et gestion numerique des temps morts pour la metrologie de la radioactivite

    Energy Technology Data Exchange (ETDEWEB)

    Censier, B.; Bobin, Ch.; Bouchard, J. [CEA Saclay, LIST, Laboratoire national Henri Becquerel (LNE-LNHB), 91 - Gif-sur-Yvette (France)

    2010-07-01

    Most of the acquisition chains used in radionuclide metrology are based on NIM modules. These analogue setups have been thoroughly tested for decades now, becoming a reference in the field. Nevertheless, the renewal of ageing modules and the need for extra features both call for the development of new acquisition schemes based on digital processing. In this article, several technologies usable for instrumentation are first presented. A review of past and present projects is made in the second part, highlighting the fundamental role of dead-time management. The last part is dedicated to the description of two digital systems developed at LNE-LNHB. The first one has been designed for the instrumentation of a NaI(Tl) well-type crystal set-up, while the second one is used for the management of three photomultipliers in the framework of the TDCR method and as a part of the development of a digital platform for coincidence counting. (authors)

  2. Knee point search using cascading top-k sorting with minimized time complexity.

    Science.gov (United States)

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
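
    A minimal Python sketch of the cascading idea (the initial k, the growth factor, and the stopping rule are simplifications standing in for the paper's expected-time-cost optimization): only the k largest values are sorted, the biggest gap among them is taken as the knee, and k is enlarged until the knee lies safely inside the sorted prefix.

    import heapq

    def knee_by_cascading_topk(points, k0=8, grow=2):
        k = min(max(k0, 3), len(points))
        while True:
            top = heapq.nlargest(k, points)            # partial top-k sort
            gaps = [top[i] - top[i + 1] for i in range(len(top) - 1)]
            i = max(range(len(gaps)), key=gaps.__getitem__)
            if i < len(top) - 2 or k == len(points):
                return top[i + 1]                      # first value below the knee
            k = min(k * grow, len(points))             # cascade: enlarge k

    print(knee_by_cascading_topk([1, 2, 2, 3, 40, 41, 43, 44, 45]))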

  3. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  4. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  5. Minimal hepatic encephalopathy characterized by parallel use of the continuous reaction time and portosystemic encephalopathy tests

    DEFF Research Database (Denmark)

    Lauridsen, M M; Schaffalitzky de Muckadell, O B; Vilstrup, H

    2015-01-01

    Minimal hepatic encephalopathy (MHE) is a frequent complication of liver cirrhosis that causes poor quality of life and a great burden to caregivers, and it can be treated. For diagnosis and grading, the international guidelines recommend the use of psychometric tests of different modalities (computer-based vs. paper and pencil) ... To compare results of the Continuous Reaction Time (CRT) and the Portosystemic Encephalopathy (PSE) tests in a large unselected cohort of cirrhosis patients without clinically detectable brain impairment, and to clinically characterize the patients according to their test ...

  6. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection with interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.

  7. Dynamics of symmetry breaking during quantum real-time evolution in a minimal model system.

    Science.gov (United States)

    Heyl, Markus; Vojta, Matthias

    2014-10-31

    One necessary criterion for the thermalization of a nonequilibrium quantum many-particle system is ergodicity. It is, however, not sufficient in cases where the asymptotic long-time state lies in a symmetry-broken phase but the initial state of nonequilibrium time evolution is fully symmetric with respect to this symmetry. In equilibrium, one particular symmetry-broken state is chosen as a result of an infinitesimal symmetry-breaking perturbation. From a dynamical point of view the question is: Can such an infinitesimal perturbation be sufficient for the system to establish a nonvanishing order during quantum real-time evolution? We study this question analytically for a minimal model system that can be associated with symmetry breaking, the ferromagnetic Kondo model. We show that after a quantum quench from a completely symmetric state the system is able to break its symmetry dynamically and discuss how these features can be observed experimentally.

  8. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    Science.gov (United States)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.

  9. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    The ability to automatically segment an image into distinct regions is a critical aspect of many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, as required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time-consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a given level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.

  10. A Heuristic Scheduling Algorithm for Minimizing Makespan and Idle Time in a Nagare Cell

    Directory of Open Access Journals (Sweden)

    M. Muthukumaran

    2012-01-01

    Adopting a focused factory is a powerful approach for today's manufacturing enterprises. This paper introduces a basic manufacturing concept for a struggling manufacturer with limited conventional resources, providing an alternative solution to cell scheduling by implementing the Nagare cell technique. The Nagare cell is a Japanese concept with more objectives than the cellular manufacturing system. It is a combination of manual and semiautomatic machine layouts as cells, which gives maximum output flexibility for all kinds of low-to-medium and medium-to-high volume production. The solution adopted is to create a dedicated group of conventional machines, all but one of which are already available on the shop floor. This paper focuses on the development of a heuristic scheduling algorithm in a step-by-step manner. The algorithm first calculates the summation of the processing times of all products on each machine and then sorts the sums by the shortest-processing-time rule to obtain the assignment schedule. Based on the assignment schedule, the Nagare cell layout is arranged for processing the products. In addition, the algorithm provides steps to determine the product ready time, machine idle time, and product idle time. The Gantt chart, the experimental analysis, and comparative results are illustrated with five (1×8 to 5×8) scheduling problems. Finally, the objective of minimizing makespan and idle time with greater customer satisfaction is studied.
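
    The stated first steps are easy to sketch in Python (the product and machine data below are hypothetical): sum each machine's processing time over all products, then order the machines by the shortest-processing-time (SPT) rule to obtain the assignment schedule.

    # processing_times[product] = minutes on each machine (illustrative data)
    processing_times = {
        "P1": [4, 6, 3],
        "P2": [5, 2, 7],
        "P3": [3, 4, 4],
    }

    num_machines = 3
    machine_load = [sum(times[m] for times in processing_times.values())
                    for m in range(num_machines)]
    # SPT rule: machines with the smallest total processing time come first
    schedule = sorted(range(num_machines), key=machine_load.__getitem__)
    print("machine order:", schedule, "loads:", machine_load)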

  11. Approximate k-NN delta test minimization method using genetic algorithms: Application to time series

    CERN Document Server

    Mateo, F; Gadea, Rafael; Sovilj, Dusan

    2010-01-01

    In many real-world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that aims to find a globally optimal set of input variables that minimizes the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation with its approximate version. The problems of scaling and projection of variables have also been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
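
    A minimal sketch of the selection criterion alone, with exact nearest neighbours and a greedy forward pass standing in for the paper's approximate k-NN and genetic search:

    import numpy as np

    def delta_test(X, y):
        # squared distances between all pairs of input vectors
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        nn = d2.argmin(axis=1)                   # index of first nearest neighbour
        return 0.5 * np.mean((y[nn] - y) ** 2)   # noise-variance estimate

    def select_variables(X, y):
        chosen, best = [], np.inf
        for j in range(X.shape[1]):              # greedy stand-in for the GA
            score = delta_test(X[:, chosen + [j]], y)
            if score < best:
                chosen, best = chosen + [j], score
        return chosen, best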

  12. Live and Dead Nodes

    DEFF Research Database (Denmark)

    Jørgensen, Sune Lehman; Jackson, A. D.

    2005-01-01

    In this paper, we explore the consequences of a distinction between `live' and `dead' network nodes; `live' nodes are able to acquire new links whereas `dead' nodes are static. We develop an analytically soluble growing network model incorporating this distinction and show that it can provide ...

  13. Cost minimization analysis of different growth hormone pen devices based on time-and-motion simulations

    Directory of Open Access Journals (Sweden)

    Kim Jaewhan

    2010-04-01

    Background: Numerous pen devices are available to administer recombinant Human Growth Hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. The present study therefore used multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods: Study objectives were to conduct (1) Time-and-Motion (TM) simulations in a randomized block design that allowed comparisons of time and steps related to rhGH preparation, administration and storage, and (2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (two in liquid form, two requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: (1) Learning (initial use instructions), (2) Preparation (arranging the device for use), (3) Administration (actual injection on a simulation manikin), and (4) Storage (maintaining product viability between doses), in addition to assessment of the steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results: Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP; Novo Nordisk, Inc., Bagsværd, Denmark) took significantly less weekly Total Time than Genotropin® Pen (GTP; Pfizer, Inc., New York, New York) or HumatroPen® (HTP; Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF 1.35 minutes, NNP 2.48 minutes, GTP 4.11 minutes, HTP 8.64 minutes). Conclusions: Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand had the highest net costs.

  14. MINIMIZING THE PREPARATION TIME OF A TUBES MACHINE: EXACT SOLUTION AND HEURISTICS

    Directory of Open Access Journals (Sweden)

    Robinson S.V. Hoto

    In this paper we optimize the preparation time of a tube-making machine. The tubes are hard tubes made by gluing strips of paper that are fed from paper reels, some of which may be reused between the production of one tube and another. We present a mathematical model for minimizing reel changes and movements, together with implementations of the Nearest Neighbor heuristic, an improvement of it (Best Nearest Neighbor), refinements of the Best Nearest Neighbor heuristic, and a permutation heuristic called Best Configuration, using the WxDev C++ IDE (integrated development environment). The results obtained in simulations improve on the procedure used by the company.

  15. Exact and Heuristic Solutions to Minimize Total Waiting Time in the Blood Products Distribution Problem

    Directory of Open Access Journals (Sweden)

    Amir Salehipour

    2012-01-01

    This paper presents a novel application of operations research to support decision making in blood distribution management. The rapidly and dynamically increasing demand, the criticality of the product, storage, handling and distribution requirements, and the different geographical locations of hospitals and medical centers make blood distribution a complex and important problem. In this study, a real blood distribution problem involving 24 hospitals was tackled by the authors, and an exact approach is presented. The objective of the problem is to distribute blood and its products among hospitals and medical centers such that the total waiting time of those requiring the product is minimized. Following the exact solution, a hybrid heuristic algorithm is proposed. Computational experiments showed that optimal solutions could be obtained for medium-size instances, while for larger instances the proposed hybrid heuristic is very competitive.

  16. Minimizing patient waiting time in emergency department of public hospital using simulation optimization approach

    Science.gov (United States)

    Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid

    2017-04-01

    The emergency department (ED) is the main unit of a hospital that provides emergency treatment. Operating 24 hours a day with a limited number of resources adds to the currently chaotic situation in some hospitals in Malaysia. Delays in receiving treatment that cause patients to wait for long periods are among the most frequent complaints against government hospitals. The ED management therefore needs a model that can be used to examine and understand resource capacity, which can assist hospital managers in reducing patient waiting time. A simulation model was developed based on 24 hours of data collection. The model, developed using Arena simulation, replicates the actual ED operations of a public hospital in Selangor, Malaysia. OptQuest optimization in Arena is used to find possible combinations of the numbers of resources that minimize patient waiting time while increasing the number of patients served. The simulation model was then modified for improvement based on the results from OptQuest. The improved model significantly increases the ED's efficiency, with an average 32% reduction in patient waiting times and a 25% increase in the total number of patients served.

  17. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.
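
    For context, the two textbook dead-time corrections from which such treatments start can be written in a few lines (a sketch; the rates and dead time below are illustrative, and the paper's novel treatment for an aging detector is more involved): the non-paralyzable model inverts in closed form, while the paralyzable model m = n*exp(-n*tau) is solved by fixed-point iteration.

    import math

    def true_rate_nonparalyzable(m, tau):
        # measured rate m, dead time tau: true rate n = m / (1 - m*tau)
        return m / (1.0 - m * tau)

    def true_rate_paralyzable(m, tau, iters=50):
        n = m
        for _ in range(iters):        # iterate n = m * exp(n * tau)
            n = m * math.exp(n * tau)
        return n

    m, tau = 9.0e3, 5.0e-6            # counts/s and seconds (illustrative)
    print(true_rate_nonparalyzable(m, tau), true_rate_paralyzable(m, tau))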

  18. Limitations of Feedback, Feedforward and IMC Controller for a First Order Non-Linear Process with Dead Time

    Directory of Open Access Journals (Sweden)

    Maruthai Suresh

    2010-10-01

    A nonlinear process, a heat exchanger whose parameters vary with the process variable, is considered. The time constant and gain of the chosen process vary as functions of temperature. The limitations of a conventional feedback controller tuned using Ziegler-Nichols settings for the chosen process are brought out. The servo and regulatory responses, through simulation and experimentation, for various magnitudes of set-point and load changes at various operating points, with the controller tuned only at a chosen nominal operating point, are obtained and analyzed. Regulatory responses for output load changes are studied. The efficiency of the feedforward controller and the effects of modeling error are brought out. An IMC-based system is presented to show clearly how variations of system parameters affect the performance of the controller. The present work illustrates the effectiveness of the feedforward and IMC controllers.

  19. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    Science.gov (United States)

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.

  20. Real-time geometry-aware augmented reality in minimally invasive surgery.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W

    2017-10-01

    The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero-mean normalised cross-correlation stereo-matching method to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localisation of tumours and vessels, and measurement labelling with greater precision and accuracy compared with the state-of-the-art approaches.

  1. On the Minimization of Fluctuations in the Response Times of Autoregulatory Gene Networks

    Science.gov (United States)

    Murugan, Rajamanickam; Kreiman, Gabriel

    2011-01-01

    The temporal dynamics of the concentrations of several proteins are tightly regulated, particularly for critical nodes in biological networks such as transcription factors. An important mechanism to control transcription factor levels is through autoregulatory feedback loops where the protein can bind its own promoter. Here we use theoretical tools and computational simulations to further our understanding of transcription-factor autoregulatory loops. We show that the stochastic dynamics of feedback and mRNA synthesis can significantly influence the speed of response of autoregulatory genetic networks toward external stimuli. The fluctuations in the response-times associated with the accumulation of the transcription factor in the presence of negative or positive autoregulation can be minimized by confining the ratio of mRNA/protein lifetimes within 1:10. This predicted range of mRNA/protein lifetime agrees with ranges observed empirically in prokaryotes and eukaryotes. The theory can quantitatively and systematically account for the influence of regulatory element binding and unbinding dynamics on the transcription-factor concentration rise-times. The simulation results are robust against changes in several system parameters of the gene expression machinery. PMID:21943410
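
    A minimal deterministic sketch of the mechanism (the paper's analysis is stochastic, and every rate constant below is an illustrative assumption): a negatively autoregulated gene integrated with Euler steps, showing how the mRNA/protein lifetime ratio shifts the rise time of the protein level.

    import numpy as np

    def rise_time(tau_m, tau_p, k_m=10.0, k_p=5.0, K=50.0, h=2, dt=0.01, T=150.0):
        m = p = 0.0
        traj = []
        for _ in range(int(T / dt)):
            f = 1.0 / (1.0 + (p / K) ** h)   # negative autoregulation of transcription
            m += dt * (k_m * f - m / tau_m)
            p += dt * (k_p * m - p / tau_p)
            traj.append(p)
        traj = np.array(traj)
        return np.argmax(traj >= 0.9 * traj[-1]) * dt   # time to 90% of final level

    for ratio in (0.1, 1.0, 10.0):           # mRNA : protein lifetime ratios
        print(ratio, rise_time(tau_m=ratio * 10.0, tau_p=10.0))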

  2. Time Domain Equalizer Design Using Bit Error Rate Minimization for UWB Systems

    Directory of Open Access Journals (Sweden)

    Syed Imtiaz Husain

    2009-01-01

    Ultra-wideband (UWB) communication systems occupy huge bandwidths with very low power spectral densities. This feature makes UWB channels highly rich in resolvable multipaths. To exploit the temporal diversity, the receiver is commonly implemented through a Rake. The aim of capturing enough signal energy to maintain an acceptable output signal-to-noise ratio (SNR) dictates a very complicated Rake structure with a large number of fingers. Channel shortening or time domain equalization (TEQ) can simplify the Rake receiver design by reducing the number of significant taps in the effective channel. In this paper, we first derive the bit error rate (BER) of a multiuser and multipath UWB system in the presence of a TEQ at the receiver front end. This BER is then written in a form suitable for traditional optimization. We then present a TEQ design which minimizes the BER of the system to perform efficient channel shortening. The performance of the proposed algorithm is compared with some generic TEQ designs and other Rake structures in UWB channels. It is shown that the proposed algorithm maintains a lower BER while efficiently shortening the channel.

  3. A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange

    KAUST Repository

    Douik, Ahmed S.

    2014-05-11

    In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.

  4. Timing of nicotine lozenge administration to minimize trigger induced craving and withdrawal symptoms.

    Science.gov (United States)

    Kotlyar, Michael; Lindgren, Bruce R; Vuchetich, John P; Le, Chap; Mills, Anne M; Amiot, Elizabeth; Hatsukami, Dorothy K

    2017-08-01

    Smokers are often advised to use a nicotine lozenge when craving or withdrawal symptoms occur. This may be too late to prevent lapses. This study assessed whether nicotine lozenge use prior to a common smoking trigger can minimize trigger-induced increases in craving and withdrawal symptoms. Eighty-four smokers completed two laboratory sessions in random order. At one session, a nicotine lozenge was given immediately after a stressor (to approximate currently recommended use, i.e., after craving and withdrawal symptoms occur); at the other session subjects were randomized to receive a nicotine lozenge at time points ranging from immediately to 30 min prior to the stressor. Withdrawal symptoms and urge to smoke were measured using the Minnesota Nicotine Withdrawal Scale and the Questionnaire of Smoking Urges (QSU). Relative to receiving the lozenge after the stressor, a smaller increase in pre-stressor to post-stressor withdrawal symptom scores occurred when the lozenge was used immediately (p=0.03) and 10 min prior (p=0.044) to the stressor. Results were similar for factors 1 and 2 of the QSU when the lozenge was used immediately prior to the stressor. Use of a nicotine lozenge prior to a smoking trigger can decrease trigger-induced craving and withdrawal symptoms. Future studies are needed to determine if such use would increase cessation rates. Clinicaltrials.gov # NCT01522963. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Minimal cardiac transit-times in the diagnosis of heart disease

    International Nuclear Information System (INIS)

    Freundlieb, C.; Vyska, K.; Hoeck, A.; Schicha, H.; Becker, V.; Feinendegen, L.E.

    1976-01-01

    Using Indium-113m and the Gamma Retina V (Fucks-Knipping camera), the minimal cardiac transit times (MTTs) were measured radiocardiographically from the right auricle to the aortic root. This analysis served to determine the relation between stroke volume and the segment volume of the part of the circulation between the right auricle and the aortic root. In 39 patients with myocardial insufficiency of different clinical degrees, the effectiveness of digitalization was measured, over a period of up to 5 years, by means of the volume relation mentioned above. The following conclusions can be drawn from the results: digitalization of patients with myocardial insufficiency leads to an improvement of the impaired relation of central volumes. In patients with diminished cardiac reserve the improvement is drastic and often results in a nearly complete normalization. The data remain constant during therapy even for an observation period of 5 years. Digitalization of patients with congestive heart failure only leads to a partial improvement. In contrast to patients with diminished cardiac reserve, this effect is temporary. The different behaviour of the relation between stroke volume and segment volume in patients with diminished cardiac reserve and congestive heart failure under prolonged administration of digitalis points to the necessity of treatment with digitalis in the early stage of myocardial disease. (orig.)

  6. A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange

    KAUST Repository

    Douik, Ahmed S.; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim; Sorour, Sameh; Tembine, Hamidou

    2014-01-01

    In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.

  7. Laparoscopic vs. open approach for colorectal cancer: evolution over time of minimal invasive surgery.

    Science.gov (United States)

    Biondi, Antonio; Grosso, Giuseppe; Mistretta, Antonio; Marventano, Stefano; Toscano, Chiara; Drago, Filippo; Gangi, Santi; Basile, Francesco

    2013-01-01

    In the late '80s the successes of laparoscopic surgery for gallbladder disease laid the foundations for the modern use of this surgical technique in a variety of diseases. In the last 20 years, laparoscopic colorectal surgery has become a popular treatment option for colorectal cancer patients. Many studies emphasized its benefits, stating the significant advantages of the laparoscopic approach compared with open surgery: reduced blood loss, early return of intestinal motility, lower overall morbidity, and shorter duration of hospital stay, leading to a general agreement on laparoscopic surgery as an alternative to conventional open surgery for colon cancer. The reduced hospital stay may also decrease the cost of laparoscopic surgery for colorectal cancer, despite the higher operative costs compared with open surgery. The average reduction in total direct costs is difficult to define due to increasing costs over time, making comparisons between studies conducted over a time range of more than 10 years challenging. However, despite the theoretical advantages of laparoscopic surgery, it is still not considered the standard treatment for colorectal cancer patients due to technical limitations or patient characteristics that may affect short- and long-term outcomes. The laparoscopic approach to colectomy is slowly gaining acceptance for the management of colorectal pathology. Laparoscopic surgery for colon cancer demonstrates better short-term outcomes, oncologic safety, and long-term outcomes equivalent to open surgery. For rectal cancer, the laparoscopic technique can be more complex depending on the tumor location. The advantages of minimally invasive surgery may translate into better quality of care for oncological patients and lead to increased cost savings through the introduction of active enhanced recovery programs, which are likely cost-effective from the perspective of hospital health-care providers.

  8. Practicing on Newly Dead

    Directory of Open Access Journals (Sweden)

    Jewel Abraham

    2015-07-01

    A newly dead cadaver simulation is practiced on the physical remains of the dead before the onset of rigor mortis. This technique has potential benefits for providing real-life in-situ experience for novice providers in health care practice. Evolving ethical views in health care bring into question some of the ethical aspects associated with newly dead cadaver simulation in terms of justification for practice, autonomy, consent, and the need for disclosure. A clear statement of policies and procedures on newly dead cadaver simulation has yet to be implemented. Although there are benefits and disadvantages to in-situ cadaver simulation, such practices should not be carried out in secrecy, as there is no compelling evidence that suggests such training is imperative. Secrecy in these practices is a violation of the honor code of nursing ethics. As health care providers, practitioners are obliged to be ethically honest and trustworthy to their patients. The author explores the ethical aspects of using newly dead cadaver simulation in training novice nursing providers to gain competency in various lifesaving skills, which otherwise cannot be practiced on a living individual. The author explores multiple views on cadaver simulation in relation to ethical theories and practices such as consent and disclosure to family.

  9. Effects of short immersion time and cooling rates of copperizing process to the evolution of microstructures and copper behavior in the dead mild steel

    Science.gov (United States)

    Jatimurti, Wikan; Sutarsis, Cunika, Aprida Ulya

    2017-01-01

    In a dead mild steel with a maximum carbon content of 0.15%, carbon does not contribute much to strength. By adding copper as an alloying element, a balance between strength and ductility can be obtained through grain refining, solid solution strengthening, or Cu precipitation. This research aimed to analyse the changes in microstructure and copper behaviour in AISI 1006, including the phases formed, the composition, and the Cu dispersion. Copper was added by immersing the steel into molten copper, a process called copperizing, which relies on the principles of diffusion. Specimens were cut to 6 × 3 × 0.3 cm, preheated to 900°C, and the copper was melted at 1100°C. The immersion time of the specimens in the molten copper was varied between 5 and 7 minutes, and the cooling rate was varied by annealing, normalizing, and quenching. The tests conducted were optical microscopy, scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM-EDX), optical emission spectroscopy (OES), and X-ray diffraction (XRD). The results showed that the longer the immersion time and the slower the cooling rate, the more Cu diffused, producing a smaller grain size; the highest Cu content recorded was 0.277% in the copperized AISI 1006 steel immersed for 7 minutes and annealed, whose grain size was reduced to 23041.5404 µm². The annealed specimens show a ferrite phase, the normalized ones show a polygonal ferrite phase, while the quenched ones show a granular bainite phase. The copper-rich phase formed is single-phase Cu. In addition, the normalized and quenched specimens show that Cu dissolved in the Fe crystal, forming a solid solution.

  10. A novel high performance stopped-flow apparatus equipped with a special constructed mixing chamber containing a plunger under inert condition with a very short dead-time to investigate very rapid reactions

    Directory of Open Access Journals (Sweden)

    Sayyed Mostafa Habibi Khorassani

    2015-11-01

    The present work set out to establish a novel stopped-flow instrument equipped with a specially constructed mixing chamber containing a plunger, to enable kinetic study of very rapid reactions under a dry inert-atmosphere glove bag, in particular for reactions that are sensitive to moisture or air. A stopped-flow spectrophotometer is essentially a conventional spectrophotometer with the addition of a system for rapid mixing of solutions. The purpose of this work is to describe the fabrication and evaluation of a specially constructed, inexpensive stopped-flow system. The evaluation includes determination of the dead-time, the relative mixing efficiency, and the measurement of known rate constants. Herein, a dead-time of about 3.4 ms was determined for the final modified construction of the stopped-flow apparatus, allowing investigation of the rapid initial phase during which some form of reaction intermediate is presumed to be formed.

  11. An applied optimization based method for line planning to minimize travel time

    DEFF Research Database (Denmark)

    Bull, Simon Henry; Rezanova, Natalia Jurjevna; Lusby, Richard Martin

    The line planning problem in rail is to select a number of lines from a potential pool which provides sufficient passenger capacity and meets operational requirements, with some objective measure of solution line quality. We model the problem of minimizing the average passenger system time, including ...

  12. Dead-ice environments

    DEFF Research Database (Denmark)

    Krüger, Johannes; Kjær, Kurt H.; Schomacker, Anders

    2010-01-01

    ... glacier environment. The scientific challenges are to answer the key questions: What are the conditions for dead-ice formation? From which sources does the sediment cover originate? Which melting and reworking processes act in the ice-cored moraines? What is the rate of de-icing in the ice-cored moraines ...

  13. Resurrecting deadly carrots

    DEFF Research Database (Denmark)

    Weitzel, Corinna; Rønsted, Nina; Spalik, Krysztof

    2014-01-01

    Thapsia L. circumscribes a small genus of herbaceous perennials in the taxonomically difficult family Apiaceae. Thapsia occurs around the Mediterranean, extending from the Atlantic coasts of Portugal and Morocco to Crete and other Greek Islands in the East. Thapsia is commonly known as deadly carrots ...

  14. On the nature of rainfall in dry climate: Space-time patterns of convective rain cells over the Dead Sea region and their relations with synoptic state and flash flood generation

    Science.gov (United States)

    Belachsen, Idit; Marra, Francesco; Peleg, Nadav; Morin, Efrat

    2017-04-01

    The north-south negative gradient of mean annual rainfall in the study region was found to be negatively correlated with rain-cell intensity and positively correlated with rain-cell area. Additional analysis was done for convective rain cells over two nearby catchments located in the central part of the study region, by ascribing some of the rain events to observed flash-flood events. It was found that rain events associated with flash floods have higher maximal rain-cell intensity and lower minimal cell speed than rain events that did not lead to a flash flood in the watersheds. This information contributes to our understanding of rain patterns over the dry area of the Dead Sea and their connection to flash floods. The statistical distributions of rain-cell properties can be used for high space-time resolution stochastic simulations of rain storms that can serve as input to hydrological models.

  15. Critical flicker frequency and continuous reaction times for the diagnosis of minimal hepatic encephalopathy

    DEFF Research Database (Denmark)

    Lauridsen, Mette Enok Munk; Jepsen, Peter; Vilstrup, Hendrik

    2011-01-01

    Abstract Minimal hepatic encephalopathy (MHE) is intermittently present in up to 2/3 of patients with chronic liver disease. It impairs their daily living and can be treated. However, there is no consensus on diagnostic criteria, except that psychometric methods are required. We compared two easy ... appropriately to a sensory stimulus. The choice of test depends on the information needed in the clinical and scientific care and study of the patients.

  16. Minimizing driving times and greenhouse gas emissions in timber transport with a near-exact solution approach

    DEFF Research Database (Denmark)

    Oberscheider, Marco; Zazgornik, Jan; Henriksen, Christian Bugge

    2013-01-01

    Efficient transport of timber for supplying industrial conversion and biomass power plants is a crucial factor for competitiveness in the forest industry. Throughout recent years, minimizing driving times has been the main focus of optimization in this field. In addition to this aim, the objective ...

  17. Departure fuel loads in time-minimizing migrating birds can be explained by the energy costs of being heavy

    NARCIS (Netherlands)

    Klaassen, M.R.J.; Lindstrom, A.

    1996-01-01

    Lindstrom & Alerstam (1992, Am. Nat. 140, 477-491) presented a model that predicts optimal departure fuel loads as a function of the rate of fuel deposition in time-minimizing migrants. The basis of the model is that the coverable distance per unit of fuel deposited diminishes with increasing fuel load.

  18. Scheduling with Learning Effects and/or Time-Dependent Processing Times to Minimize the Weighted Number of Tardy Jobs on a Single Machine

    Directory of Open Access Journals (Sweden)

    Jianbo Qian

    2013-01-01

    We consider single machine scheduling problems with learning/deterioration effects and time-dependent processing times, with due date assignment consideration, and our objective is to minimize the weighted number of tardy jobs. By reducing all versions of the problem to an assignment problem, we solve them in O(n^4) time. For some important special cases, the time complexity can be improved to O(n^2) using dynamic programming techniques.
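
    The reduction can be illustrated with the classic positional-cost construction (a sketch under assumed data; this is not the paper's specific construction for the weighted number of tardy jobs): when the cost contribution of job j in position r is a known constant c[j][r], the optimal sequence is a linear assignment problem.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    p = np.array([5.0, 3.0, 8.0, 2.0])   # normal processing times (illustrative)
    n, a = len(p), -0.3                  # learning index is an assumption

    # With a positional learning effect p_j * r**a, a job in position r is
    # counted in the completion times of the (n - r + 1) jobs from position r
    # onward, so total completion time decomposes into positional costs.
    r = np.arange(1, n + 1)
    cost = p[:, None] * (r ** a)[None, :] * (n - r + 1)[None, :]

    jobs, positions = linear_sum_assignment(cost)   # Hungarian method
    order = jobs[np.argsort(positions)]             # job sequence by position
    print("sequence:", order, "total completion time:", cost[jobs, positions].sum())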

  19. New real-time MR image-guided surgical robotic system for minimally invasive precision surgery

    Energy Technology Data Exchange (ETDEWEB)

    Hashizume, M.; Yasunaga, T.; Konishi, K. [Kyushu University, Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Fukuoka (Japan); Tanoue, K.; Ieiri, S. [Kyushu University Hospital, Department of Advanced Medicine and Innovative Technology, Fukuoka (Japan); Kishi, K. [Hitachi Ltd, Mechanical Engineering Research Laboratory, Hitachinaka-Shi, Ibaraki (Japan); Nakamoto, H. [Hitachi Medical Corporation, Application Development Office, Kashiwa-Shi, Chiba (Japan); Ikeda, D. [Mizuho Ikakogyo Co. Ltd, Tokyo (Japan); Sakuma, I. [The University of Tokyo, Graduate School of Engineering, Bunkyo-Ku, Tokyo (Japan); Fujie, M. [Waseda University, Graduate School of Science and Engineering, Shinjuku-Ku, Tokyo (Japan); Dohi, T. [The University of Tokyo, Graduate School of Information Science and Technology, Bunkyo-Ku, Tokyo (Japan)

    2008-04-15

    To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel and with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. A newly developed MR image-guided surgical robotic system is feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)

  20. New real-time MR image-guided surgical robotic system for minimally invasive precision surgery

    International Nuclear Information System (INIS)

    Hashizume, M.; Yasunaga, T.; Konishi, K.; Tanoue, K.; Ieiri, S.; Kishi, K.; Nakamoto, H.; Ikeda, D.; Sakuma, I.; Fujie, M.; Dohi, T.

    2008-01-01

    To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel and with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. A newly developed MR image-guided surgical robotic system is feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)

  1. MINIMALLY INVASIVE SURGERY FOR GASTRIC CANCER: TIME TO CHANGE THE PARADIGM.

    Science.gov (United States)

    Barchi, Leandro Cardoso; Jacob, Carlos Eduardos; Bresciani, Cláudio José Caldas; Yagi, Osmar Kenji; Mucerino, Donato Roberto; Lopasso, Fábio Pinatel; Mester, Marcelo; Ribeiro-Júnior, Ulysses; Dias, André Roncon; Ramos, Marcus Fernando Kodama Pertille; Cecconello, Ivan; Zilberstein, Bruno

    2016-01-01

    Minimally invasive surgery, widely used to treat benign disorders of the digestive system, has become the focus of intense study in recent years in the field of surgical oncology. Since then, experience with this kind of approach has grown, aiming to provide the same oncological outcomes and survival as conventional surgery. Regarding gastric cancer, surgery is still considered the only curative treatment, depending on the extent of resection and lymphadenectomy performed. Conventional surgery remains the main modality performed worldwide. Notwithstanding, the role of minimally invasive access is yet to be clarified. To evaluate and summarize the current status of minimally invasive resection of gastric cancer, a literature review was performed using Medline/PubMed, Cochrane Library and SciELO with the following headings: gastric cancer, minimally invasive surgery, robotic gastrectomy, laparoscopic gastrectomy, stomach cancer. The language used for the research was English. 28 articles were considered, including randomized controlled trials, meta-analyses, and prospective and retrospective cohort studies. Minimally invasive gastrectomy may be considered a technical option in the treatment of early gastric cancer. As for advanced cancer, recent studies have demonstrated the safety and feasibility of the laparoscopic approach. Robotic gastrectomy will probably improve the outcomes obtained with laparoscopy. However, high cost is still a barrier to its use on a large scale.

  2. Economic Dispatch for Operating Cost Minimization under Real Time Pricing in Droop Controlled DC Microgrid

    DEFF Research Database (Denmark)

    Li, Chendan; Federico, de Bosio; Chen, Fang

    2017-01-01

    In this paper, an economic dispatch problem for total operation cost minimization in DC microgrids is formulated. An operating cost is associated with each generator in the microgrid, including the utility grid, combining the cost-efficiency of the system with the demand response requirements of the utility. The power flow model is included in the optimization problem, so transmission losses can be considered in the generation dispatch. By considering the primary (local) control of the grid-forming converters of the microgrid, optimal parameters can be applied directly at this control level, achieving higher control accuracy and faster response. The optimization problem is solved with a heuristic method. In order to test the proposed algorithm, a six-bus droop-controlled DC microgrid is used in the case studies. Simulation results show that under variable renewable energy generation and load ...
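
    A minimal sketch of the dispatch objective (the cost coefficients and the quadratic loss term are toy assumptions; the paper's formulation embeds droop control and the full power flow model):

    import numpy as np
    from scipy.optimize import minimize

    a = np.array([0.02, 0.04, 0.03])   # quadratic cost coefficients (illustrative)
    b = np.array([3.0, 2.5, 4.0])      # linear cost coefficients (illustrative)
    load = 120.0                       # total demand in kW

    cost = lambda P: np.sum(a * P**2 + b * P)
    losses = lambda P: 1e-4 * np.sum(P**2)          # toy transmission-loss model
    balance = {"type": "eq", "fun": lambda P: P.sum() - load - losses(P)}

    res = minimize(cost, x0=np.full(3, load / 3), constraints=[balance],
                   bounds=[(0.0, 100.0)] * 3)
    print("dispatch:", res.x.round(2), "total cost:", round(res.fun, 2))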

  3. Ten scenarios from early radiation to late time acceleration with a minimally coupled dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Fay, Stéphane, E-mail: steph.fay@gmail.com [Palais de la Découverte, Astronomy Department, Avenue Franklin Roosevelt, 75008 Paris (France)

    2013-09-01

    We consider General Relativity with matter, radiation and a minimally coupled dark energy defined by an equation of state w. Using the dynamical systems method, we find the equilibrium points of such a theory, assuming an expanding Universe and a positive dark energy density. Two of these points correspond to the classical radiation- and matter-dominated epochs of the Universe. For the other points, dark energy mimics matter, mimics radiation or accelerates the Universe's expansion. We then look for possible sequences of epochs describing a Universe starting with some radiation-dominated epoch(s) (mimicked or not by dark energy), then matter-dominated epoch(s) (mimicked or not by dark energy) and ending with an accelerated expansion. We find ten sequences able to follow this Universe history without singular behaviour of w at some saddle points. Most of them are new in the dark energy literature. To get more than these ten sequences, w has to be singular at some specific saddle equilibrium points. This is an unusual mathematical property of the equation of state in the dark energy literature, whose physical consequences tend to be disfavoured by observations. This thus distinguishes the ten above sequences from an infinity of ways to describe the Universe's expansion.

  4. Ten scenarios from early radiation to late time acceleration with a minimally coupled dark energy

    International Nuclear Information System (INIS)

    Fay, Stéphane

    2013-01-01

    We consider General Relativity with matter, radiation and a minimally coupled dark energy defined by an equation of state w. Using the dynamical systems method, we find the equilibrium points of such a theory, assuming an expanding Universe and a positive dark energy density. Two of these points correspond to the classical radiation- and matter-dominated epochs of the Universe. For the other points, dark energy mimics matter, mimics radiation or accelerates the Universe's expansion. We then look for possible sequences of epochs describing a Universe starting with some radiation-dominated epoch(s) (mimicked or not by dark energy), then matter-dominated epoch(s) (mimicked or not by dark energy) and ending with an accelerated expansion. We find ten sequences able to follow this Universe history without singular behaviour of w at some saddle points. Most of them are new in the dark energy literature. To get more than these ten sequences, w has to be singular at some specific saddle equilibrium points. This is an unusual mathematical property of the equation of state in the dark energy literature, whose physical consequences tend to be disfavoured by observations. This thus distinguishes the ten above sequences from an infinity of ways to describe the Universe's expansion.

  5. Adjusting patients streaming initiated by a wait time threshold in emergency department for minimizing opportunity cost.

    Science.gov (United States)

    Kim, Byungjoon B J; Delbridge, Theodore R; Kendrick, Dawn B

    2017-07-10

    Purpose: Two different systems for streaming patients were considered to improve efficiency measures such as waiting times (WTs) and length of stay (LOS) for a current emergency department (ED). A typical fast track area (FTA) and a fast track with a wait time threshold (FTW) were designed, and their effectiveness was compared from the perspective of the total opportunity cost of all patients' WTs in the ED. The paper aims to discuss these issues. Design/methodology/approach: This retrospective case study used computerized ED patient arrival-to-discharge time logs (between July 1, 2009 and June 30, 2010) to build computer simulation models for the FTA and FTW systems. Various wait time thresholds were applied to stream patients of different acuity levels, with the national average wait time for each acuity level considered as a threshold. Findings: The FTW showed a statistically significantly shorter total wait time than the current system or a typical FTA system. The patient streaming management would improve the service quality of the ED as well as patients' opportunity costs by reducing the total LOS in the ED. Research limitations/implications: The results of this study were based on computer simulation models with some assumptions, such as no transfer times between processes, an assumed arrival distribution of patients, and no deviation in flow pattern. Practical implications: When the streaming of patient flow can be managed based on the wait time before being seen by a physician, it is possible for patients to see a physician within a tolerable wait time, resulting in less crowding in the ED. Originality/value: A new streaming scheme of patients' flow may improve the performance of fast track systems.
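
    The streaming rule lends itself to a compact discrete-event sketch. The toy model below uses invented arrival and service rates and a single server on each track (not the study's calibrated ED model) and simply diverts any patient whose predicted main-queue wait exceeds a threshold to the fast track:

      import random

      # Toy sketch of wait-time-threshold streaming: a patient whose
      # predicted wait in the main queue exceeds THRESHOLD minutes is
      # diverted to the fast track. All rates are invented.
      random.seed(7)
      THRESHOLD = 30.0                                  # minutes
      main_busy_until = fast_busy_until = 0.0
      t, waits = 0.0, []
      for _ in range(1000):
          t += random.expovariate(1 / 6.0)              # ~6 min between arrivals
          start_main = max(t, main_busy_until)
          if start_main - t > THRESHOLD:                # predicted wait too long:
              start = max(t, fast_busy_until)           # stream to the fast track
              fast_busy_until = start + random.expovariate(1 / 15.0)
          else:
              start = start_main
              main_busy_until = start + random.expovariate(1 / 25.0)
          waits.append(start - t)
      print(f"mean wait: {sum(waits) / len(waits):.1f} min")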

  6. RubiShort: Reducing scan time in 82Rb heart scans to minimize movements artifacts

    DEFF Research Database (Denmark)

    Madsen, Jeppe; Vraa, Kaspar J.; Harms, Hans

    .013x, R2=0.98; %Reversible: y=1.008x, R2=0.95; TPD: y=1.000x, R2=0.99). Conclusion: Scan time of myocardial perfusion scans using 82Rb can be reduced from 7 min to 5 min without loss of quantitative accuracy. Since patient motion is frequent in the last minutes of the scans, scan time reduction...

  7. The Dead Walk

    Directory of Open Access Journals (Sweden)

    Bill Phillips

    2014-02-01

    Full Text Available Monsters have always enjoyed a significant presence in the human imagination, and religion was instrumental in replacing the physical horror they engendered with that of a moral threat. Zombies, however, are amoral – their motivation purely instinctive and arbitrary – yet they are, perhaps, the most loathed of all contemporary monsters. One explanation for this lies in the theory of the uncanny valley, proposed by robotics engineer Masahiro Mori. According to the theory, we reserve our greatest fears for those things which seem most human, yet are not – such as dead bodies. Such a reaction is most likely a survival mechanism to protect us from danger and disease – a mechanism even more essential when the dead rise up and walk. From their beginnings zombies have reflected western societies' greatest fears – be they of revolutionary Haitians, women, or communists. In recent years the rise in the popularity of the zombie in films, books and television series reflects our fears for the planet, the economy, and death itself.

  8. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    Full Text Available This paper tackles the single machine scheduling problem with sequence-dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic method to solve the model. The proposed approach uses a genetic algorithm (GA) to solve the problem in a reasonable amount of time. Because of the high sensitivity of the GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.
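
    For concreteness, the objective can be evaluated directly from a candidate job sequence. The sketch below is a minimal illustration of the fitness function such a GA would call once per chromosome (illustrative data, not the authors' implementation; a feasibility check against the precedence constraints is assumed to run first):

      # Total weighted tardiness on one machine with sequence-dependent
      # setup times. Data are illustrative, not from the paper.
      def total_weighted_tardiness(seq, proc, due, weight, setup):
          """seq: job ordering; proc/due/weight: per-job data;
          setup[i][j]: setup time when job j follows job i (0 = start)."""
          t, total, prev = 0, 0, 0
          for j in seq:
              t += setup[prev][j] + proc[j]
              total += weight[j] * max(0, t - due[j])
              prev = j
          return total

      proc, due, weight = {1: 4, 2: 3, 3: 6}, {1: 5, 2: 9, 3: 11}, {1: 2, 2: 1, 3: 3}
      setup = {0: {1: 1, 2: 2, 3: 1}, 1: {2: 1, 3: 2}, 2: {1: 2, 3: 1}, 3: {1: 1, 2: 2}}
      print(total_weighted_tardiness([1, 2, 3], proc, due, weight, setup))  # -> 15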

  9. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network.

    Science.gov (United States)

    Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang

    2017-01-01

    By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulators in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration-level fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.
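
    Discrete-time recurrent networks for such QPs are, in essence, projected dynamical systems. A minimal sketch of the idea for a bound-constrained QP (toy data; the paper's QP also carries equality constraints, which such networks typically absorb through dual variables):

      import numpy as np

      # Projected-gradient iteration, the discrete-time analogue of a
      # recurrent network solving: minimize 0.5 x'Qx + c'x, lo <= x <= hi.
      def qp_recurrent(Q, c, lo, hi, h=0.1, iters=500):
          x = np.zeros_like(c)
          for _ in range(iters):
              x = np.clip(x - h * (Q @ x + c), lo, hi)  # state update + projection
          return x

      Q = np.array([[2.0, 0.5], [0.5, 1.0]])
      c = np.array([-1.0, -1.0])
      print(qp_recurrent(Q, c, lo=np.zeros(2), hi=np.ones(2)))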

  10. The Role of Compensation Criteria to Minimize Face-Time Bias and Support Faculty Career Flexibility

    Directory of Open Access Journals (Sweden)

    Lydia Pleotis Howell MD

    2016-02-01

    Full Text Available Work-life balance is important to recruitment and retention of the younger generation of medical faculty, but medical school flexibility policies have not been fully effective. We have reported that our school’s policies are underutilized due to faculty concerns about looking uncommitted to career or team. Since policies include leaves and accommodations that reduce physical presence, faculty may fear “face-time bias,” which negatively affects evaluation of those not “seen” at work. Face-time bias is reported to negatively affect salary and career progress. We explored face-time bias on a leadership level and described development of compensation criteria intended to mitigate face-time bias, raise visibility, and reward commitment and contribution to team/group goals. Leaders from 6 partner departments participated in standardized interviews and group meetings. Ten compensation plans were analyzed, and published literature was reviewed. Leaders did not perceive face-time issues but saw team pressure and perception of availability as performance motivators. Compensation plans were multifactor productivity based with many quantifiable criteria; few addressed team contributions. Using these findings, novel compensation criteria were developed based on a published model to mitigate face-time bias associated with team perceptions. Criteria for organizational citizenship to raise visibility and reward group outcomes were included. We conclude that team pressure and perception of availability have the potential to lead to bias and may contribute to underuse of flexibility policies. Recognizing organizational citizenship and cooperative effort via specific criteria in a compensation plan may enhance a culture of flexibility. These novel criteria have been effective in one pilot department.

  11. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber; Alameer, Alaa; Chaaban, Anas; Sezgin, Aydin; Paulraj, Arogyaswami

    2017-01-01

    the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file

  12. Minimal variation in anti-A and -B titers among healthy volunteers over time

    DEFF Research Database (Denmark)

    Sprogøe, Ulrik; Yazer, Mark; Rasmussen, Mads Hvidkjær

    2017-01-01

    BACKGROUND: Using potentially out-of-group blood components, like low-titer A plasma and O whole blood, in the resuscitation of trauma patients is becoming increasingly popular. However, very little is known about whether the donors’ anti-A and/or -B titers change over time and whether repeated titer m...

  13. Dinosaurs of India: Dead but Alive

    Indian Academy of Sciences (India)

    Table of contents. Dinosaurs of India: Dead but Alive · Fossils · Evolution and O2 PAL · The Science in Dinosaurs · Origin/ Extinction of Dinosaurs · PowerPoint Presentation · India –94my + 50my · Icehouse /Greenhouse through time · Global Mean Annual Temperature Distributions at 100 my · Global Mean Annual ...

  14. Design and Validation of Real-Time Optimal Control with ECMS to Minimize Energy Consumption for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Aiyun Gao

    2017-01-01

    Full Text Available A real-time optimal control of parallel hybrid electric vehicles (PHEVs) with the equivalent consumption minimization strategy (ECMS) is presented in this paper, whose purpose is to minimize the total equivalent fuel consumption while maintaining the battery state of charge (SOC) within its operating range at all times. Vehicle and assembly models of the PHEV are established, providing the foundation for the following calculations. The ECMS is described in detail: an instantaneous cost function including the fuel energy and the electrical energy is proposed, with emphasis on the computation of the equivalence factor. The real-time optimal control strategy is designed by taking the minimum of the total equivalent fuel consumption as the control objective and the torque split factor as the control variable. The validity of the proposed control strategy is demonstrated both in the MATLAB/Simulink/Advisor environment and under actual transportation conditions, by comparing fuel economy, charge sustainability, and parts performance with three other control strategies under different driving cycles, including standard, actual, and real-time road conditions. Through numerical simulations and real vehicle tests, the accuracy of the approach used for the evaluation of the equivalence factor is confirmed, and the potential of the proposed control strategy in terms of fuel economy and keeping SOC deviations at a low level is illustrated.
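
    The instantaneous cost in an ECMS is conventionally the engine fuel power plus an equivalence factor times the battery electrical power, minimized over the torque split at each instant. A schematic sketch under that convention (toy engine/motor maps and a fixed equivalence factor s; the paper's vehicle models and its computation of the equivalence factor are far more detailed):

      # Schematic ECMS: pick the torque split u minimizing
      # fuel power + s * battery power. Maps below are toy stand-ins.
      def fuel_power(T_eng, w):
          # toy Willans-line engine model: idle loss + torque term
          return 0.0 if T_eng <= 0 else 3000.0 + (T_eng * w) / 0.35

      def ecms_split(T_req, w, s=2.5, steps=21):
          best_u, best_cost = 0.0, float("inf")
          for i in range(steps):
              u = i / (steps - 1)                 # engine share of torque
              P_fuel = fuel_power(u * T_req, w)
              P_batt = (1 - u) * T_req * w / 0.9  # ~90% electric path efficiency
              cost = P_fuel + s * P_batt          # equivalent fuel consumption
              if cost < best_cost:
                  best_u, best_cost = u, cost
          return best_u

      print(ecms_split(T_req=120.0, w=150.0))     # N*m, rad/s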

  15. Minimizing Experimental Setup Time and Effort at APS beamline 1-ID through Instrumentation Design

    Energy Technology Data Exchange (ETDEWEB)

    Benda, Erika; Almer, Jonathan; Kenesei, Peter; Mashayekhi, Ali; Okasinksi, John; Park, Jun-Sang; Ranay, Rogelio; Shastri, Sarvijt

    2016-01-01

    Sector 1-ID at the APS accommodates a number of different experimental techniques in the same spatial envelope of the E-hutch end station. These include high-energy small- and wide-angle X-ray scattering (SAXS and WAXS), high-energy diffraction microscopy (HEDM, both near- and far-field modes) and high-energy X-ray tomography. These techniques are frequently combined to allow users to obtain multimodal data, often attaining 1 μm spatial resolution and <0.05° angular resolution. Furthermore, these techniques are utilized while the sample is thermo-mechanically loaded to mimic real operating conditions. The instrumentation required for each of these techniques and environments has been designed and configured in a modular way with a focus on stability and repeatability between changeovers. This approach allows the end station to be more versatile, capable of collecting multimodal data in situ while reducing the time and effort typically required for setup and alignment, resulting in more efficient beam time use. Key instrumentation design features and the layout of the end station are presented.

  16. A time-minimizing hybrid method for fitting complex Moessbauer spectra

    International Nuclear Information System (INIS)

    Steiner, K.J.

    2000-07-01

    The process of fitting complex Moessbauer spectra is known to be time-consuming. The fitting process involves a mathematical model for the combined hyperfine interaction, which can be solved only by an iteration method. The iteration method is very sensitive to its input parameters: with arbitrary input parameters it is most unlikely that the iteration will converge. Up to now, a scientist has had to spend her/his time guessing appropriate input parameters for the iteration process. The idea is to replace this guessing phase by a genetic algorithm. The genetic algorithm starts with an initial population of arbitrary input parameter sets; each parameter set is called an individual. The first step is to evaluate the fitness of all individuals. Afterwards the current population is recombined to form a new population. The process of recombination involves the successive application of the genetic operators selection, crossover, and mutation. These operators mimic the process of natural evolution, i.e. the concept of the survival of the fittest. Even though there is no formal proof that the genetic algorithm will eventually converge, there is an excellent chance that there will be a population with very good individuals after some generations. The hybrid method presented in the following combines a very modern version of a genetic algorithm with a conventional least-squares routine solving the combined-interaction Hamiltonian, i.e. providing a physical solution with the original Moessbauer parameters from a minimum of input. (author)
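
    The hybrid can be caricatured in a few lines: a crude genetic search supplies starting parameters that a conventional least-squares routine then refines. In this sketch the "spectrum" is a toy Lorentzian doublet rather than a combined hyperfine-interaction model, and the genetic phase is reduced to elitism plus mutation:

      import numpy as np
      from scipy.optimize import least_squares

      def model(p, v):
          a1, c1, a2, c2, w = p
          lor = lambda a, c: a * w**2 / ((v - c)**2 + w**2)
          return lor(a1, c1) + lor(a2, c2)

      rng = np.random.default_rng(0)
      v = np.linspace(-4, 4, 200)
      data = model([1.0, -1.1, 0.8, 1.3, 0.3], v) + 0.02 * rng.standard_normal(200)
      fitness = lambda p: np.sum((model(p, v) - data) ** 2)

      # genetic phase: random population, keep the best, mutate around it
      pop = rng.uniform([0, -3, 0, 0, 0.1], [2, 0, 2, 3, 1], size=(200, 5))
      for _ in range(30):
          best = min(pop, key=fitness)
          pop = best + 0.1 * rng.standard_normal((200, 5))

      # conventional phase: least-squares refinement from the GA's guess
      print(least_squares(lambda p: model(p, v) - data, x0=best).x)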

  17. Population dynamics of minimally cognitive individuals. Part 2: Dynamics of time-dependent knowledge

    Energy Technology Data Exchange (ETDEWEB)

    Schmieder, R.W.

    1995-07-01

    The dynamical principle for a population of interacting individuals with mutual pairwise knowledge, presented by the author in a previous paper for the case of constant knowledge, is extended to include the possibility that the knowledge is time-dependent. Several mechanisms are presented by which the mutual knowledge, represented by a matrix K, can be altered, leading to dynamical equations for K(t). The author presents various examples of the transient and long-time asymptotic behavior of K(t) for populations of relatively isolated individuals interacting infrequently in local binary collisions. Among the effects observed in the numerical experiments are knowledge diffusion, learning transients, and fluctuating equilibria. This approach will be most appropriate to small populations of complex individuals such as simple animals, robots, computer networks, agent-mediated traffic, simple ecosystems, and games. Evidence of metastable states and intermittent switching leads the author to envision a spectroscopy associated with such transitions that is independent of the specific physical individuals and the population. Such spectra may serve as good lumped descriptors of the collective emergent behavior of large classes of populations in which mutual knowledge is an important part of the dynamics.

  18. Timing incorporation of different green manure crops to minimize the risk of nitrogen leaching

    Directory of Open Access Journals (Sweden)

    H. KÄNKÄNEN

    2008-12-01

    Full Text Available Seven field trials at four research sites were carried out to study the effect of the incorporation time of different plant materials on soil mineral N content during two successive seasons. Annual hairy vetch (Vicia villosa Roth), red clover (Trifolium pratense L.), westerwold ryegrass (Lolium multiflorum Lam. var. westerwoldicum) and straw residues of N-fertilized spring barley (Hordeum vulgare) were incorporated into the soil by ploughing in early September, late October and the following May, and by reduced tillage in May. Delaying incorporation of the green manure crop in autumn lessened the risk of N leaching. The higher the crop N and soil NO3-N content, the greater the risk of leaching. Incorporation in the following spring, which lessened the risk of N leaching compared with early autumn ploughing, often had an adverse effect on the growth of the succeeding crop. After spring barley, the NO3-N content of the soil tended to be high, but the timing of incorporation did not have a marked effect on soil N. With an exceptionally high soil mineral N content, N leaching was best inhibited by growing westerwold ryegrass in the first experimental year.

  19. Dressing the dead body

    Directory of Open Access Journals (Sweden)

    Birgitta Nordström

    2016-12-01

    Full Text Available My current research focuses on textiles and rites, especially woven textiles for funerals and moments of loss. What active role can a textile such as an infant-wrapping cloth or a funeral pall play in the mourning process? This article will describe the development and current questions that address (1) the infant-wrapping cloth, the textile that is used to dress, clothe, or cover the dead body, with particular attention to the question of infant mortality and the material practices of care; and (2) the funeral pall that is used at funerals, draped over the coffin or as a body cover in hospital viewing rooms. One example to be presented is Kortedalakrönika (‘The Chronicle of Kortedala’), a collaborative project woven for a church in Gothenburg. My work is based in artistic practice but opens up several scientific and existential questions.

  20. Dead of night.

    Science.gov (United States)

    Balter, Leon

    2010-07-01

    Dead of Night, the first psychoanalytic horror film, was produced in England in 1945, immediately after the end of World War II--that is, after the English population had suffered systematic Nazi terror from imminent invasion, incessant aerial bombing, and rocket-bombs. This film continued the prewar format of horror films based on themes of the supernatural and the hubris and excesses of science. However, it introduced psychoanalysis as the science in question. The film is structured on two levels: a genteel English country weekend to which witty and urbane guests have been invited; and five horror stories told by the guests. Psychoanalytic insights into this film structure are used here to explain how the film induces horror in the audience.

  1. Contribution to the minimization of time for the solution of algebraic differential equations system

    International Nuclear Information System (INIS)

    Michael, Samir.

    1982-11-01

    This note deals with the solution of large algebraic-differential systems encountered in the physical sciences, especially in electronics and nuclear physics. The theoretical aspects of the stability of multistep methods are presented in detail. The stability condition is developed, and we present our own conditions of stability. These conditions give rise to many new formulae that have very small truncation errors. However, for real-time simulation it is necessary to obtain a very high computation speed. For this purpose, we have considered a multiprocessor machine and have investigated the parallelization of the algorithm of the generalized GEAR method. For a linear system, the GAUSS-JORDAN method is used with some modifications. A new algorithm is presented for parallel matrix multiplication. This research work has been applied to the resolution of a system of equations corresponding to an experiment of gamma thermometry in a nuclear reactor (four thermometers in this case) [fr]

  2. Evaluation of the minimal replication time of Cauliflower mosaic virus in different hosts

    International Nuclear Information System (INIS)

    Khelifa, Mounia; Masse, Delphine; Blanc, Stephane; Drucker, Martin

    2010-01-01

    Though the duration of a single round of replication is an important biological parameter, it has been determined for only a few viruses. Here, this parameter was determined for Cauliflower mosaic virus (CaMV) in transfected protoplasts from different hosts: the highly susceptible Arabidopsis and turnip, and Nicotiana benthamiana, in which CaMV accumulates only slowly. Four methods of differing sensitivity were employed: labelling of (1) progeny DNA and (2) capsid protein, (3) immunocapture PCR, and (4) progeny-specific PCR. The first progeny virus was detected about 21 h after transfection. This value was confirmed by all methods, indicating that our estimate was not biased by the sensitivity of the detection method, and approximates the actual time required for one round of CaMV replication. Unexpectedly, the replication kinetics were similar in the three hosts, suggesting that the slow accumulation of CaMV in Nicotiana plants is determined by non-optimal interactions in other steps of the infection cycle.

  3. Machine scheduling to minimize weighted completion times the use of the α-point

    CERN Document Server

    Gusmeroli, Nicoló

    2018-01-01

    This work reviews the most important results regarding the use of the α-point in Scheduling Theory. It provides a number of different LP-relaxations for scheduling problems and seeks to explain their polyhedral consequences. It also explains the concept of the α-point and how the conversion algorithm works, pointing out the relations to the sum of the weighted completion times. Lastly, the book explores the latest techniques used for many scheduling problems with different constraints, such as release dates, precedences, and parallel machines. This reference book is intended for advanced undergraduate and postgraduate students who are interested in scheduling theory. It is also inspiring for researchers wanting to learn about sophisticated techniques and open problems of the field.

  4. SU-E-P-27: Efficient Process for AccuBoost Planning and Treatment Delivery to Minimize Patient Compression Time

    Energy Technology Data Exchange (ETDEWEB)

    Iftimia, I; Talmadge, M; Halvorsen, P [Lahey Clinic, Burlington, MA (United States)

    2015-06-15

    Purpose: To implement an efficient and robust process for AccuBoost planning and treatment delivery that can be safely performed by a single Physicist while minimizing the patient's total session time. Methods: Following a thorough commissioning and validation process, templates were created in the brachytherapy planning system for each AccuBoost applicator. Tables of individual and total nominal dwell times for each applicator as a function of separation were generated to streamline planning, while an Excel-based nomogram provided by the vendor functions as a secondary verification of the treatment parameters. Tables of surface dose as a function of separation and applicator, along with concise guidance documents for applicator selection, are readily available during the planning process. The entire process is described in a set of detailed Standard Operating Procedures which, in addition to the items described above, include a verbal time-out between the primary planner and the individual performing the secondary verification, as well as direct visual confirmation of applicator placement using an articulated mirror. Prior to treatment initiation, a final time-out is conducted with the Radiation Oncologist. Chart documentation is finalized after the patient is released from compression following completion of the treatment. Results: With the aforementioned procedures, it has been possible to consistently limit the time required to prepare each treatment such that the patient is typically under compression for less than 10 minutes per orientation prior to the initiation of treatment, which is particularly important for APBI cases. This process can be overseen by a single physicist assisted by a dosimetrist and has been optimized during the past 16 months, with 180 treatment sessions safely completed to date. Conclusion: This work demonstrates the implementation of an efficient and robust process for real-time-planned AccuBoost treatments that effectively minimizes

  5. Operation Cost Minimization of Droop-Controlled DC Microgrids Based on Real-Time Pricing and Optimal Power Flow

    DEFF Research Database (Denmark)

    Li, Chendan; de Bosio, Federico; Chaudhary, Sanjay Kumar

    2015-01-01

    In this paper, an optimal power flow problem is formulated in order to minimize the total operation cost by considering real-time pricing in DC microgrids. Each generation resource in the system, including the utility grid, is modeled in terms of operation cost, which combines the cost... The problem is solved in a heuristic way by using genetic algorithms. In order to test the proposed algorithm, a six-bus droop-controlled DC microgrid is used as a case study. The obtained simulation results show that under variable renewable generation, load, and electricity prices, the proposed method can...

  6. Minimal residual HIV viremia: verification of the Abbott Real-Time HIV-1 assay sensitivity

    Directory of Open Access Journals (Sweden)

    Alessandra Amendola

    2010-06-01

    Full Text Available Introduction: In HIV-1 infection, the increase in the number of CD4 T lymphocytes and the decline in viral load are the main indicators of the effectiveness of antiretroviral therapy. On average, 85% of patients receiving effective treatment have persistent suppression of plasma viral load below the detection limit (<50 copies/mL) of clinically used viral load assays, regardless of the treatment regimen in use. It is known, however, that even when viremia is reduced below the sensitivity limit of current diagnostic assays, the virus persists in “reservoirs” and traces of free virions can be detected in plasma. There is considerable interest in investigating the clinical significance of residual viremia. Advances in molecular diagnostics nowadays allow a wide dynamic range to be coupled with high sensitivity. The Abbott Real-time HIV-1 test is linear from 40 to 10^7 copies/mL and provides, below 40 copies/mL, additional information such as “<40 cp/mL, target detected” or “target not detected”. The HIV-1 detection is verified by the max-Ratio algorithm software. We assessed the test sensitivity when the qualitative response is considered as well. Methods: A ‘probit’ analysis was performed using dilutions of the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC code: 99/634), defined in IU/mL and different from the reagent used by the manufacturer (VQA, Virology Quality Assurance Laboratory of the AIDS Clinical Trial Group) for standardization and definition of performance. The sample input volume (0.6 mL) was the same as that used in clinical routine. A total of 196 replicates at concentrations decreasing from 120 to 5 copies/mL, in three different sessions, were tested. The ‘probit’ analysis (binomial dose-response model, 95% “hit-rate”) was carried out with the SAS 9.1.3 software package. Results: The sensitivity of the “<40 cp/mL, target detected” response was equal to 28.76 copies/mL, with 95% confidence limits between 22.19 and 52.27 copies
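
    The probit step amounts to fitting P(detect) = Φ(a + b·log10 c) by maximum likelihood and inverting it at a 95% hit rate. A minimal sketch with invented detection counts (not the study's raw data):

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      conc = np.array([5, 10, 20, 40, 80, 120])   # copies/mL
      n = np.array([30, 30, 34, 34, 34, 34])      # replicates tested (toy)
      k = np.array([6, 14, 24, 32, 34, 34])       # replicates detected (toy)

      def nll(p):                                 # negative log-likelihood
          prob = norm.cdf(p[0] + p[1] * np.log10(conc)).clip(1e-9, 1 - 1e-9)
          return -np.sum(k * np.log(prob) + (n - k) * np.log(1 - prob))

      a, b = minimize(nll, x0=[0.0, 1.0]).x
      print(10 ** ((norm.ppf(0.95) - a) / b))     # 95% detection limit, cp/mL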

  7. Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    KAUST Repository

    Kakar, Jaber

    2017-10-29

    An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment.

  8. Vivitron dead section pumping tests

    International Nuclear Information System (INIS)

    Heugel, J.; Bayet, J.P.; Brandt, C.; Delhomme, C.; Krieg, C.; Kustner, F.; Meiss, R.; Riehl, R.; Roth, C.; Schlewer, B.; Six, P.; Weber, A.

    1990-10-01

    Pumping tests have been conducted on a simulated accelerator dead section. The behavior of different pump types are compared and analyzed. Vacuum conditions to be expected in the Vivitron are reached and several parameters are verified. Selection of a pump for the Vivitron dead section is confirmed

  9. Factors affecting fall down rates of dead aspen (Populus tremuloides) biomass following severe drought in west-central Canada.

    Science.gov (United States)

    Hogg, Edward H. (Ted); Michaelian, Michael

    2015-05-01

    Increases in mortality of trembling aspen (Populus tremuloides Michx.) have been recorded across large areas of western North America following recent periods of exceptionally severe drought. The resultant increase in standing dead tree biomass represents a significant potential source of carbon emissions to the atmosphere, but the timing of emissions is partially driven by dead-wood dynamics, which include the fall down and breakage of dead aspen stems. The rate at which dead trees fall to the ground also strongly influences the period over which forest dieback episodes can be detected by aerial surveys or satellite remote sensing observations. Over a 12-year period (2000-2012), we monitored the annual status of 1010 aspen trees that died during and following a severe regional drought within 25 study areas across west-central Canada. Observations of stem fall down and breakage (snapping) were used to estimate woody biomass transfer from standing to downed dead wood as a function of years since tree death. For the region as a whole, we estimated that >80% of standing dead aspen biomass had fallen after 10 years. Overall, the rate of fall down was minimal during the year following stem death, but thereafter fall rates followed a negative exponential equation with k = 0.20 per year. However, there was high between-site variation in the rate of fall down (k = 0.08-0.37 per year). The analysis showed that fall down rates were positively correlated with stand age, site windiness, and the incidence of decay fungi (Phellinus tremulae (Bond.) Bond. and Boris.) and wood-boring insects. These factors are thus likely to influence the rate of carbon emissions from dead trees following periods of climate-related forest die-off.
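
    The reported figures are mutually consistent if the first-year delay is treated as a simple one-year offset before exponential fall down (a reading of the model, not the authors' exact formulation):

      \[
        S(t) = e^{-k\,(t-1)} \ (t \ge 1), \quad k = 0.20\ \mathrm{yr}^{-1}
        \;\Rightarrow\; 1 - S(10) = 1 - e^{-0.20 \times 9} \approx 0.83,
      \]

    i.e. roughly 83% of the standing biomass down after ten years, in line with the reported >80%.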

  10. Towards Uniform Accelerometry Analysis: A Standardization Methodology to Minimize Measurement Bias Due to Systematic Accelerometer Wear-Time Variation

    Directory of Open Access Journals (Sweden)

    Tarun R. Katapally, Nazeem Muhajarine

    2014-06-01

    Full Text Available Accelerometers are predominantly used to objectively measure the entire range of activity intensities: sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data there can be systematic wear-time variation. To explore this variation, accelerometer data from the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated-measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed in which case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardization descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and
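
    In its simplest reading, the standardization step rescales each person-day's activity minutes from the observed wear-time to the controlled wear-time. A toy sketch under that linear-interpolation assumption (the study's procedure may be more elaborate):

      # Toy wear-time standardization: scale each activity variable from
      # observed wear-time to an analyst-specified controlled wear-time.
      def standardize(minutes, worn_hours, controlled_hours=13.0):
          scale = controlled_hours / worn_hours
          return {k: v * scale for k, v in minutes.items()}

      day = {"SED": 420.0, "LPA": 180.0, "MVPA": 35.0}   # minutes, illustrative
      print(standardize(day, worn_hours=10.5))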

  11. Comprehensive Cost Minimization in Distribution Networks Using Segmented-time Feeder Reconfiguration and Reactive Power Control of Distributed Generators

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2016-01-01

    In this paper, an efficient methodology is proposed to deal with the segmented-time reconfiguration problem of distribution networks, coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches and of the distributed generators' reactive powers in order to minimize comprehensive cost. The corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in the VC++ 6.0 programming language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...

  12. Youth in Dead End

    Directory of Open Access Journals (Sweden)

    Duygu TANRIKULU

    2011-01-01

    Full Text Available The primary factor in ensuring economic and social development, and in building a healthy society, is the education system, which plays a significant role in human capital formation and shapes the social structure and its outputs. In this context, there are risks threatening the youth trying to position themselves on the education-employment line, as well as critical areas in need of national policy intervention. Hence, by analyzing indicators of education and the labor force, this study aims to reveal the number of young people at risk and to identify these critical areas, highlighting the urgent need for policies focused on youth in a dead end. The study emphasizes that the education system confronts youth with problems of access and quality, and that a significant number of young people are neither in education nor in employment, underlining the necessity of bringing this inactive youth into the economy in addition to equipping them with the qualifications required for active participation in social life. Thus, to prevent further loss of human capital, policy is needed in two directions: focusing on the education system to prevent new hopeless generations on the one hand, and on the inclusion of disadvantaged youth on the other.

  13. COMPARATIVE ANALYSIS OF TECHNILOGIES OF CHITOSAN PRODUCTION FROM DEAD BEES

    Directory of Open Access Journals (Sweden)

    Marina Abramova

    2017-07-01

    Full Text Available Objective: The aim of this work is to study the characteristics of technologies for obtaining chitosan from unconventional sources, namely from dead bees. Methods: The article considers three methods of obtaining chitosan from dead bees: a technology using dead bees with a low degree of drying; a technology using dead bees with a high degree of drying; and a technology using dead bees with a high degree of drying but without separation of the deproteination and deacetylation stages. Results: It is shown that the technology using dead bees with a high degree of drying but without separation of the deproteination and deacetylation stages requires neither high temperatures nor long processing times. The chitosan yield with this technology is 21-24%. Discussion: The expediency of using dead bees as a raw material for chitosan production in Ukraine is shown. The technologies for obtaining chitosan from dead bees are compared, and the most efficient one is chosen: the one providing the highest yield of the finished product, and thus the most promising for application in practice.

  14. Dynamic optimization of dead-end membrane filtration

    NARCIS (Netherlands)

    Blankert, B.; Betlem, Bernardus H.L.; Roffel, B.; Marquardt, Wolfgang; Pantelides, Costas

    2006-01-01

    An operating strategy aimed at minimizing the energy consumption during the filtration phase of dead-end membrane filtration has been formulated. A method allowing fast calculation of trajectories is used to allow incorporation in a hierarchical optimization scheme. The optimal trajectory can be

  15. Dead wood for biodiversity - foresters torn between mistrust and commitment

    International Nuclear Information System (INIS)

    Deuffic, Philippe

    2010-01-01

    Dead wood is a key element of forest biodiversity and is used as one of the indicators for sustainable forest management. A survey was conducted among foresters and users in the Landes de Gascogne and Île-de-France areas to assess practices and social representations associated with dead wood. The results reveal a diversity of practices and divergent views on the implications of dead wood. The 64 respondents can be divided into roughly six groups (G1: the 'industrial foresters', G2: the 'silvicultural foresters', G3: the 'remote foresters', G4: the 'environmentalist foresters', G5: the 'naturalists' and G6: the 'users'). Among other things, they can be differentiated by their management practices, their degree of ecological knowledge and concern, their social networks, their aesthetic judgment, their perception of risks and their economic requirements. While underscoring how little purchase biodiversity-related issues have on average, this sociological survey also highlights: the need for a minimal regulatory framework to achieve integrated retention of dead wood, the serious concern of forest managers in the Landes with plant health risks associated with dead wood, and the need for a functional justification for keeping dead wood in the ecosystem. (authors)

  16. Evaluation of Timing and Dosage of a Parent-Based Intervention to Minimize College Students’ Alcohol Consumption

    Science.gov (United States)

    Turrisi, Rob; Mallett, Kimberly A.; Cleveland, Michael J.; Varvil-Weld, Lindsey; Abar, Caitlin; Scaglione, Nichole; Hultgren, Brittney

    2013-01-01

    Objective: The study evaluated the timing and dosage of a parent-based intervention to minimize alcohol consumption for students with varying drinking histories. Method: First-year students (N = 1,900) completed Web assessments during the summer before college (baseline) and two follow-ups (fall of first and second years). Students were randomized to one of four conditions (pre-college matriculation [PCM], pre-college matriculation plus boosters [PCM+B], after college matriculation [ACM], and control conditions). Seven indicators of drinking (drink in past month, been drunk in past month, weekday [Sunday to Wednesday] drinking, Thursday drinking, weekend [Friday, Saturday] drinking, heavy episodic drinking in past 2 weeks, and peak blood alcohol concentration students. PMID:23200148

  17. Evaluation of timing and dosage of a parent-based intervention to minimize college students' alcohol consumption.

    Science.gov (United States)

    Turrisi, Rob; Mallett, Kimberly A; Cleveland, Michael J; Varvil-Weld, Lindsey; Abar, Caitlin; Scaglione, Nichole; Hultgren, Brittney

    2013-01-01

    The study evaluated the timing and dosage of a parent-based intervention to minimize alcohol consumption for students with varying drinking histories. First-year students (N = 1,900) completed Web assessments during the summer before college (baseline) and two follow-ups (fall of first and second years). Students were randomized to one of four conditions (pre-college matriculation [PCM], pre-college matriculation plus boosters [PCM+B], after college matriculation [ACM], and control conditions). Seven indicators of drinking (drink in past month, been drunk in past month, weekday [Sunday to Wednesday] drinking, Thursday drinking, weekend [Friday, Saturday] drinking, heavy episodic drinking in past 2 weeks, and peak blood alcohol concentration students.

  18. And the Dead Remain Behind

    Directory of Open Access Journals (Sweden)

    Peter Read

    2013-08-01

    Full Text Available In most cultures the dead and their living relatives are held in a dialogic relationship. The dead have made it clear, while living, what they expect from their descendants. The living, for their part, wish to honour the tombs of their ancestors; at the least, to keep the graves of the recent dead from disrepair. Despite the strictures, the living can fail their responsibilities, for example, by migration to foreign countries. The peripatetic Chinese are one of the few cultures able to overcome the dilemma of the wanderer or the exile. With the help of a priest, an Australian Chinese migrant may summon the soul of an ancestor from an Asian grave to a Melbourne temple, where the spirit, though removed from its earthly vessel, will rest and remain at peace. Amongst cultures in which such practices are not culturally appropriate, to fail to honour the family dead can be exquisitely painful. Violence is the cause of most failure.

  19. Anthrax, People and Dead Hippos

    Centers for Disease Control (CDC) Podcasts

    2017-11-07

    Epidemiologist Dr. Melissa Marx discusses anthrax deaths in people who ate dead hippos.  Created: 11/7/2017 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 11/7/2017.

  20. Multi-Time Scale Coordinated Scheduling Strategy with Distributed Power Flow Controllers for Minimizing Wind Power Spillage

    Directory of Open Access Journals (Sweden)

    Yi Tang

    2017-11-01

    Full Text Available The inherent variability and randomness of large-scale wind power integration have brought great challenges to power flow control and dispatch. The distributed power flow controller (DPFC) offers greater flexibility and capacity for power flow control in systems with wind generation. This paper proposes a multi-time scale coordinated scheduling model with DPFC to minimize wind power spillage. The configuration of DPFCs is initially determined by a stochastic method. Afterward, two sequential procedures covering the day-ahead and real-time scales are applied to determine the maximum schedulable wind sources, the optimal outputs of generating units, and the operation settings of the DPFCs. The generating plan is obtained initially in the day-ahead scheduling stage and modified in the real-time scheduling model, while considering the uncertainty of wind power and the fast operation of DPFCs. Numerical simulation results on the IEEE-RTS79 system illustrate that wind power scheduling is maximized with the optimal deployment and operation of DPFCs, which confirms the applicability and effectiveness of the proposed method.

  1. A Design of Real-time Automatic Focusing System for Digital Still Camera Using the Passive Sensor Error Minimization

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.S. [Samsung Techwin Co., Ltd., Seoul (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [University of Seoul, Seoul (Korea)

    2002-05-01

    In this paper, the implementation of a new AF (Automatic Focusing) system for a digital still camera is introduced. The proposed system operates in real time, adjusting focus after measuring the distance to the object with a passive sensor, which differs from the typical method. In addition, measurement errors were minimized by using empirically acquired data, and the optimal measuring time was obtained using the EV (Exposure Value) calculated from the CCD luminance signal. Moreover, the system adopts an auxiliary light source for focusing in completely dark conditions, which are very hard for CCD image processing. Since this is an open-loop system that adjusts focus immediately after the distance measurement, it guarantees real-time operation. The performance of the new AF system was verified by comparing the focusing-value curve obtained from the AF experiment with the one obtained by manual focusing (MF). In both cases, an edge detector was used for various objects and backgrounds. (author). 9 refs., 11 figs., 5 tabs.

  2. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.

  3. Minimizing the Carbon Footprint for the Time-Dependent Heterogeneous-Fleet Vehicle Routing Problem with Alternative Paths

    Directory of Open Access Journals (Sweden)

    Wan-Yu Liu

    2014-07-01

    Full Text Available To respond to the need to reduce greenhouse gas emissions and global warming, this paper investigates the minimal-carbon-footprint time-dependent heterogeneous-fleet vehicle routing problem with alternative paths (MTHVRPP). The aim is to find the route with the smallest carbon footprint, instead of the shortest route distance (the conventional approach), to serve a number of customers with a heterogeneous fleet of vehicles in cases where there may be more than one path between each pair of customers, and where vehicle speed differs at different times of the day. Inheriting the NP-hardness of the vehicle routing problem, the MTHVRPP is also NP-hard. This paper further proposes a genetic algorithm (GA) to solve this problem. The solution represented by our GA determines the customer serving order of each vehicle type; a capacity check then splits each vehicle type's ordering into multiple routes, and path selection determines the detailed path of each route. Additionally, this paper improves the energy consumption model used for calculating the carbon footprint more precisely. Compared with the results without alternative paths, our experimental results show that the alternative paths in this experiment have a significant impact on the results in terms of carbon footprint.

  4. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    Science.gov (United States)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As in the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
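
    Schematically, the viscously corrected incremental minimization scheme described above reads as follows (with μ > 0 a hypothetical choice of viscous weight; the paper's δ may take a more general form):

      \[
        x_k \in \operatorname*{arg\,min}_{x \in X}
        \Big( E(t_k, x) + d(x_{k-1}, x) + \delta(x_{k-1}, x) \Big),
        \qquad \delta(y, x) = \mu\, d^2(y, x).
      \]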

  5. Late-time acceleration and phantom divide line crossing with non-minimal coupling and Lorentz-invariance violation

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Sadatian, S.D.

    2008-01-01

    We consider two alternative dark-energy models: a Lorentz-invariance-preserving model with a non-minimally coupled scalar field, and a Lorentz-invariance-violating model with a minimally coupled scalar field. We study accelerated expansion and the dynamics of the equation-of-state parameter in these scenarios. While a minimally coupled scalar field is not capable of being a successful dark-energy candidate that crosses the cosmological constant line, a non-minimally coupled scalar field with Lorentz invariance, or a minimally coupled scalar field with Lorentz-invariance violation, does have this capability. In the latter case, accelerated expansion and phantom divide line crossing are the results of the interactive nature of this Lorentz-violating scenario. (orig.)

  6. Similarity measure and topology evolution of foreign exchange markets using dynamic time warping method: Evidence from minimal spanning tree

    Science.gov (United States)

    Wang, Gang-Jin; Xie, Chi; Han, Feng; Sun, Bo

    2012-08-01

    In this study, we employ a dynamic time warping (DTW) method to study the topology of similarity networks among 35 major currencies in international foreign exchange (FX) markets, measured by the minimal spanning tree (MST) approach, which is expected to overcome the synchronicity restriction of the Pearson correlation coefficient. In the empirical process, we first subdivide the analysis period from June 2005 to May 2011 into three sub-periods: before, during, and after the US sub-prime crisis. Second, we choose NZD (New Zealand dollar) as the numeraire and then analyze the topology evolution of FX markets in terms of the structural changes of the MSTs during the above periods. We also present the hierarchical tree associated with the MST to study the currency clusters in each sub-period. Our results confirm that USD and EUR are the predominant world currencies. However, USD gradually loses its most central position through the crisis, while EUR acts as a stable center of the MST. Furthermore, an interesting finding is that, after the crisis, SGD (Singapore dollar) becomes a new center currency for the network.
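
    The pipeline (DTW distances between return series, then an MST over the resulting distance matrix) is easy to reproduce in miniature. A sketch with random toy series standing in for the 35 currencies, using the classic O(nm) DTW recursion; the paper's exact normalization of the distance may differ:

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      def dtw(a, b):
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      rng = np.random.default_rng(1)
      series = rng.standard_normal((4, 60))     # 4 toy "currency" return series
      dist = np.array([[dtw(x, y) for y in series] for x in series])
      print(minimum_spanning_tree(dist).toarray())   # edges of the MST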

  7. Modeling decay rates of dead wood in a neotropical forest.

    Science.gov (United States)

    Hérault, Bruno; Beauchêne, Jacques; Muller, Félix; Wagner, Fabien; Baraloto, Christopher; Blanc, Lilian; Martin, Jean-Michel

    2010-09-01

    Variation of dead wood decay rates among tropical trees remains one source of uncertainty in global models of the carbon cycle. Taking advantage of a broad forest plot network surveyed for tree mortality over a 23-year period, we measured the remaining fraction of boles from 367 dead trees of 26 neotropical species varying widely in wood density (0.23-1.24 g cm(-3)) and tree circumference at death (31.5-272.0 cm). We modeled decay rates within a Bayesian framework, assuming a first-order differential equation for the decomposition process, and tested for the effects of forest management (selective logging vs. unexploited), mode of death (standing vs. downed) and topographical level (bottomlands vs. hillsides vs. hilltops) on wood decay rates. The general decay model predicts the observed remaining fraction of dead wood (R2 = 60%) with only two biological predictors: tree circumference at death and wood specific density. Neither selective logging nor local topography had a differential effect on wood decay rates. Including the mode of death in the model revealed that standing dead trees decomposed faster than downed dead trees, but the gain in model accuracy remains rather marginal. Overall, these results suggest that the release of carbon from tropical dead trees to the atmosphere can be simply estimated using tree circumference at death and wood density.
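
    The model structure is first-order decay, M(t)/M(0) = exp(-k t), with k depending on wood density and circumference at death. A toy sketch with placeholder coefficients (not the paper's posterior estimates):

      import numpy as np

      # First-order dead-wood decay: dM/dt = -k M, so M(t)/M(0) = exp(-k t).
      # log k is modeled from wood density and circumference at death;
      # the coefficients below are placeholders, not fitted values.
      def remaining_fraction(years, wood_density, circumference_cm,
                             b0=-1.0, b_density=-1.5, b_circ=-0.003):
          k = np.exp(b0 + b_density * wood_density + b_circ * circumference_cm)
          return np.exp(-k * years)

      print(remaining_fraction(10, wood_density=0.6, circumference_cm=90))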

  8. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  9. A Practical and Robust Execution Time-Frame Procedure for the Multi-Mode Resource-Constrained Project Scheduling Problem with Minimal and Maximal Time Lags

    Directory of Open Access Journals (Sweden)

    Angela Hsiang-Ling Chen

    2016-09-01

    Full Text Available Modeling and optimizing organizational processes, such as the one represented by the Resource-Constrained Project Scheduling Problem (RCPSP), improve outcomes. Based on assumptions and simplifications, this model tackles the allocation of resources so that organizations can continue to generate profits and reinvest in future growth. Nonetheless, despite all of the research dedicated to solving the RCPSP and its multi-mode variations, there is no standardized procedure to guide project-management practitioners in their scheduling tasks. This is mainly because many of the proposed approaches are either based on unrealistic/oversimplified scenarios or propose solution procedures that are not easily applicable or even feasible in real-life situations. In this study, we solve a more true-to-life and complex model, the multi-mode RCPSP with minimal and maximal time lags (MRCPSP/max). The complexity of the model solved is presented, and the practicality of the proposed approach is justified, since it depends only on information that is available for every project regardless of its industrial context. The results confirm that it is possible to determine a robust makespan and to calculate an execution time-frame with gaps lower than 11% between their lower and upper bounds. In addition, in many instances, the solved lower bound obtained was equal to the best-known optimum.

  10. Continuous-Time Portfolio Selection and Option Pricing under Risk-Minimization Criterion in an Incomplete Market

    Directory of Open Access Journals (Sweden)

    Xinfeng Ruan

    2013-01-01

    Full Text Available We study option pricing under a risk-minimization criterion in an incomplete market where the dynamics of the risky underlying asset are governed by a jump-diffusion equation. We obtain the Radon-Nikodym derivative of the minimal martingale measure and a partial integro-differential equation (PIDE) for the European call option. In a special case, we obtain the exact solution for the European call option by Fourier transform methods. Finally, we employ the pricing kernel to calculate the optimal portfolio selection by martingale methods.

  11. The deadly progress

    International Nuclear Information System (INIS)

    Drewermann, E.

    1981-01-01

    The speed at which modern man destroys nature is by now breathtaking. The language of the facts is clear. The author of the present study considers a radical change in the present conception of the world and of the human being to be the only salvation from total catastrophe. The 'crisis of the environment' is, however, not a technical but a religious and intellectual problem. This candid and provocative book spares neither the misguided driving forces of Christianity, with its anthropocentric philosophy, nor the modern type of man, atheistic in his basic nature. (orig.) [de

  12. The living dead

    DEFF Research Database (Denmark)

    Petersen, Morten Rask

    2017-01-01

    This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation in a leisure-time activity. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge. These transformative experiences appear in different categories, where students find value in the biological content, its societal relevance, and their individual identity. Finally, there is a discussion on how to transfer characteristics from the setting of this study to other educational settings.

  13. 7 CFR 322.29 - Dead bees.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), Section 322.29; DEPARTMENT OF AGRICULTURE; BEES, BEEKEEPING BYPRODUCTS, AND BEEKEEPING EQUIPMENT; Importation and Transit of Restricted Articles. § 322.29 Dead bees. (a) Dead bees imported into or transiting the United States must be...

  14. Nonlinear dead water resistance at subcritical speed

    Science.gov (United States)

    Grue, John

    2015-08-01

    The dead water resistance F1 = (1/2) Cdw ρ S U^2 (ρ fluid density, U ship speed, S wetted body surface, Cdw resistance coefficient) on a ship moving at subcritical speed along the upper layer of a two-layer fluid is calculated by a strongly nonlinear method assuming potential flow in each layer. The ship dimensions correspond to those of the Polar ship Fram. The ship draught, b0, is varied in the range 0.25h0-0.9h0 (h0 the upper layer depth). The calculations show that Cdw/(b0/h0)^2 depends on the Froude number only, in the range close to critical speed, Fr = U/c0 ≈ 0.875-1.125 (c0 the linear internal long-wave speed), irrespective of the ship draught. The function Cdw/(b0/h0)^2 attains a maximum at a subcritical Froude number that depends on the draught. The maximum of Cdw/(b0/h0)^2 becomes 0.15 for Fr = 0.76, b0/h0 = 0.9, and 0.16 for Fr = 0.74, b0/h0 = 1, where the latter extrapolated value of the dead water resistance coefficient is about 60 times higher than the frictional drag coefficient and relevant for the historical dead water observations. The nonlinear Cdw significantly exceeds linear theory (Fr < 0.85). The ship-generated waves have a wave height comparable to the upper layer depth. Calculations of three-dimensional wave patterns at critical speed compare well to available laboratory experiments. Upstream solitary waves are generated in a wave tank of finite width when the layer depths differ, causing an oscillation of the force. In a wide ocean, a very wide wave system develops at critical speed. The force approaches a constant value for increasing time.
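
    As a worked example of the drag law quoted above, the sketch below evaluates F1 = (1/2)·Cdw·ρ·S·U^2 for round illustrative numbers; none of the values are taken from the paper.

    rho = 1025.0   # sea-water density, kg/m^3 (illustrative)
    S = 350.0      # wetted body surface, m^2 (illustrative)
    U = 2.0        # ship speed, m/s (illustrative)
    Cdw = 0.01     # dead-water resistance coefficient (illustrative)

    F1 = 0.5 * Cdw * rho * S * U**2
    print(f"dead-water resistance F1 = {F1:.0f} N")   # ~7175 N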

  15. Electronic fingerprinting of the dead.

    Science.gov (United States)

    Rutty, G N; Stringer, K; Turk, E E

    2008-01-01

    To date, a number of methods exist for the capture of fingerprints from cadavers that can then be used in isolation as a primary method for the identification of the dead. We report the use of a handheld, mobile wireless unit used in conjunction with a personal digital assistant (PDA) device for the capture of fingerprints from the dead. We also consider a handheld single-digit fingerprint scanner that utilises a USB laptop connection for the electronic capture of cadaveric fingerprints. Both are single-operator units that, if ridge detail is preserved, can collect a 10-set of finger pad prints in approximately 45 and 90 s, respectively. We present our observations on the restrictions as to when such devices can be used with cadavers. We do, however, illustrate that the images are of sufficient quality to allow positive identification from finger pad prints of the dead. With the development of mobile, handheld, biometric, PDA-based units for the police, we hypothesize that, under certain circumstances, devices such as these could be used for the accelerated acquisition of fingerprint identification data with the potential for rapid near-patient identification in the future.

  16. Assessment and management of dead-wood habitat

    Science.gov (United States)

    Hagar, Joan

    2007-01-01

    The Bureau of Land Management (BLM) is in the process of revising its resource management plans for six districts in western and southern Oregon as the result of the settlement of a lawsuit brought by the American Forest Resource Council. A range of management alternatives is being considered and evaluated, including at least one that will minimize reserves on O&C lands. In order to develop the bases for evaluating management alternatives, the agency needs to derive a reasonable range of objectives for key issues and resources. Dead-wood habitat for wildlife has been identified as a key resource for which decision-making tools and techniques need to be refined and clarified. Under the Northwest Forest Plan, reserves were to play an important role in providing habitat for species associated with dead wood (U.S. Department of Agriculture Forest Service and U.S. Department of the Interior Bureau of Land Management, 1994). Thus, the BLM needs to: 1) address the question of how dead wood will be provided if reserves are not included as a management strategy in the revised Resource Management Plan, and 2) be able to evaluate the effects of alternative land management approaches. Dead wood has become an increasingly important conservation issue in managed forests, as awareness of its function in providing wildlife habitat and in basic ecological processes has dramatically increased over the last several decades (Laudenslayer et al., 2002). A major concern of forest managers is providing dead-wood habitat for terrestrial wildlife. Wildlife in Pacific Northwest forests have evolved with disturbances that create large amounts of dead wood; so it is not surprising that many species are closely associated with standing (snags) or down dead wood. In general, the occurrence or abundance of one-quarter to one-third of forest-dwelling vertebrate wildlife species is strongly associated with the availability of suitable dead-wood habitat (Bunnell et al., 1999; Rose et al., 2001).

  17. Real time implementation of viable torque and flux controllers and torque ripple minimization algorithm for induction motor drive

    International Nuclear Information System (INIS)

    Vasudevan, M.; Arumugam, R.; Paramasivam, S.

    2006-01-01

    Field-oriented control (FOC) and direct torque control (DTC) are becoming the industrial standards for induction motor torque and flux control. This paper aims to contribute a detailed comparison between these two control techniques, emphasizing their advantages and disadvantages. The performance of the two control schemes is evaluated in terms of torque and flux ripple and their transient response to step variations of the torque command. Moreover, a new torque and flux ripple minimization technique is proposed to improve the performance of the DTC drive. The analysis is supported by experimental results.

  18. Minimization of the external heating power by long fusion power rise-up time for self-ignition access in the helical reactor FFHR2m

    International Nuclear Information System (INIS)

    Mitarai, O.; Sagara, A.; Chikaraishi, H.; Imagawa, S.; Shishkin, A.A.; Motojima, O.

    2006-10-01

    Minimizing the external heating power needed to access self-ignition is advantageous because it increases reactor design flexibility and reduces the capital and operating costs of the plasma heating device in a helical reactor. In this work we find that a larger density limit leads to a smaller required confinement enhancement factor, that a lower density-limit margin reduces the external heating power, and that a fusion power rise-up time of over 300 s makes it possible to minimize the heating power. While the fusion power rise-up time in a tokamak is limited by the OH transformer flux or the current drive capability, any fusion power rise-up time can be employed in a helical reactor to reduce the thermal stresses of the blanket and shields, because the confinement field is generated by the external helical coils. (author)

  19. Are Brain Dead Individuals Dead? Grounds for Reasonable Doubt.

    Science.gov (United States)

    Brugger, E Christian

    2016-06-01

    According to the biological definition of death, a human body that has not lost the capacity to holistically organize itself is the body of a living human individual. Reasonable doubt against the conclusion that it has lost the capacity exists when the body appears to express it and no evidence to the contrary is sufficient to rule out reasonable doubt against the conclusion that the apparent expression is a true expression (i.e., when the conclusion that what appears to be holistic organization is in fact holistic organization remains a reasonable explanatory hypothesis in light of the best evidence to the contrary). This essay argues that the evidence and arguments against the conclusion that the signs of complex bodily integration exhibited in ventilated brain dead bodies are true expressions of somatic integration are unpersuasive; that is, they are not adequate to exclude reasonable doubt against the conclusion that BD bodies are dead. Since we should not treat as corpses what for all we know might be living human beings, it follows that we have an obligation to treat BD individuals as if they were living human beings. © The Author 2016. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. EVALUATION OF SETTING TIME OF MINERAL TRIOXIDE AGGREGATE AND BIODENTINE IN THE PRESENCE OF HUMAN BLOOD AND MINIMAL ESSENTIAL MEDIA - AN IN VITRO STUDY

    Directory of Open Access Journals (Sweden)

    Gopi Krishna Reddy Moosani

    2017-12-01

    Full Text Available BACKGROUND The aim of this study was to compare the ability of MTA and Biodentine to set in the presence of human blood and minimal essential media. MATERIALS AND METHODS Eighty 1 x 3 inch plexiglass sheets were taken. In each sheet, 10 wells were created, and the wells were divided into 10 groups. Odd-numbered groups were filled with MTA and even-numbered groups with Biodentine. Of these, 4 were control groups and the remaining 6 were experimental groups (i.e., blood, minimal essential media, and blood plus minimal essential media). Each block was submerged for 4, 5, 6, 8, 24, 36, and 48 hours in an experimental liquid at 37°C with 100% humidity. RESULTS The setting times of the two materials contrasted sharply. The majority of the MTA samples had not set by 24 hours, but by 36 hours all MTA samples had set, whereas all Biodentine samples had set by 6 hours. There is a significant difference in setting time between MTA and Biodentine. CONCLUSION This outcome calls into question the setting time given by each respective manufacturer. Furthermore, despite Biodentine being marketed as a direct competitor to MTA with superior handling properties, MTA consistently set at a faster rate under the conditions of this study.

  1. Monitoring the Dead Sea Region by Multi-Parameter Stations

    Science.gov (United States)

    Mohsen, A.; Weber, M. H.; Kottmeier, C.; Asch, G.

    2015-12-01

    The Dead Sea region is an exceptional ecosystem whose seismic activity has influenced all facets of its development, from groundwater availability to human evolution. Israelis, Palestinians, and Jordanians living in the Dead Sea region are exposed to severe earthquake hazard. Repeated large earthquakes (e.g., 1927, magnitude 6.0; Ambraseys, 2009) have shaken the whole Dead Sea region, proving that earthquake hazard knows no borders and that damaging seismic events can strike at any time. Combined with the high vulnerability of cities in the region and the enormous concentration of historical assets, this natural hazard results in an extreme earthquake risk. Thus, an integration of earthquake parameters at all scales (size and time) and their combination with data on infrastructure are needed, with the specific aim of providing a state-of-the-art seismic hazard assessment for the Dead Sea region as well as a first quantitative estimate of vulnerability and risk. A strong motivation for our research is the lack of reliable multi-parameter ground-based geophysical information on earthquakes in the Dead Sea region. The proposed setup of a number of observatories with on-line data access will enable the present-day seismicity and deformation pattern in the Dead Sea region to be derived. The first multi-parameter stations were installed in Jordan, Israel, and Palestine for long-term monitoring. All partners will jointly use these locations. All stations will have an open data policy, with the Deutsches GeoForschungsZentrum (GFZ, Potsdam, Germany) providing the hardware and software for real-time data transmission via satellite to Germany, where all partners can access the data via standard data protocols.

  2. Water quality modeling in the dead end sections of drinking water distribution networks.

    Science.gov (United States)

    Abokifa, Ahmed A; Yang, Y Jeffrey; Lo, Cynthia S; Biswas, Pratim

    2016-02-01

    Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid depletion of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte Carlo ensembles was carried out to investigate the influence of spatial and temporal variations in flow demands on the simulation accuracy. A set of three correction factors was analytically derived to adjust residence time, dispersion rate, and wall demand to overcome the simulation error caused by the spatial aggregation approximation. The current model results show better agreement with field-measured concentrations of a conservative fluoride tracer and free chlorine disinfectant than the simulations of recent advection-dispersion-reaction models published in the literature. Accuracy of the simulated
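
    The stochastic demand generator described above can be illustrated with a standard construction: pulse arrival times drawn from a non-homogeneous Poisson process by Lewis-Shedler thinning, each pulse given a random duration and intensity. Everything below (the diurnal rate shape, pulse statistics, names) is an assumption for illustration, not the paper's code.

    import math, random

    def diurnal_rate(t_hours):
        # Hypothetical arrival rate (pulses/hour) peaking during the day.
        return 2.0 + 1.5 * math.sin(math.pi * (t_hours - 6.0) / 12.0) ** 2

    def simulate_pulses(horizon_h=24.0, lam_max=3.5, seed=1):
        # Lewis-Shedler thinning for a non-homogeneous Poisson process;
        # lam_max must bound diurnal_rate everywhere on the horizon.
        random.seed(seed)
        t, pulses = 0.0, []
        while True:
            t += random.expovariate(lam_max)                  # candidate arrival
            if t > horizon_h:
                return pulses
            if random.random() < diurnal_rate(t) / lam_max:   # accept/reject
                duration_h = random.expovariate(1.0 / 0.02)   # ~72 s mean pulse
                intensity = random.uniform(0.05, 0.20)        # L/s, assumed
                pulses.append((t, duration_h, intensity))

    print(len(simulate_pulses()), "pulses in 24 h")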

  3. Net carbon flux of dead wood in forests of the Eastern US.

    Science.gov (United States)

    Woodall, C W; Russell, M B; Walters, B F; D'Amato, A W; Fraver, S; Domke, G M

    2015-03-01

    Downed dead wood (DDW) in forest ecosystems is a C pool whose net flux is governed by a complex of natural and anthropogenic processes and is critical to the management of the entire forest C pool. As empirical examination of DDW C net flux has rarely been conducted across large scales, the goal of this study was to use a remeasured inventory of DDW C and ancillary forest attributes to assess C net flux across forests of the Eastern US. Stocks associated with large fine woody debris (diameter 2.6-7.6 cm) decreased over time (-0.11 Mg ha(-1) year(-1)), while stocks of larger-sized coarse DDW increased (0.02 Mg ha(-1) year(-1)). Stocks of total DDW C decreased (-0.14 Mg ha(-1) year(-1)), while standing dead and live tree stocks both increased, 0.01 and 0.44 Mg ha(-1) year(-1), respectively. The spatial distribution of DDW C stock change was highly heterogeneous with random forests model results indicating that management history, live tree stocking, natural disturbance, and growing degree days only partially explain stock change. Natural disturbances drove substantial C transfers from the live tree pool (≈-4 Mg ha(-1) year(-1)) to the standing dead tree pool (≈3 Mg ha(-1) year(-1)) with only a minimal increase in DDW C stocks (≈1 Mg ha(-1) year(-1)) in lower decay classes, suggesting a delayed transfer of C to the DDW pool. The assessment and management of DDW C flux is complicated by the diversity of natural and anthropogenic forces that drive their dynamics with the scale and timing of flux among forest C pools remaining a large knowledge gap.

  4. Minimally invasive surgical approaches offer earlier time to adjuvant chemotherapy but not improved survival in resected pancreatic cancer.

    Science.gov (United States)

    Mirkin, Katelin A; Greenleaf, Erin K; Hollenbeak, Christopher S; Wong, Joyce

    2018-05-01

    Pancreatic surgery encompasses complex operations with significant potential morbidity. Greater experience in minimally invasive surgery (MIS) has allowed resections to be performed laparoscopically and robotically. This study evaluates the impact of surgical approach in resected pancreatic cancer. The National Cancer Data Base (2010-2012) was reviewed for patients with stages 1-3 resected pancreatic carcinoma. Open approaches were compared to MIS. A sub-analysis was then performed comparing robotic and laparoscopic approaches. Of the 9047 patients evaluated, surgical approach was open in 7511 (83%), laparoscopic in 992 (11%), and robotic in 131 (1%). The laparoscopic and robotic conversion rates to open were 28% (n = 387) and 17% (n = 26), respectively. Compared to open, MIS was associated with more distal resections (13.5% and 24.3%, respectively) and offered significantly shorter length of stay (LOS) in all resection types. Multivariate analysis demonstrated no survival benefit for any MIS approach relative to open (all, p > 0.05). When adjusted for patient, disease, and treatment characteristics, time to adjuvant chemotherapy (TTC) was not an independent prognostic factor (HR 1.09, p = 0.084). MIS appears to offer comparable surgical oncologic benefit with improved LOS and shorter TTC. This effect, however, was not associated with improved survival.

  5. Minimally disruptive medicine is needed for patients with multimorbidity: time to develop computerised medical record systems to meet this requirement

    Directory of Open Access Journals (Sweden)

    Peter Schattner

    2015-02-01

    Full Text Available Background: Minimally disruptive medicine (MDM) is proposed as a method for more appropriately managing people with multiple chronic diseases. Much clinical management is currently single-disease focussed, with people with multimorbidity being managed according to multiple single-disease guidelines. Current initiatives to improve care include education about individual conditions and creating an environment where multiple guidelines might be simultaneously supported. The patient-centred medical home (PCMH) is an example of the latter. However, educational programmes and the PCMH may increase the burden on patients. Problem: The cumulative workload for patients in managing the impact of multiple disease-specific guidelines has only relatively recently been recognised. There is an intellectual vacuum as to how best to manage multimorbidity and how informatics might support implementing MDM. There is currently no alternative to multiple single-condition-specific guidelines and a lack of certainty, should the treatment burden need to be reduced, as to which guideline might be 'dropped'. Action: The best information about multimorbidity is recorded in primary care computerised medical record (CMR) systems and in an increasing number of integrated care organisations. CMR systems have the potential to flag individuals who might be in greatest need. However, CMR systems may also provide insights into whether there are ameliorating factors that might make it easier for them to be resilient to the burden of care. Data from such CMR systems might be used to develop the evidence base about how to better manage multimorbidity. Conclusions: There is potential for these information systems to help reduce the management burden on patients and clinicians. However, substantial investment in research-driven CMR development is needed if we are to achieve this.

  6. Eating the dead in Madagascar | Campbell | South African Medical ...

    African Journals Online (AJOL)

    They may be supported in societies under stress or in times of famine, to reflect aggression and antisocial behaviour (in cases where the bodies of enemies killed in battle or people who have harmed the family are eaten), or to honour a dead kinsman. It was, for example, noted in Madagascar during the imperial campaigns ...

  7. Down dead wood statistics for Maine timberlands, 1995

    Science.gov (United States)

    Linda S. Heath; David C. Chojnacky; David C. Chojnacky

    2001-01-01

    Down dead wood (DDW) is important for its role in carbon and nutrient cycling, carbon sequestration, wildfire behavior, plant reproduction, and wildlife habitat. DDW was measured for the first time during a forest inventory of Maine by the USDA Forest Service in 1994-1996. Pieces greater than 3 feet long and greater than 3 inches in diameter at point of intersection...

  8. Literary Genres in Poetic Texts from the Dead Sea Scrolls

    Science.gov (United States)

    Pickut, William Douglas

    2017-01-01

    Among the texts of the Dead Sea Scrolls, there are four literary compositions that bear the superscriptional designations shir and mizmor. These designations correspond directly to superscriptional designations provided many times in both the now-canonical Psalter and the various witnesses to those texts unearthed at Qumran. On its face, this fact…

  9. Book Review Lifeblood: How to Change the World, One Dead ...

    African Journals Online (AJOL)

    Book Review. Lifeblood: How to Change the World, One Dead Mosquito at a Time. By Alex Perry (2011). Melissa Raemaekers. Pp xiv + 219. R210. Picador Africa, Pan Macmillan, South Africa. 2011. ISBN 978-1-77010-146-3. February 2012, Vol. 102, No. 2 SAMJ.

  10. Decision of special monitoring time to minimize the difference of the committed effective dose evaluated from the different AMAD

    International Nuclear Information System (INIS)

    Lee, J. I.; Lee, T. Y.; Jang, S. Y.; Lee, J. K.

    2003-01-01

    The committed effective doses (CEDs) per measured unit of activity in the bioassay compartments at any time (t) after an acute intake by inhalation of a radionuclide with different particle sizes (AMAD) were calculated and compared. The results show that the relative difference between the CEDs evaluated for different AMADs depends on the radionuclide, the bioassay compartment, and the time (t) after intake. A special monitoring time that excludes or reduces the effect of the AMAD was therefore determined and is presented for the evaluation of CEDs following an acute intake by inhalation of a radionuclide. If special monitoring is performed during this special time after intake, the relative difference in the evaluated CEDs resulting from the AMAD can be excluded or reduced.

  11. Impact of warm ischemia time on the change of split renal function after minimally invasive partial nephrectomy in Taiwanese patients

    Directory of Open Access Journals (Sweden)

    Hung-Keng Li

    2015-01-01

    Conclusion: Split renal function (SRF) is more sensitive for postoperative follow-up than estimated glomerular filtration rate (eGFR). Longer warm ischemia time is associated with poorer postoperative renal function. Robotic partial nephrectomy (RPN) is a safe and feasible alternative to laparoscopic partial nephrectomy (LPN).

  12. Taxonomic minimalism.

    Science.gov (United States)

    Beattie, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring, and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey, but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  13. Quantification of Listeria monocytogenes in minimally processed leafy vegetables using a combined method based on enrichment and 16S rRNA real-time PCR.

    Science.gov (United States)

    Aparecida de Oliveira, Maria; Abeid Ribeiro, Eliana Guimarães; Morato Bergamini, Alzira Maria; Pereira De Martinis, Elaine Cristina

    2010-02-01

    Modern lifestyles have markedly changed eating habits worldwide, with an increasing demand for ready-to-eat foods such as minimally processed fruits and leafy greens. Packaging and storage conditions of those products may favor the growth of psychrotrophic bacteria, including the pathogen Listeria monocytogenes. In this work, minimally processed leafy vegetable samples (n = 162) from the retail market of Ribeirão Preto, São Paulo, Brazil, were tested for the presence or absence of Listeria spp. by the immunoassay Listeria Rapid Test (Oxoid). Two L. monocytogenes-positive samples and six artificially contaminated samples of minimally processed leafy vegetables were evaluated by the Most Probable Number (MPN) technique, with detection by the classical culture method and also by the culture method combined with real-time PCR (RTi-PCR) targeting 16S rRNA genes of L. monocytogenes. Positive MPN enrichment tubes were analyzed by RTi-PCR with primers specific for L. monocytogenes using the commercial preparation ABSOLUTE QPCR SYBR Green Mix (ABgene, UK). The real-time PCR assay presented good exclusivity and inclusivity results, and no statistically significant difference was found in comparison with the conventional culture method (at the 0.05 significance level). Moreover, RTi-PCR was fast and easy to perform, with MPN results obtained in ca. 48 h, compared to 7 days for the conventional method.

  14. Integer batch scheduling problems for a single-machine with simultaneous effect of learning and forgetting to minimize total actual flow time

    Directory of Open Access Journals (Sweden)

    Rinto Yusriski

    2015-09-01

    Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing times due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize total actual flow time, defined as the total interval time between the arrival times of parts in all respective batches and their common due date. There are two proposed algorithms to solve the problem. The first is developed using the Integer Composition method and produces an optimal solution. Since the problem can be solved by the first algorithm only in a worst-case time complexity of O(n·2^(n-1)), this research proposes a second algorithm: a heuristic based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.
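
    The Integer Composition method named above rests on the fact that every ordered split of n identical parts into batches is an integer composition of n, of which there are 2^(n-1), hence the O(n·2^(n-1)) worst case. A small self-contained sketch (ours, for illustration only):

    def compositions(n):
        # Yield every integer composition of n (ordered batch-size vectors).
        if n == 0:
            yield ()
            return
        for first in range(1, n + 1):
            for rest in compositions(n - first):
                yield (first,) + rest

    n = 4
    batchings = list(compositions(n))
    print(len(batchings) == 2 ** (n - 1))   # True: 8 compositions of 4
    print(batchings)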

  15. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    Science.gov (United States)

    Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.

    2016-02-01

    In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on the Lagrange Relaxation. The effectiveness of the proposed algorithm is determined by comparing the resulting solutions of the algorithm to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference among the solutions is 0.05%.

  16. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    International Nuclear Information System (INIS)

    Yusriski, R; Sukoyo; Samadhi, T M A A; Halim, A H

    2016-01-01

    In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on the Lagrange Relaxation. The effectiveness of the proposed algorithm is determined by comparing the resulting solutions of the algorithm to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference among the solutions is 0.05%. (paper)

  17. Real-time detection system for tumor localization during minimally invasive surgery for gastric and colon cancer removal: In vivo feasibility study in a swine model.

    Science.gov (United States)

    Choi, Won Jung; Moon, Jin-Hee; Min, Jae Seok; Song, Yong Keun; Lee, Seung A; Ahn, Jin Woo; Lee, Sang Hun; Jung, Ha Chul

    2018-03-01

    During minimally invasive surgery (MIS), it is impossible to directly detect marked clips around tumors via palpation. Therefore, we developed a novel method and device using Radio Frequency IDentification (RFID) technology to detect the position of clips during minimally invasive gastrectomy or colectomy. The feasibility of the RFID-based detection system was evaluated in an animal experiment consisting of seven swine. The primary outcome was the successful detection of the location of RFID clips in the stomach and colon. The secondary outcome measures were detection time (the time taken for intracorporeal detection of the RFID clip) and accuracy (the distance between the RFID clip and the detected site). A total of 25 detection attempts (14 in the stomach and 11 in the colon) using the RFID antenna had a 100% success rate. The median detection time was 32.5 s (range, 15-119 s) for the stomach and 28.0 s (range, 8-87 s) for the colon. The median detection distance was 6.5 mm (range, 4-18 mm) for the stomach and 6.0 mm (range, 3-13 mm) for the colon. We demonstrated favorable results for an RFID system that detects the position of gastric and colon tumors in real time during MIS. © 2017 Wiley Periodicals, Inc.

  18. Development of a minimization instrument for allocation of a hospital-level performance improvement intervention to reduce waiting times in Ontario emergency departments

    Directory of Open Access Journals (Sweden)

    Anderson Geoff

    2009-06-01

    Full Text Available Abstract Background: Rigorous evaluation of an intervention requires that its allocation be unbiased with respect to confounders; this is especially difficult in complex, system-wide healthcare interventions. We developed a short survey instrument to identify factors for a minimization algorithm for the allocation of a hospital-level intervention to reduce emergency department (ED) waiting times in Ontario, Canada. Methods: Potential confounders influencing the intervention's success were identified by literature review and grouped by healthcare-setting-specific change stages. An international multi-disciplinary (clinical, administrative, decision maker, management) panel evaluated these factors in a two-stage modified-Delphi and nominal group process based on four domains: change readiness, evidence base, face validity, and clarity of definition. Results: An original set of 33 factors was identified from the literature. The panel reduced the list to 12 in the first-round survey. In the second survey, experts scored each factor according to the four domains; summary scores and consensus discussion resulted in the final selection and measurement of four hospital-level factors to be used in the minimization algorithm: improved patient flow as a hospital's leadership priority; physicians' receptiveness to organizational change; efficiency of bed management; and physician incentives supporting the change goal. Conclusion: We developed a simple tool designed to gather data from senior hospital administrators on factors likely to affect the success of a hospital patient-flow improvement intervention. A minimization algorithm will ensure balanced allocation of the intervention with respect to these factors in study hospitals.
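
    For readers unfamiliar with minimization, the sketch below shows a generic Pocock-Simon-style allocation over the four hospital-level factors selected by the panel. The imbalance rule is the standard textbook one and the factor levels are invented; the abstract does not specify the authors' exact algorithm.

    from collections import defaultdict

    FACTORS = ["flow_priority", "physician_receptiveness",
               "bed_management_efficiency", "physician_incentives"]
    ARMS = ("intervention", "control")
    counts = {f: defaultdict(lambda: dict.fromkeys(ARMS, 0)) for f in FACTORS}

    def allocate(hospital):
        # Assign the arm that minimizes the summed marginal imbalance
        # across the four factor levels of the incoming hospital.
        def imbalance(arm):
            other = ARMS[1 - ARMS.index(arm)]
            return sum(abs(counts[f][hospital[f]][arm] + 1
                           - counts[f][hospital[f]][other]) for f in FACTORS)
        arm = min(ARMS, key=imbalance)
        for f in FACTORS:
            counts[f][hospital[f]][arm] += 1
        return arm

    print(allocate({f: "high" for f in FACTORS}))   # first site -> intervention
    print(allocate({f: "high" for f in FACTORS}))   # balancing -> control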

  19. Development of a minimization instrument for allocation of a hospital-level performance improvement intervention to reduce waiting times in Ontario emergency departments.

    Science.gov (United States)

    Leaver, Chad Andrew; Guttmann, Astrid; Zwarenstein, Merrick; Rowe, Brian H; Anderson, Geoff; Stukel, Therese; Golden, Brian; Bell, Robert; Morra, Dante; Abrams, Howard; Schull, Michael J

    2009-06-08

    Rigorous evaluation of an intervention requires that its allocation be unbiased with respect to confounders; this is especially difficult in complex, system-wide healthcare interventions. We developed a short survey instrument to identify factors for a minimization algorithm for the allocation of a hospital-level intervention to reduce emergency department (ED) waiting times in Ontario, Canada. Potential confounders influencing the intervention's success were identified by literature review and grouped by healthcare-setting-specific change stages. An international multi-disciplinary (clinical, administrative, decision maker, management) panel evaluated these factors in a two-stage modified-Delphi and nominal group process based on four domains: change readiness, evidence base, face validity, and clarity of definition. An original set of 33 factors was identified from the literature. The panel reduced the list to 12 in the first-round survey. In the second survey, experts scored each factor according to the four domains; summary scores and consensus discussion resulted in the final selection and measurement of four hospital-level factors to be used in the minimization algorithm: improved patient flow as a hospital's leadership priority; physicians' receptiveness to organizational change; efficiency of bed management; and physician incentives supporting the change goal. We developed a simple tool designed to gather data from senior hospital administrators on factors likely to affect the success of a hospital patient-flow improvement intervention. A minimization algorithm will ensure balanced allocation of the intervention with respect to these factors in study hospitals.

  20. Raising the Dead without a Red Sea-Dead Sea project? Hydro-economics and governance

    Directory of Open Access Journals (Sweden)

    D. E. Rosenberg

    2011-04-01

    Full Text Available Seven decades of extractions have dramatically reduced Jordan River flows, lowered the Dead Sea level, opened sinkholes, and caused other environmental problems. The fix Jordan, Israel, and the Palestinians propose would build an expensive multipurpose conveyance project from the Red Sea to the Dead Sea that would also generate hydropower and desalinate water. This paper compares the Red-Dead project to alternatives that may also raise the Dead Sea level. Hydro-economic model results for the Jordan-Israel-Palestinian inter-tied water systems show that two restoration alternatives are more economically viable than the proposed Red-Dead project. Many decentralized new supply, wastewater reuse, conveyance, conservation, and leak reduction projects and programs in each country can together increase economic benefits and reliably deliver up to 900 MCM yr−1 to the Dead Sea. Similarly, a smaller Red-Dead project that only generates hydropower can deliver large flows to the Dead Sea when the sale price of generated electricity is sufficiently high. However, for all restoration options, net benefits fall and water scarcity rises as flows to the Dead Sea increase. This finding suggests that (i) each country has no individual incentive to return water to the Dead Sea, and (ii) outside institutions that seek to raise the Dead Sea must also offer countries direct incentives to deliver water to the Sea, besides building the countries new infrastructure.

  1. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    Science.gov (United States)

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical, and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal, and mechanical properties as three iteratively solved finite element models. To provide sub-finite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Evaluating the coefficient of thermal expansion using time periods of minimal thermal gradient for a temperature driven structural health monitoring

    Science.gov (United States)

    Reilly, J.; Abdel-Jaber, H.; Yarnold, M.; Glisic, B.

    2017-04-01

    Structural health monitoring aims to characterize the performance of a structure from a combination of recorded sensor data and analytic techniques. Many methods are concerned with quantifying the elastic response of the structure, treating temperature changes as noise in the analysis. While these elastic profiles do capture a portion of structural behavior, thermal loads on a structure can induce strains comparable to those from elastic loads. Understanding this relationship between the temperature of the structure and the resultant strain and displacement can provide in-depth knowledge of the structural condition. A necessary parameter for this form of analysis is the coefficient of thermal expansion (CTE). The CTE of a material relates the amount of expansion or contraction the material undergoes per degree change in temperature, and it can be determined from the temperature-strain relationship, provided the thermal strain can be isolated. With concrete, the actual in situ expansion with temperature often deviates from handbook CTE values because of thermally generated elastic strain, which complicates evaluation of the CTE. To accurately characterize the relationship between temperature and strain on a structure, the actual thermal behavior of the structure needs to be analyzed; the temperature-strain relationship can vary across different parts of a structure, depending on boundary conditions. For an unrestrained structure, the strain should be linearly related to the temperature change. Thermal gradients in a structure can distort this relationship, as they induce curvature and deplanations in the cross-section. This paper proposes a method that addresses these challenges in evaluating the CTE.
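
    One plausible reading of the proposed method (our sketch, with invented data, names, and thresholds): keep only the time samples whose measured thermal gradient is near zero, then take the CTE of an unrestrained member as the slope of strain versus temperature over those samples.

    import numpy as np

    def estimate_cte(temp_c, microstrain, gradient_c, tol=0.2):
        # Restrict the fit to minimal-thermal-gradient periods, then take the
        # strain-vs-temperature slope; valid for an unrestrained member.
        mask = np.abs(gradient_c) < tol
        slope, _ = np.polyfit(temp_c[mask], microstrain[mask], 1)
        return slope * 1e-6            # microstrain/degC -> strain/degC

    t = np.linspace(0.0, 24.0, 500)                    # one synthetic day
    temp = 15.0 + 10.0 * np.sin(2 * np.pi * t / 24)
    grad = 0.5 * np.cos(2 * np.pi * t / 24)            # gradient proxy
    noise = np.random.default_rng(2).normal(0.0, 1.0, t.size)
    strain = 10.0 * (temp - temp[0]) + noise           # ~10 microstrain/degC
    print(estimate_cte(temp, strain, grad))            # ~1.0e-5 per degC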

  3. Dead Zones in LX-17 and PBX 9502

    Energy Technology Data Exchange (ETDEWEB)

    Souers, P C; Andreski, H G; Batteux, J; Bratton, B; Cabacungan, C; Cook, III, C F; Fletcher, S; Garza, R; Grimsley, D; Handly, J; Hernandez, A; McMaster, P; Molitoris, J D; Palmer, R; Prindiville, J; Rodriguez, J; Schneberk, D; Wong, B; Vitello, P

    2005-09-06

    Pin and X-ray corner-turning data have been taken on ambient LX-17 and PBX 9502, and the results are listed in tables as an aid to future modeling. The results have been modeled at 4 zones/mm with a reactive flow approach that varies the burn rate as a function of pressure. A single rate format is used to simulate failure and detonation in different pressure regimes. A pressure cut-off must also be reached to initiate the burn. Corner-turning and failure are modeled using an intermediate pressure rate region, and detonation occurs at high pressure. The TATB booster is also modeled using reactive flow, and X-ray tomography is used to partition the ram-pressed hemisphere into five different density regions. The model reasonably fits the bare corner-turning experiment but predicts a smaller dead zone with steel confinement, in contradiction with experiment. The same model also calculates the confined and unconfined cylinder detonation velocities and predicts the failure of the unconfined cylinder at 3.75 mm radius. The PBX 9502 shows a smaller dead zone than LX-17. An old experiment that showed a large apparent dead zone in Comp B was repeated with X-ray transmission, and no dead zone was seen. This confirms the idea that a variable burn rate is the key to modeling. The model also produces initiation delays, which are shorter than those found in time-to-detonation.

  4. Dynamic optimization of a dead-end filtration trajectory: non-ideal cake filtration

    NARCIS (Netherlands)

    Blankert, B.; Kattenbelt, C.; Betlem, Bernardus H.L.; Roffel, B.

    2007-01-01

    A control strategy aimed at minimizing energy consumption is formulated for non-ideal dead-end cake filtration with an inside-out hollow fiber ultrafiltration membrane system. The non-ideal behavior was assumed to originate from cake compression, non-linear cake resistance and a variable pump

  5. Dead pixel replacement in LWIR microgrid polarimeters.

    Science.gov (United States)

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed, since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
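
    The modified nearest-neighbor idea can be sketched simply: in a 2x2 microgrid, the nearest pixels sharing a dead pixel's polarization orientation lie two rows or columns away, so those are averaged instead of the immediate neighbors. The sketch below is illustrative only and is not the authors' exact scheme.

    import numpy as np

    def replace_dead_pixels(img, dead_mask):
        # img: 2-D array; dead_mask: boolean array, True where a pixel is dead.
        out = img.astype(float).copy()
        rows, cols = img.shape
        for r, c in zip(*np.nonzero(dead_mask)):
            vals = []
            # Offsets of +/-2 land on pixels with the same micropolarizer
            # orientation in a 2x2 microgrid layout.
            for dr, dc in ((-2, 0), (2, 0), (0, -2), (0, 2)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not dead_mask[rr, cc]:
                    vals.append(img[rr, cc])
            if vals:
                out[r, c] = np.mean(vals)
        return out

    frame = np.arange(36, dtype=float).reshape(6, 6)
    mask = np.zeros_like(frame, dtype=bool)
    mask[3, 3] = True
    print(replace_dead_pixels(frame, mask)[3, 3])   # 21.0, from 4 same-pol pixels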

  6. Water quality modeling in the dead end sections of drinking water (Supplement)

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to...

  7. The continuous reaction time test for minimal hepatic encephalopathy validated by a randomized controlled multi-modal intervention-A pilot study

    DEFF Research Database (Denmark)

    Lauridsen, M M; Mikkelsen, S; Svensson, T

    2017-01-01

    Background: Minimal hepatic encephalopathy (MHE) is clinically undetectable, and the diagnosis requires psychometric tests. However, a lack of clarity exists as to whether the tests are in fact able to detect changes in cognition. Aim: To examine if the continuous reaction time test (CRT) can detect changes in cognition with anti-HE intervention in patients with cirrhosis and without clinically manifest hepatic encephalopathy (HE). Methods: Firstly, we conducted a reproducibility analysis and secondly measured the change in CRT induced by anti-HE treatment in a randomized controlled pilot study: we stratified 44 patients with liver cirrhosis and without clinically manifest HE according to a normal (n = 22) or abnormal (n = 22) CRT. Each stratum was then block randomized to receive multimodal anti-HE intervention (lactulose + branched-chain amino acids + rifaximin) or triple placebos for 3 months...

  8. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    Full Text Available This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with buffer capacity. First, the solution in the algorithm is represented as discrete job permutation to directly convert to active schedule. Then, we present a simple and effective scheme called best insertion for the employed bee and onlooker bee and introduce a combined local search exploring both insertion and swap neighborhood. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances, and computations and comparisons show that the proposed algorithm is not only capable of solving the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also capable of performing better than two recently proposed discrete artificial bee colony algorithms.
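
    The "best insertion" scheme named above can be sketched independently of the buffer-constrained schedule evaluator: remove one job and reinsert it at whichever position scores best. The objective below is a single-machine total-completion-time stand-in, not the limited-buffer flow-shop evaluation used in the paper; names and data are invented.

    def best_insertion(perm, idx, objective):
        # Remove perm[idx] and reinsert it at the best-scoring position.
        job = perm[idx]
        base = perm[:idx] + perm[idx + 1:]
        candidates = [base[:i] + [job] + base[i:] for i in range(len(base) + 1)]
        return min(candidates, key=objective)

    # Stand-in objective: total completion time on one machine (SPT-friendly).
    toy = lambda p: sum((len(p) - i) * t for i, t in enumerate(p))
    print(best_insertion([3, 1, 2], 0, toy))   # -> [1, 2, 3]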

  9. The Influence Of Dead Layer Effect On The Characteristics Of The High Purity Germanium P-Type Detector

    International Nuclear Information System (INIS)

    Ngo Quang Huy

    2011-01-01

    The present work reviews studies of the influence of the dead layer effect on the characteristics of a high-purity germanium (HPGe) p-type detector, obtained by the author and his colleagues in recent years. The object of study was the HPGe GC1518 detector-based gamma spectrometer of the Center for Nuclear Techniques, Ho Chi Minh City. The problems studied were: the modeling of an HPGe detector-based gamma spectrometer using the MCNP code; the method of determining the thickness of the dead layer by experimental measurements of gamma spectra and calculations using the MCNP code; the influence of material parameters and the dead layer on detector efficiency; the increase of dead layer thickness over the operating time of the GC1518 detector; the influence of the dead layer thickness increase on the decrease of detector efficiency; and the dead layer effect on the gamma spectra measured with the GC1518 detector. (author)

  10. 9 CFR 314.8 - Dead animal carcasses.

    Science.gov (United States)

    2010-01-01

    Title 9 (Animals and Animal Products), Section 314.8: Dead animal carcasses. (a) With the exception of dead livestock which have died en route and are received with livestock for slaughter at an official establishment, no dead animal or part of the carcass...

  11. Minimizing the Makespan for a Two-Stage Three-Machine Assembly Flow Shop Problem with the Sum-of-Processing-Time Based Learning Effect

    Directory of Open Access Journals (Sweden)

    Win-Chin Lin

    2018-01-01

    Full Text Available Two-stage production processes and their applications appear in many production environments. Job processing times are usually assumed to be constant throughout the process. In fact, the learning effect accrued from repetitive work experience, which leads to a reduction of actual job processing times, exists in many production environments. However, the issue of the learning effect is rarely addressed in solving two-stage assembly scheduling problems. Motivated by this observation, the author studies a two-stage three-machine assembly flow shop problem with a learning effect based on the sum of the processing times of already processed jobs, with the objective of minimizing the makespan (a worked example of this learning-effect form follows below). Because this problem is proved to be NP-hard, a branch-and-bound method embedded with some developed dominance propositions and a lower bound is employed to search for optimal solutions. A cloud theory-based simulated annealing (CSA) algorithm and an iterated greedy (IG) algorithm with four different local search methods are used to find near-optimal solutions for small and large numbers of jobs. The performances of the adopted algorithms are subsequently compared through computational experiments and nonparametric statistical analyses, including the Kruskal-Wallis test and a multiple comparison procedure.
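
    A compact worked example of a sum-of-processing-time-based learning effect (one common form; the paper's exact variant may differ): the job in position r takes p_[r]·(1 + sum of already-processed base times)^a with a < 0.

    def actual_times(base_times, a=-0.1):
        # a < 0: the more work already processed, the faster the next job runs.
        done, out = 0.0, []
        for p in base_times:
            out.append(p * (1.0 + done) ** a)
            done += p
        return out

    print(actual_times([5.0, 4.0, 6.0]))   # later jobs shrink: [5.0, ~3.34, ~4.77]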

  12. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
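
    A bare-bones iterated-greedy skeleton of the kind the study builds on (illustrative only; the real objective is the total family flow time with sequence-dependent set-ups, replaced here by a toy single-machine objective):

    import random

    def iterated_greedy(seq, objective, d=2, iters=200, seed=3):
        random.seed(seed)
        best = list(seq)
        for _ in range(iters):
            cur = best[:]
            # Destruction: remove d randomly chosen jobs.
            removed = [cur.pop(random.randrange(len(cur))) for _ in range(d)]
            # Construction: greedily reinsert each removed job at its best slot.
            for job in removed:
                cands = [cur[:i] + [job] + cur[i:] for i in range(len(cur) + 1)]
                cur = min(cands, key=objective)
            if objective(cur) < objective(best):
                best = cur
        return best

    toy = lambda p: sum((len(p) - i) * t for i, t in enumerate(p))
    print(iterated_greedy([4, 2, 5, 1, 3], toy))   # tends to [1, 2, 3, 4, 5]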

  13. Standardization and performance evaluation of "modified" and "ultrasensitive" versions of the Abbott RealTime HIV-1 assay, adapted to quantify minimal residual viremia.

    Science.gov (United States)

    Amendola, Alessandra; Bloisi, Maria; Marsella, Patrizia; Sabatini, Rosella; Bibbò, Angela; Angeletti, Claudio; Capobianchi, Maria Rosaria

    2011-09-01

    Numerous studies investigating the clinical significance of HIV-1 minimal residual viremia (MRV) suggest the potential utility of assays more sensitive than those routinely used to monitor viral suppression. However, currently available methods, based on different technologies, show great variation in detection limit and input plasma volume, and generally suffer from a lack of standardization. In order to establish new tools suitable for routine quantification of minimal residual viremia in patients under virological suppression, some modifications were introduced into the standard procedure of the Abbott RealTime HIV-1 assay, leading to a "modified" and an "ultrasensitive" protocol. The following modifications were introduced: a calibration curve extended towards low HIV-1 RNA concentrations; a 4-fold increase in sample volume by concentrating starting material; a reduced volume of internal control; and adoption of "open-mode" software for quantification. Analytical performances were evaluated using the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC). Both tests were applied to clinical samples from virologically suppressed patients. The "modified" and "ultrasensitive" configurations of the assay reached limits of detection of 18.8 cp/mL (95% CI: 11.1-51.0 cp/mL) and 4.8 cp/mL (95% CI: 2.6-9.1 cp/mL), respectively, with high precision and accuracy. In clinical samples from virologically suppressed patients, the "modified" and "ultrasensitive" protocols allowed HIV RNA to be detected and quantified in 12.7% and 46.6%, respectively, of samples reported as "not detectable", and in 70.0% and 69.5%, respectively, of samples reported as "detected" but below quantification by the standard protocol, making both configurations suitable for laboratories measuring MRV. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Objective measurement of minimal fat in normal skeletal muscles of healthy children using T2 relaxation time mapping (T2 maps) and MR spectroscopy.

    Science.gov (United States)

    Kim, Hee Kyung; Serai, Suraj; Merrow, Arnold C; Wang, Lily; Horn, Paul S; Laor, Tal

    2014-02-01

    Various skeletal muscle diseases result in fatty infiltration, making it important to develop noninvasive biomarkers to objectively measure muscular fat. We compared T2 relaxation time mapping (T2 maps) and magnetic resonance spectroscopy (MRS) with physical characteristics previously correlated with intramuscular fat to validate T2 maps and MRS as objective measures of skeletal muscle fat. We evaluated gluteus maximus muscles in 30 healthy boys (ages 5-19 years) at 3 T with T1-weighted images, T2-weighted images with fat saturation, T2 maps with and without fat saturation, and MR spectroscopy. We calculated body surface area (BSA), body mass index (BMI) and BMI percentile (BMI %). We performed fat and inflammation grading on T1-W imaging and fat-saturated T2-W imaging, respectively. Mean T2 values from T2 maps with fat saturation were subtracted from those of T2 maps without fat saturation to determine T2 fat values. We obtained lipid-to-water ratios by MR spectroscopy. Pearson correlation was used to assess relationships between BSA, BMI, BMI %, T2 fat values, and lipid-to-water ratios for each boy. Twenty-four boys completed all exams; 21 showed minimal and 3 showed no fatty infiltration. None showed muscle inflammation. BSA, BMI and BMI % correlated significantly with both T2 fat values and lipid-to-water ratios. T2 maps and MRS can thus measure fat in normal skeletal muscles, even in microscopic amounts, and validate each other. Both techniques might enable detection of minimal pathological fatty infiltration in children with skeletal muscle disorders.
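    Following the subtraction step described above, the T2 fat value is a voxel-wise difference of the two maps; a minimal sketch with made-up numbers:

    import numpy as np

    # T2 fat value = T2 map without fat saturation - T2 map with fat
    # saturation, per the method above (values in ms are made up).
    t2_no_fatsat = np.array([[38.1, 39.0], [40.2, 37.5]])
    t2_fatsat    = np.array([[35.0, 35.8], [36.9, 34.9]])
    t2_fat = t2_no_fatsat - t2_fatsat
    print(t2_fat.mean())  # mean T2 fat value over the muscle region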

  15. Page Kidney in Wunderlich Syndrome Causing Acute Renal Failure and Urosepsis: Successful Timely Minimally Invasive Management of a Devastating Clinical Entity.

    Science.gov (United States)

    Vijayganapathy, Sundaramoorthy; Karthikeyan, Vilvapathy Senguttuvan; Mallya, Ashwin; Sreenivas, Jayaram

    2017-06-01

    Wunderlich Syndrome (WS) is an uncommon condition in which spontaneous bleeding occurs acutely into the subcapsular and perirenal spaces. It can prove fatal if not recognized and treated aggressively at the appropriate time. A 32-year-old male diagnosed elsewhere with acute renal failure presented with a tender left loin mass, fever and hypovolemic shock, with serum creatinine 8.4 mg/dl. He was started on broad-spectrum antibiotics and initiated on haemodialysis. Ultrasonogram (USG), Non-Contrast Computed Tomography (NCCT) and Magnetic Resonance Imaging (MRI) showed bilateral perirenal subcapsular haematomas - right 3.6 x 3.1 cm and left 10.3 x 10.3 cm - the latter compressing and displacing the left kidney and fed by a capsular branch of the left renal artery on CT angiogram. The initial aspirate was bloody, but as he continued to have febrile spikes, renal failure and urosepsis, he was managed conservatively. Repeat NCCT 10 days later revealed a left perinephric abscess, and Percutaneous Drainage (PCD) was done. The patient improved, serum creatinine stabilized at 2 mg/dl without haemodialysis, and the PCD was removed after two weeks. To conclude, bilateral idiopathic spontaneous retroperitoneal haemorrhage with renal failure is a rare presentation. This case highlights the need for a high index of suspicion, the role of repeated imaging, and successful minimally invasive management with timely PCD and supportive care.

  16. Minimal recovery time needed to return to social engagement following nasolabial fold correction with hyaluronic acid fillers produced with XpresHAn technology.

    Science.gov (United States)

    Swift, Arthur; von Grote, Erika; Jonas, Brandie; Nogueira, Alessandra

    2017-01-01

    The appeal of hyaluronic acid fillers for facial soft tissue augmentation is attributable to both an immediate aesthetic effect and a relatively short recovery time. Although recovery time is an important posttreatment variable, as it impacts comfort with appearance and perceived treatment benefit, it is not routinely evaluated. Natural-looking aesthetic outcomes are also a primary concern for many patients. A single-center, noncomparative study evaluated the time (in hours) until subjects returned to social engagement (RtSE) following correction of moderate and severe nasolabial folds (NLFs) with Restylane® Refyne (RR) and Restylane® Defyne (RD), respectively. Twenty subjects (aged 35-57 years) who received bilateral NLF correction documented their RtSE and injection-related events posttreatment. Treatment efficacy was evaluated by improvements in the Wrinkle Severity Rating Scale (WSRS) and a subject satisfaction questionnaire at days 14 and 30, and by the Global Aesthetic Improvement Scale (GAIS) at day 30. Safety was evaluated by injection-related events and treatment-emergent adverse events. Fifty percent of subjects reported RtSE within 2 hours posttreatment. WSRS for the RR group improved significantly from baseline at day 14 (-1.45±0.42) and day 30 (-1.68±0.46). Three related treatment-emergent adverse events occurred: 1 RR subject experienced severe bruising, and 1 RD subject experienced severe erythema and mild telangiectasia. Subject satisfaction was high regarding aesthetic outcomes and natural-looking results. Optimal correction of moderate NLFs with RR and severe NLFs with RD involved minimal time to RtSE for most subjects. Treatments significantly improved WSRS and GAIS, were generally well-tolerated, and provided natural-looking aesthetic outcomes.

  17. Pulmonary Dead Space Fraction and Extubation Success in Children After Cardiac Surgery.

    Science.gov (United States)

    Devor, Renee L; Kang, Paul; Wellnitz, Chasity; Nigro, John J; Velez, Daniel A; Willis, Brigham C

    2018-04-01

    1) Determine the correlation between pulmonary dead space fraction and extubation success in postoperative pediatric cardiac patients; and 2) document the natural history of pulmonary dead space fractions, dynamic compliance, and airway resistance during the first 72 hours postoperatively in postoperative pediatric cardiac patients. A retrospective chart review. Cardiac ICU in a quaternary care free-standing children's hospital. Ninety patients: 29 with balanced single-ventricle physiology, 61 with two-ventricle physiology. None. We collected data for all pediatric patients undergoing congenital cardiac surgery over a 14-month period during the first 72 hours postoperatively, as well as prior to extubation. Overall, patients with successful extubations had lower preextubation dead space fractions and shorter lengths of stay. Single-ventricle patients had higher initial postoperative and preextubation dead space fractions. Two-ventricle physiology patients had higher extubation failure rates if the preextubation dead space fraction was greater than 0.5, whereas single-ventricle patients had similar extubation failure rates whether preextubation dead space fractions were less than or equal to 0.5 or greater than 0.5. Additionally, increasing initial dead space fraction values predicted prolonged mechanical ventilation times. Airway resistance and dynamic compliance were similar between those with successful extubations and those who failed. Initial postoperative dead space fraction correlates with the length of mechanical ventilation in two-ventricle patients but not in single-ventricle patients. Lower preextubation dead space fractions are a strong predictor of successful extubation in two-ventricle patients after cardiac surgery, but may not be as useful in single-ventricle patients.
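    The abstract does not state how the dead space fraction was computed; a common bedside estimate, shown here purely for illustration, is the Bohr-Enghoff form using arterial and end-tidal CO2:

    # Hypothetical illustration (not necessarily the study's method):
    # Bohr-Enghoff dead space fraction, Vd/Vt = (PaCO2 - PetCO2) / PaCO2.
    def dead_space_fraction(paco2_mmHg, petco2_mmHg):
        return (paco2_mmHg - petco2_mmHg) / paco2_mmHg

    print(dead_space_fraction(45.0, 30.0))  # 0.33, below the 0.5 cutoff above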

  18. Virtually Dead: Digital Public Mortuary Archaeology

    Directory of Open Access Journals (Sweden)

    Howard Williams

    2015-12-01

    Over recent decades, the ethics, politics and public engagements of mortuary archaeology have received sustained scrutiny, including how we handle, write about and display the archaeological dead. Yet the burgeoning use of digital media to engage different audiences in the archaeology of death and burial has so far escaped attention. This article explores categories and strategies by which digital media create virtual communities engaging with mortuary archaeology. Considering digital public mortuary archaeology (DPMA) as a distinctive theme linking archaeology, mortality and material culture, we discuss blogs, vlogs and Twitter as case studies to illustrate the variety of strategies by which digital media can promote, educate and engage public audiences with archaeological projects and research relating to death and the dead in the human past. The article then explores a selection of key critical concerns regarding how the digital dead are currently portrayed, identifying the need for further investigation and critical reflection on DPMA’s aims, objectives and aspired outcomes.

  19. Dead sea transform fault system reviews

    CERN Document Server

    Garfunkel, Zvi; Kagan, Elisa

    2014-01-01

    The Dead Sea transform is an active plate boundary connecting the Red Sea seafloor spreading system to the Arabian-Eurasian continental collision zone. Its geology and geophysics provide a natural laboratory for investigation of the surficial, crustal and mantle processes occurring along transtensional and transpressional transform fault domains on a lithospheric scale and related to continental breakup. There have been many detailed and disciplinary studies of the Dead Sea transform fault zone during the last 20 years, and this book brings them together. The book is an updated, comprehensive coverage of the knowledge, based on recent studies of the tectonics, structure, geophysics, volcanism, active tectonics, sedimentology, and paleo and modern climate of the Dead Sea transform fault zone. It puts together all this new information and knowledge in a coherent fashion.

  20. [Spatial pattern of land surface dead combustible fuel load in Huzhong forest area in Great Xing'an Mountains].

    Science.gov (United States)

    Liu, Zhi-Hua; Chang, Yu; Chen, Hong-Wei; Zhou, Rui; Jing, Guo-Zhi; Zhang, Hong-Xin; Zhang, Chang-Meng

    2008-03-01

    Using geo-statistics and a time-lag classification standard, a comparative study was made of the land surface dead combustible fuels in the Huzhong forest area of the Great Xing'an Mountains. The results indicated that the first-level land surface dead combustible fuel, i.e., 1 h time-lag dead fuel, presented stronger spatial auto-correlation, with an average load of 762.35 g·m-2, contributing 55.54% of the total load. Its determining factors were species composition and stand age. The second- and third-level land surface dead combustible fuels, i.e., 10 h and 100 h time-lag dead fuels, had a combined load of 610.26 g·m-2 and presented weaker spatial auto-correlation than the 1 h time-lag dead fuel. Their determining factor was the disturbance history of the forest stand. The complexity and heterogeneity of the factors determining the quality and quantity of forest land surface dead combustible fuels were the main reasons for the relatively inaccurate interpolation. However, field survey data coupled with geo-statistics could easily and accurately interpolate the spatial pattern of forest land surface dead combustible fuel loads, and indirectly provide a practical basis for forest management.

  1. EZH2 and CD79B mutational status over time in B-cell non-Hodgkin lymphomas detected by high-throughput sequencing using minimal samples

    Science.gov (United States)

    Saieg, Mauro Ajaj; Geddie, William R; Boerner, Scott L; Bailey, Denis; Crump, Michael; da Cunha Santos, Gilda

    2013-01-01

    BACKGROUND: Numerous genomic abnormalities in B-cell non-Hodgkin lymphomas (NHLs) have been revealed by novel high-throughput technologies, including recurrent mutations in EZH2 (enhancer of zeste homolog 2) and CD79B (B cell antigen receptor complex-associated protein beta chain) genes. This study sought to determine the evolution of the mutational status of EZH2 and CD79B over time in different samples from the same patient in a cohort of B-cell NHLs, through use of a customized multiplex mutation assay. METHODS: DNA that was extracted from cytological material stored on FTA cards as well as from additional specimens, including archived frozen and formalin-fixed histological specimens, archived stained smears, and cytospin preparations, were submitted to a multiplex mutation assay specifically designed for the detection of point mutations involving EZH2 and CD79B, using MassARRAY spectrometry followed by Sanger sequencing. RESULTS: All 121 samples from 80 B-cell NHL cases were successfully analyzed. Mutations in EZH2 (Y646) and CD79B (Y196) were detected in 13.2% and 8% of the samples, respectively, almost exclusively in follicular lymphomas and diffuse large B-cell lymphomas. In one-third of the positive cases, a wild type was detected in a different sample from the same patient during follow-up. CONCLUSIONS: Testing multiple minimal tissue samples using a high-throughput multiplex platform exponentially increases tissue availability for molecular analysis and might facilitate future studies of tumor progression and the related molecular events. Mutational status of EZH2 and CD79B may vary in B-cell NHL samples over time and support the concept that individualized therapy should be based on molecular findings at the time of treatment, rather than on results obtained from previous specimens. Cancer (Cancer Cytopathol) 2013;121:377–386. © 2013 American Cancer Society. PMID:23361872

  2. Novel Interactive Data Visualization: Exploration of the ESCAPE Trial (Endovascular Treatment for Small Core and Anterior Circulation Proximal Occlusion With Emphasis on Minimizing CT to Recanalization Times) Data.

    Science.gov (United States)

    Brigdan, Matthew; Hill, Michael D; Jagdev, Abhijeet; Kamal, Noreen

    2018-01-01

    The ESCAPE (Endovascular Treatment for Small Core and Anterior Circulation Proximal Occlusion With Emphasis on Minimizing CT to Recanalization Times) randomized clinical trial collected a large, diverse data set. However, it is difficult to fully understand the effects of the study on certain patient groups and disease progression. We developed and evaluated an interactive visualization of the ESCAPE trial data. We iteratively designed an interactive visualization using Python's Bokeh software library. The design was evaluated through a user study, which quantitatively evaluated its efficiency and accuracy against the traditional modified Rankin Scale graphic. Qualitative feedback was also evaluated. The novel interactive visualization of the ESCAPE data is publicly available at http://escapevisualization.herokuapp.com/. There was no difference in efficiency and accuracy when comparing the use of the novel visualization with the traditional one. However, users preferred the novel visualization because it allowed for greater exploration. Some insights obtained through exploration of the ESCAPE data are presented. Novel interactive visualizations can be applied to acute stroke trial data to allow for greater exploration of the results. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01778335. © 2017 American Heart Association, Inc.
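    The record names Python's Bokeh library but shows no code; below is a minimal sketch of the kind of interactive outcome chart involved, with hypothetical mRS counts and hover tooltips for exploration (not the authors' implementation):

    from bokeh.models import ColumnDataSource
    from bokeh.plotting import figure, show
    from bokeh.transform import dodge

    # Hypothetical modified Rankin Scale distributions, not ESCAPE results.
    data = dict(mrs=["0", "1", "2", "3", "4", "5", "6"],
                control=[20, 25, 30, 40, 35, 15, 10],
                intervention=[40, 45, 40, 30, 20, 10, 5])
    source = ColumnDataSource(data=data)

    p = figure(x_range=data["mrs"], title="mRS distribution (hypothetical)",
               x_axis_label="modified Rankin Scale score",
               y_axis_label="number of patients",
               tooltips=[("control", "@control"),
                         ("intervention", "@intervention")])
    p.vbar(x=dodge("mrs", -0.17, range=p.x_range), top="control",
           width=0.3, source=source, color="#718dbf", legend_label="control")
    p.vbar(x=dodge("mrs", 0.17, range=p.x_range), top="intervention",
           width=0.3, source=source, color="#e84d60", legend_label="intervention")
    show(p)  # opens the interactive chart in a browser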

  3. Analysis of minimal residual disease by Ig/TCR gene rearrangements: guidelines for interpretation of real-time quantitative PCR data

    NARCIS (Netherlands)

    van der Velden, V. H. J.; Cazzaniga, G.; Schrauder, A.; Hancock, J.; Bader, P.; Panzer-Grumayer, E. R.; Flohr, T.; Sutton, R.; Cave, H.; Madsen, H. O.; Cayuela, J. M.; Trka, J.; Eckert, C.; Foroni, L.; Zur Stadt, U.; Beldjord, K.; Raff, T.; van der Schoot, C. E.; van Dongen, J. J. M.

    2007-01-01

    Most modern treatment protocols for acute lymphoblastic leukaemia (ALL) include the analysis of minimal residual disease (MRD). To ensure comparable MRD results between different MRD-polymerase chain reaction (PCR) laboratories, standardization and quality control are essential. The European Study Group on MRD detection in ALL (ESG-MRD-ALL) has therefore developed guidelines for the interpretation of real-time quantitative PCR-based MRD data.

  4. High Throughput qPCR Expression Profiling of Circulating MicroRNAs Reveals Minimal Sex- and Sample Timing-Related Variation in Plasma of Healthy Volunteers.

    Directory of Open Access Journals (Sweden)

    Catherine Mooney

    MicroRNAs are a class of small non-coding RNA that regulate gene expression at a post-transcriptional level. MicroRNAs have been identified in various body fluids under normal conditions, and their stability as well as their dysregulation in disease opens up a new field for biomarker study. However, diurnal and day-to-day variation in plasma microRNA levels, and differential regulation between males and females, may affect biomarker stability. A QuantStudio 12K Flex Real-Time PCR System was used to profile plasma microRNA levels using OpenArray in male and female healthy volunteers, in the morning and afternoon, and at four time points over a one-month period. Using this system we were able to run four OpenArray plates in a single run, the equivalent of 32 traditional 384-well qPCR plates or 12,000 data points. Up to 754 microRNAs can be identified in a single plasma sample in under two hours. 108 individual microRNAs were identified in at least 80% of all our samples, which compares favourably with other reports of microRNA profiles in serum or plasma in healthy adults. Many of these microRNAs, including miR-16-5p, miR-17-5p, miR-19a-3p, miR-24-3p, miR-30c-5p, miR-191-5p, miR-223-3p and miR-451a, are highly expressed and consistent with previous studies using other platforms. Overall, microRNA levels were very consistent between individuals, males and females, and time points, and we did not detect significant differences in microRNA levels. These results suggest the suitability of this platform for microRNA profiling and biomarker discovery and suggest minimal confounding influence of sex or sample timing. However, the platform has not been subjected to rigorous validation, which must be demonstrated in future biomarker studies where large differences may exist between disease and control samples.

  5. 32 CFR 632.4 - Deadly force.

    Science.gov (United States)

    2010-07-01

    ..., is substantially important to national security. (See paragraph (b) of this section.) (iii) Escape of... security or an essential national defense mission. (2) Substantially important to national security based... INVESTIGATIONS USE OF FORCE BY PERSONNEL ENGAGED IN LAW ENFORCEMENT AND SECURITY DUTIES § 632.4 Deadly force. (a...

  6. Unethical and Deadly Symbiosis in Higher Education

    Science.gov (United States)

    Crumbley, D. Larry; Flinn, Ronald; Reichelt, Kenneth J.

    2012-01-01

    As administrators are pressured to increase retention rates in accounting departments, and higher education in general, a deadly symbiosis is occurring. Most students and parents only wish for high grades, so year after year many educators engage in unethical grade inflation and course work deflation. Since administrators use the students to audit…

  7. Exercising is like flogging a dead horse

    International Nuclear Information System (INIS)

    Molhoek, W.

    2003-01-01

    - FR (NPP Gravelines) was conducted (22-23 May 2001). The main objectives of the INEX series of exercises were focused on: decision making based on limited information and uncertain plant conditions; the use of real-time communications with actual equipment and procedures; public information and interaction with the media; and the use of real weather for real-time forecasts. For real further improvement of (nuclear) emergency management and response, not only are national and international exercises such as INEX, CONVEX, JINEX etc. needed; efforts to improve the personal performance of the key persons involved are crucial. Structural plans to train and exercise individuals and teams should therefore be developed as well. Moving the dead horse and creating a racehorse needs a lot of personal skill and attention. It is also recognized that often the carrot is better than the whip. (author)

  8. Impact of the occurrence of a response shift on the determination of the minimal important difference in a health-related quality of life score over time.

    Science.gov (United States)

    Ousmen, Ahmad; Conroy, Thierry; Guillemin, Francis; Velten, Michel; Jolly, Damien; Mercier, Mariette; Causeret, Sylvain; Cuisenier, Jean; Graesslin, Olivier; Hamidou, Zeinab; Bonnetain, Franck; Anota, Amélie

    2016-12-03

    An important challenge of the longitudinal analysis of health-related quality of life (HRQOL) is the potential occurrence of a Response Shift (RS) effect. While the impact of the RS effect on the longitudinal analysis of HRQOL has already been studied, few studies have been conducted on its impact on the determination of the Minimal Important Difference (MID). This study aims to investigate the impact of the RS effect on the determination of the MID over time for each scale of both the EORTC QLQ-C30 and QLQ-BR23 questionnaires in breast cancer patients. Patients with breast cancer completed the EORTC QLQ-C30 and EORTC QLQ-BR23 questionnaires at baseline (time of diagnosis; T0), three months (T1) and six months after surgery (T2). Four hospitals and care centers participated in this study: the cancer centers of Dijon and Nancy, and the university hospitals of Reims and Strasbourg. At T1 and T2, patients were asked to evaluate their HRQOL change during the last 3 months using the Jaeschke transition question. They were also asked to retrospectively assess their HRQOL level of three months before. The occurrence of the RS effect was explored using the then-test method, and its impact on the determination of the MID using the anchor-based method. Between February 2006 and February 2008, 381 patients were included (mean age 58 years, SD = 11). For patients who reported a deterioration of their HRQOL level at each follow-up, an increase of the RS effect was detected between T1 and T2 in 13/15 dimensions of the QLQ-C30 questionnaire and 4/7 dimensions of the QLQ-BR23 questionnaire. In contrast, a decrease of the RS effect was observed in 8/15 dimensions of the QLQ-C30 questionnaire and in 5/7 dimensions of the QLQ-BR23 questionnaire in case of improvement. At T2, the MID became ≥ 5 points when taking into account the RS effect in 10/15 dimensions of the QLQ-C30 questionnaire and in 5/7 dimensions of the QLQ-BR23 questionnaire. This study highlights that the RS effect increases over time in

  9. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    Science.gov (United States)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
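    For background, the textbook simulated-tempering move between temperature levels m and n at fixed configuration x (the general rule, not the EPSW re-tuning itself) is accepted with probability

    P(m \to n) = \min\left\{ 1,\; \exp\!\left[ -(\beta_n - \beta_m)\, U(x) + (g_n - g_m) \right] \right\}, \qquad \beta = 1/k_B T,

    so choosing each weight g_m as the dimensionless free energy of ensemble m yields uniform visits over temperatures, whereas EPSW instead tunes the g_m to minimize the round-trip time between the metastable states of interest.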

  10. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    International Nuclear Information System (INIS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-01-01

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  11. SU-F-J-87: Impact Of The Dosimetric Consequences From Minimal Displacements Throughout The Treatment Time In APBI With SAVI Applicators

    Energy Technology Data Exchange (ETDEWEB)

    Chandrasekara, S; Pella, S [21st Century Oncology, Boca Raton, FL (United States); Hyvarinen, M; Pinder, J [Florida Atlantic University, Boca Raton, FL (United States)

    2016-06-15

    Purpose: To assess the variation in dose received by the organs at risk (OARs) due to inter-fractional motion with SAVI applicators, to determine the importance of providing proper immobilization. Methods: Fifteen patients treated with SAVI applicators were considered for this study. The treatment planning teams did not see significant changes in their CT scans through scout images, and the initial treatment plan was used for the entire treatment. These scans, taken before each treatment, were imported into the treatment planning system and fused together with respect to the applicator, using landmark registration. Dosimetric evaluations were performed. Doses received by the skin, ribs and PTV (planning target volume) relative to the initial treatment plan were measured. Results: Contours of the OARs were not similar to those of the initial image. Reductions in the volumes of the PTV and cavity, small deviations in the displacements from the applicator to the OARs, and differences in the doses received by the OARs between treatments were noticed. The maximum, minimum and average doses varied by 10% to 20%, 5% to 8% and 15% to 20% in the ribs and skin. The 0.1 cc doses to the OARs showed an average change of 10% of the prescribed dose. The PTV received a different dose than the estimated dose. Conclusion: The variation in volumes and isodoses related to the OARs, and the PTV receiving a lesser dose than prescribed, indicate that the estimated doses differ from the delivered doses. This study reveals the urgent need to improve immobilization methods. Taking a CT scan before each treatment and replanning is helpful to minimize the risk of delivering undesired high doses to the OARs. Patient positioning, motion, respiration, observer differences and the time lapse between planning and treatment can introduce further complications. VacLock, positioning cushions, image-guided brachytherapy and adjustable registration should be used for further improvements.

  12. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao [Department of Chemistry, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); Huang, Xuhui, E-mail: xuhuihuang@ust.hk [Department of Chemistry, Division of Biomedical Engineering, Center of Systems Biology and Human Health, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); The HKUST Shenzhen Research Institute, Shenzhen (China)

    2016-04-21

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  13. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  14. The zero inflation of standing dead tree carbon stocks

    Science.gov (United States)

    Christopher W. Woodall; David W. MacFarlane

    2012-01-01

    Given the importance of standing dead trees in numerous forest ecosystem attributes/processes such as carbon (C) stocks, the USDA Forest Service’s Forest Inventory and Analysis (FIA) program began consistent nationwide sampling of standing dead trees in 1999. Modeled estimates of standing dead tree C stocks are currently used as the official C stock estimates for the...

  15. 14 CFR 1203b.106 - Use of deadly force.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Use of deadly force. 1203b.106 Section... AUTHORITY AND USE OF FORCE BY NASA SECURITY FORCE PERSONNEL § 1203b.106 Use of deadly force. Deadly force shall be used only in those circumstances where the security force officer reasonably believes that...

  16. 10 CFR 1047.7 - Use of deadly force.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Use of deadly force. 1047.7 Section 1047.7 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) LIMITED ARREST AUTHORITY AND USE OF FORCE BY PROTECTIVE FORCE OFFICERS General Provisions § 1047.7 Use of deadly force. (a) Deadly force means that force which a...

  17. Perturbative search for dead-end CFTs

    International Nuclear Information System (INIS)

    Nakayama, Yu

    2015-01-01

    To explore the possibility of self-organized criticality, we look for CFTs without any relevant scalar deformations (a.k.a. dead-end CFTs) within power-counting renormalizable quantum field theories with a weakly coupled Lagrangian description. In three dimensions, the only candidates are pure (Abelian) gauge theories, which may be further deformed by Chern-Simons terms. In four dimensions, we show that there are infinitely many non-trivial candidates based on chiral gauge theories. Using the three-loop beta functions, we compute the gap of scaling dimensions above the marginal value, and it can be as small as O(10^-5) and robust against the perturbative corrections. These classes of candidates are very weakly coupled and our perturbative conclusion seems difficult to refute. Thus, the hypothesis that non-trivial dead-end CFTs do not exist is likely to be false in four dimensions.

  18. Potential Evaporite Biomarkers from the Dead Sea

    Science.gov (United States)

    Morris, Penny A.; Wentworth, Susan J.; Thomas-Keprta, Kathie; Allen, Carlton C.; McKay, David S.

    2001-01-01

    The Dead Sea is located on the northern branch of the African-Levant Rift systems. The rift system, according to one model, was formed by a series of strike-slip faults, initially forming approximately two million years ago. The Dead Sea is an evaporite basin that receives freshwater from springs and from the Jordan River. The Dead Sea is different from other evaporite basins, such as the Great Salt Lake, in that it possesses high concentrations of magnesium and has an average pH of 6.1. The dominant cation in the Great Salt Lake is sodium, and the pH is 7.7. Calcium concentrations are also higher in the Dead Sea than in the Great Salt Lake. Both basins are similar in that the dominant anion is chlorine and the salinity levels are approximately 20%. Other common cations that have been identified in the waters of the Dead Sea and the Great Salt Lake include sodium and potassium. A variety of Archaea and Bacteria, and a single genus of green alga, Dunaliella, have been described from the Dead Sea. Earlier studies concentrated on microbial identification and analysis of the unique physiology that allows these organisms to survive in this type of extreme environment. Potential microbial fossilization processes, microbial fossils, and the metallic ions associated with fossilization have not been studied thoroughly. The present study is restricted to identifying probable microbial morphologies and associated metallic ions. XRD (X-Ray Diffraction) analysis indicates the presence of halite, quartz, and orthoclase feldspar. In addition to these minerals, other workers have reported potassium chloride, magnesium bromide, magnesium chloride, calcium chloride, and calcium sulfate. Halite, calcium sulfate, and orthoclase were examined in this report for the presence of microbes, microbially induced deposits or microbial alteration. Neither the gypsum nor the orthoclase surfaces possess any obvious indications of microbial life or fossilization. The sand-sized orthoclase particles are

  19. Immunoglobulin kappa deleting element rearrangements in precursor-B acute lymphoblastic leukemia are stable targets for detection of minimal residual disease by real-time quantitative PCR

    NARCIS (Netherlands)

    van der Velden, V. H. J.; Willemse, M. J.; van der Schoot, C. E.; Hählen, K.; van Wering, E. R.; van Dongen, J. J. M.

    2002-01-01

    Immunoglobulin gene rearrangements are used as PCR targets for detection of minimal residual disease (MRD) in acute lymphoblastic leukemia (ALL). We investigated the occurrence of monoclonal immunoglobulin kappa-deleting element (IGK-Kde) rearrangements by Southern blotting and PCR/heteroduplex analysis

  20. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  1. Visualization of deuterium dead layer by atom probe tomography

    KAUST Repository

    Gemma, Ryota

    2012-12-01

    The first direct observation, by atom probe tomography, of a deuterium dead layer is reported for Fe/V multilayered film loaded with D solute atoms. The thickness of the dead layers was measured to be 0.4-0.5 nm. The dead layers could be distinguished from chemically intermixed layers. The results suggest that the dead layer effect occurs even near the interface of the mixing layers, supporting an interpretation that the dead layer effect cannot be explained solely by electronic charge transfer but also involves a modulation of rigidity. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  2. Visualization of deuterium dead layer by atom probe tomography

    KAUST Repository

    Gemma, Ryota; Al-Kassab, Talaat; Kirchheim, Reiner; Pundt, Astrid A.

    2012-01-01

    The first direct observation, by atom probe tomography, of a deuterium dead layer is reported for Fe/V multilayered film loaded with D solute atoms. The thickness of the dead layers was measured to be 0.4-0.5 nm. The dead layers could be distinguished from chemically intermixed layers. The results suggest that the dead layer effect occurs even near the interface of the mixing layers, supporting an interpretation that the dead layer effect cannot be explained solely by electronic charge transfer but also involves a modulation of rigidity. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  3. New perspectives on interdisciplinary earth science at the Dead Sea: The DESERVE project

    Energy Technology Data Exchange (ETDEWEB)

    Kottmeier, Christoph, E-mail: Christoph.Kottmeier@kit.edu [Karlsruhe Institute of Technology, Hermann von Helmholtz Platz 1, Eggenstein-Leopoldshafen (Germany); Agnon, Amotz [The Hebrew University of Jerusalem, Jerusalem (Israel); Al-Halbouni, Djamil [GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam (Germany); Alpert, Pinhas [Tel Aviv University, Tel Aviv-Yafo (Israel); Corsmeier, Ulrich [Karlsruhe Institute of Technology, Hermann von Helmholtz Platz 1, Eggenstein-Leopoldshafen (Germany); Dahm, Torsten [GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam (Germany); Eshel, Adam [Tel Aviv University, Tel Aviv-Yafo (Israel); Geyer, Stefan [Helmholtz Centre for Environmental Research GmbH — UFZ, Theodor-Lieser-Strasse 4, 06120 Halle (Germany); Haas, Michael; Holohan, Eoghan [GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam (Germany); Kalthoff, Norbert [Karlsruhe Institute of Technology, Hermann von Helmholtz Platz 1, Eggenstein-Leopoldshafen (Germany); Kishcha, Pavel [Tel Aviv University, Tel Aviv-Yafo (Israel); Krawczyk, Charlotte [Leibniz Institute for Applied Geophysics (LIAG), Stilleweg 2, 30655 Hannover (Germany); Lati, Joseph [Tel Aviv University, Tel Aviv-Yafo (Israel); Laronne, Jonathan B. [Ben Gurion University of the Negev, Be' er Sheva (Israel); Lott, Friederike [Karlsruhe Institute of Technology, Hermann von Helmholtz Platz 1, Eggenstein-Leopoldshafen (Germany); Mallast, Ulf; Merz, Ralf [Helmholtz Centre for Environmental Research GmbH — UFZ, Theodor-Lieser-Strasse 4, 06120 Halle (Germany); Metzger, Jutta [Karlsruhe Institute of Technology, Hermann von Helmholtz Platz 1, Eggenstein-Leopoldshafen (Germany); Mohsen, Ayman [An-Najah National University, Nablus, Palestine (Country Unknown); and others

    2016-02-15

    The Dead Sea region has faced substantial environmental challenges in recent decades, including water resource scarcity, ~ 1 m annual decreases in the water level, sinkhole development, ascending-brine freshwater pollution, and seismic disturbance risks. Natural processes are significantly affected by human interference as well as by climate change and tectonic developments over the long term. To gain a deep understanding of these processes and their interactions, innovative scientific approaches that integrate disciplinary research and education are required. The research project DESERVE (Helmholtz Virtual Institute Dead Sea Research Venue) addresses these challenges in an interdisciplinary approach that includes geophysics, hydrology, and meteorology. The project is implemented by a consortium of scientific institutions in neighboring countries of the Dead Sea (Israel, Jordan, Palestine Territories) and participating German Helmholtz Centres (KIT, GFZ, UFZ). A new monitoring network of meteorological, hydrological, and seismic/geodynamic stations has been established, and extensive field research and numerical simulations have been undertaken. For the first time, innovative measurement and modeling techniques have been applied to the extreme conditions of the Dead Sea and its surroundings. The preliminary results show the potential of these methods. Eddy covariance measurements, performed here for the first time, give insight into the governing factors of Dead Sea evaporation. High-resolution bathymetric investigations reveal a strong correlation between submarine springs and neo-tectonic patterns. Based on detailed studies of stratigraphy and borehole information, the extension of the subsurface drainage basin of the Dead Sea is now reliably estimated. Originality has been achieved in monitoring flash floods in an arid basin at its outlet and simultaneously in tributaries, supplemented by spatio-temporal rainfall data. Low-altitude, high resolution photogrammetry, allied to

  4. New perspectives on interdisciplinary earth science at the Dead Sea: The DESERVE project

    International Nuclear Information System (INIS)

    Kottmeier, Christoph; Agnon, Amotz; Al-Halbouni, Djamil; Alpert, Pinhas; Corsmeier, Ulrich; Dahm, Torsten; Eshel, Adam; Geyer, Stefan; Haas, Michael; Holohan, Eoghan; Kalthoff, Norbert; Kishcha, Pavel; Krawczyk, Charlotte; Lati, Joseph; Laronne, Jonathan B.; Lott, Friederike; Mallast, Ulf; Merz, Ralf; Metzger, Jutta; Mohsen, Ayman

    2016-01-01

    The Dead Sea region has faced substantial environmental challenges in recent decades, including water resource scarcity, ~ 1 m annual decreases in the water level, sinkhole development, ascending-brine freshwater pollution, and seismic disturbance risks. Natural processes are significantly affected by human interference as well as by climate change and tectonic developments over the long term. To gain a deep understanding of these processes and their interactions, innovative scientific approaches that integrate disciplinary research and education are required. The research project DESERVE (Helmholtz Virtual Institute Dead Sea Research Venue) addresses these challenges in an interdisciplinary approach that includes geophysics, hydrology, and meteorology. The project is implemented by a consortium of scientific institutions in neighboring countries of the Dead Sea (Israel, Jordan, Palestine Territories) and participating German Helmholtz Centres (KIT, GFZ, UFZ). A new monitoring network of meteorological, hydrological, and seismic/geodynamic stations has been established, and extensive field research and numerical simulations have been undertaken. For the first time, innovative measurement and modeling techniques have been applied to the extreme conditions of the Dead Sea and its surroundings. The preliminary results show the potential of these methods. Eddy covariance measurements, performed here for the first time, give insight into the governing factors of Dead Sea evaporation. High-resolution bathymetric investigations reveal a strong correlation between submarine springs and neo-tectonic patterns. Based on detailed studies of stratigraphy and borehole information, the extension of the subsurface drainage basin of the Dead Sea is now reliably estimated. Originality has been achieved in monitoring flash floods in an arid basin at its outlet and simultaneously in tributaries, supplemented by spatio-temporal rainfall data. Low-altitude, high resolution photogrammetry, allied to

  5. Minimal Poems Written in 1979 Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  6. A new 36Cl hydrological model and 36Cl systematics in the Jordan River/Dead Sea system

    International Nuclear Information System (INIS)

    Paul, M.; Fink, D.; Meirav, O.; Kaim, R.

    1986-01-01

    Accelerator mass spectrometry results of 36 Cl for the Jordan River/Dead Sea system show that the amount of chloride leached from rocks ranges from approx. 70% in source springs to >90% in water bodies downstream. Furthermore, the amount of water left after evaporation decreases from approx. 50% in the source springs to 20% in the intermediate Lake Kinneret. In the terminal Dead Sea, 99% of the stable chloride originates from ancient rocks and evaporite formations while approx. 80% of its 36 Cl content is of meteoric origin. Using 36 Cl measurements, the accumulation time of the Dead Sea salt is estimated to be 19,000-25,000 yr. (author)

  7. Cost-minimized combinations of wind power, solar power and electrochemical storage, powering the grid up to 99.9% of the time

    DEFF Research Database (Denmark)

    Budischak, Cory; Sewell, DeAnna; Thomson, Heather

    2013-01-01

    intermittent power, we seek combinations of diverse renewables at diverse sites, with storage, that are not intermittent and satisfy need for a given fraction of hours. And 2) we seek minimal cost, calculating the true cost of electricity without subsidies and with inclusion of external costs. Our model evaluated over... renewable generation and the excess capacity together meet electric load with less storage, lowering total system cost. At 2030 technology costs and with excess electricity displacing natural gas, we find that the electric system can be powered 90%–99.9% of hours entirely on renewable electricity, at costs...

  8. Open ocean dead zones in the tropical North Atlantic Ocean

    Science.gov (United States)

    Karstensen, J.; Fiedler, B.; Schütte, F.; Brandt, P.; Körtzinger, A.; Fischer, G.; Zantopp, R.; Hahn, J.; Visbeck, M.; Wallace, D.

    2015-04-01

    Here we present first observations, from instrumentation installed on moorings and a float, of unexpectedly low oxygen concentrations in the open waters of the tropical North Atlantic; productivity rates for the eddies are found to be 3 to 5 times higher when compared with surrounding waters. Oxygen is lowest in the centre of the eddies, in a depth range where the swirl velocity, defining the transition between eddy and surroundings, has its maximum. It is assumed that the strong velocity at the outer rim of the eddies hampers the transport of properties across the eddies' boundary and as such isolates their cores. This is supported by a remarkably stable hydrographic structure of the eddies' cores over periods of several months. The eddies propagate westward, at about 4 to 5 km day-1, from their generation region off the West African coast into the open ocean. High productivity and accompanying respiration, paired with sluggish exchange across the eddy boundary, create the "dead zone" inside the eddies, so far only reported for coastal areas or lakes. We observe a direct impact of the open ocean dead zones on the marine ecosystem, such that the diurnal vertical migration of zooplankton is suppressed inside the eddies.

  9. Foehn-induced effects on local dust pollution, frontal clouds and solar radiation in the Dead Sea valley

    Science.gov (United States)

    Kishcha, Pavel; Starobinets, Boris; Savir, Amit; Alpert, Pinhas; Kaplan, Michael

    2018-06-01

    Despite the long history of investigation of foehn phenomena, there are few studies of the influence of foehn winds on air pollution and none in the Dead Sea valley. For the first time the foehn phenomenon and its effects on local dust pollution, frontal cloudiness and surface solar radiation were analyzed in the Dead Sea valley, as it occurred on 22 March 2013. This was carried out using both numerical simulations and observations. The foehn winds intensified local dust emissions, while the foehn-induced temperature inversion trapped dust particles beneath this inversion. These two factors caused extreme surface dust concentration in the western Dead Sea valley. The dust pollution was transported by west winds eastward, to the central Dead Sea valley, where the speed of these winds sharply decreased. The transported dust was captured by the ascending airflow contributing to the maximum aerosol optical depth (AOD) over the central Dead Sea valley. On the day under study, the maximum surface dust concentration did not coincide with the maximum AOD: this being one of the specific effects of the foehn phenomenon on dust pollution in the Dead Sea valley. Radar data showed a passage of frontal cloudiness through the area of the Dead Sea valley leading to a sharp drop in noon solar radiation. The descending airflow over the downwind side of the Judean Mountains led to the formation of a cloud-free band followed by only the partial recovery of solar radiation because of the extreme dust pollution caused by foehn winds.

  10. Asteroid 'Bites the Dust' Around Dead Star

    Science.gov (United States)

    2009-01-01

    NASA's Spitzer Space Telescope set its infrared eyes upon the dusty remains of shredded asteroids around several dead stars. This artist's concept illustrates one such dead star, or 'white dwarf,' surrounded by the bits and pieces of a disintegrating asteroid. These observations help astronomers better understand what rocky planets are made of around other stars. Asteroids are leftover scraps of planetary material. They form early on in a star's history when planets are forming out of collisions between rocky bodies. When a star like our sun dies, shrinking down to a skeleton of its former self called a white dwarf, its asteroids get jostled about. If one of these asteroids gets too close to the white dwarf, the white dwarf's gravity will chew the asteroid up, leaving a cloud of dust. Spitzer's infrared detectors can see these dusty clouds and their various constituents. So far, the telescope has identified silicate minerals in the clouds polluting eight white dwarfs. Because silicates are common in our Earth's crust, the results suggest that planets similar to ours might be common around other stars.

  11. Fungi colonizing dead leaves of herbs

    Directory of Open Access Journals (Sweden)

    Maria Kowalik

    2013-04-01

    The material was collected from the Botanical Garden and the Collegium Medicum Medicinal Plant Garden of the Jagiellonian University in Krakow. The investigated species were: lemon balm (Melissa officinalis L.), common lavender (Lavandula angustifolia Mill.), horsemint (Mentha longifolia L.), sage (Salvia officinalis L.), sweet basil (Ocimum basilicum L.), and wild marjoram (Origanum vulgare L.). The aim of the investigation was to identify fungi causing the death of leaf tissues of herbs from the mint family, Lamiaceae. In mycological investigations, 180 fragments of each plant's leaves (1,080 dead leaf fragments in total) were placed on 2% PDA medium. Over 970 colonies of fungi belonging to 48 species were isolated from the dead leaf tissues of the six herb species. Alternaria alternata (toxin-producing), Epicoccum nigrum and Sordaria fimicola were the most frequently isolated. The largest numbers of colonies and species of fungi were isolated from horsemint, while the lowest numbers were from wild marjoram leaves. It was shown that the death of leaves of the selected herb species from the Lamiaceae family was caused by various fungi. The results of the mycological analysis confirmed the diversity of species colonizing the leaves of the herbs.

  12. Breathing Life Into Dead-Zones

    Directory of Open Access Journals (Sweden)

    Gressel Oliver

    2013-04-01

    The terrestrial planet formation regions of protoplanetary disks are generally sufficiently cold to be considered non-magnetized and, consequently, dynamically inactive. However, recent investigations of these so-called "Dead-Zones" indicate the possibility that disks with strong mean radial temperature gradients can support instabilities associated with disk-normal gradients of the basic Keplerian shear profile. This process, known as the Goldreich-Schubert-Fricke (GSF) instability, is the instability of short radial wavelength inertial modes and depends wholly on the presence of vertical gradients of the mean Keplerian (zonal) flow. We report here high resolution, fully nonlinear, axisymmetric numerical studies of this instability and find a number of features, including how, in the nonlinear saturated state, unstable discs become globally distorted, with strong vertical oscillations occurring at all radii due to local instability. We find that nonaxisymmetric numerical experiments are accompanied by significant amounts of angular momentum transport (α ~ 0.001). This instability should be operating in the Dead-Zones of protoplanetary disks at radii greater than 10-15 AU in minimum mass solar nebula models.

  13. Isoperimetric inequalities for minimal graphs

    International Nuclear Information System (INIS)

    Pacelli Bessa, G.; Montenegro, J.F.

    2007-09-01

    Based on Markvorsen and Palmer's work on mean exit time and isoperimetric inequalities, we establish slightly better isoperimetric inequalities and mean exit time estimates for minimal graphs in N x R. We also prove isoperimetric inequalities for submanifolds of Hadamard spaces with tamed second fundamental form. (author)

  14. Method to minimize the low-frequency neutral-point voltage oscillations with time-offset injection for neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede

    2013-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three-phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...

  15. Measurement of the dead time of a G.M. counter and of the secondary emission of the cathode by the method of the delayed coincidences; Mesure du temps mort d'un compteur G.M. et de l'emission secondaire de la cathode par la methode des coincidences retardees

    Energy Technology Data Exchange (ETDEWEB)

    Picard, E; Rogozinski, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1953-07-01

    The dead time of a G.M. counter is measured using a delayed-coincidence method. The pulses of the counter that feed the coincidence circuit arrive there, on the one hand, directly and, on the other hand, after a known and variable delay. This method also makes it possible to study the parasitic pulses arising from the impact of the positive ions on the cathode of the counter. Results for several counters operating under various conditions are given. (author)
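
    As a rough illustration of the delayed-coincidence principle, the Python sketch below (not taken from the paper; the count rate, dead time and resolving window are invented) generates Poisson-distributed pulses, applies a non-paralyzable dead-time filter, and then counts pulse pairs separated by a scanned delay. Coincidences only appear once the delay exceeds the dead time, which is how the dead time is read off the scan.

      import numpy as np

      rng = np.random.default_rng(0)
      rate, t_total, dead_time = 5e3, 200.0, 5e-5   # true rate [1/s], run length [s], dead time [s]

      # Poisson arrivals, then a non-paralyzable dead-time filter
      arrivals = np.cumsum(rng.exponential(1.0 / rate, size=int(rate * t_total * 1.2)))
      recorded, last = [], -np.inf
      for t in arrivals:
          if t - last >= dead_time:
              recorded.append(t)
              last = t
      gaps = np.diff(np.array(recorded))

      # Delayed-coincidence scan: count pulse pairs separated by ~delay.
      # Adjacent pairs dominate at these delays, so np.diff is a fair proxy.
      window = 2e-6
      for delay in np.arange(2e-5, 1.3e-4, 1e-5):
          n_coinc = int(np.sum(np.abs(gaps - delay) < window))
          print(f"delay {delay * 1e6:6.1f} us -> {n_coinc} coincidences")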

  16. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  17. The ecosystem service value of living versus dead biogenic reef

    Science.gov (United States)

    Sheehan, E. V.; Bridger, D.; Attrill, M. J.

    2015-03-01

    Mixed maerl beds (coralline red algae) comprise dead thalli with varying amounts of live maerl fragments, but previously it was not known whether the presence of the live maerl increases the ecosystem service 'habitat provision' of the dead maerl for the associated epibenthos. A 'flying array' towed sled with high definition video was used to film transects of the epibenthos in dead maerl and mixed maerl beds in two locations to the north and south of the English Channel (Falmouth and Jersey). Mixed maerl beds supported a greater number of taxa and a greater abundance than dead beds in Falmouth, while in Jersey, mixed and dead beds supported similar numbers of taxa and dead beds had a greater abundance of epifauna. Scallops tended to be more abundant on mixed beds than dead beds. Tube worms were more abundant on mixed beds in Falmouth and dead beds in Jersey. An increasing percentage occurrence of live maerl thalli correlated with an increasing number of taxa in Falmouth but not Jersey. It was concluded that while live thalli can increase the functional role of dead maerl beds for the epibenthos, this is dependent on location and response variable. As a result of this work, maerl habitat in SE Jersey has been protected from towed demersal fishing gear.

  18. SU-F-T-345: Quasi-Dead Beams: Clinical Relevance and Implications for Automatic Planning

    Energy Technology Data Exchange (ETDEWEB)

    Price, R; Veltchev, I; Lin, T; Gleason, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Beam direction selection for fixed-beam IMRT planning is typically a manual process. Severe dose-volume limits on critical structures in the thorax often result in atypical selection of beam directions as compared to other body sites. This work demonstrates the potential consequences as well as clinical relevance. Methods: 21 thoracic cases treated with 5–7 beam directions, 6 cases including non-coplanar arrangements, with fractional doses of 150–411 cGy were analyzed. Endpoints included the per-beam modulation scaling factor (MSF), variation from equal weighting, and delivery QA passing rate. Results: During analysis of patient-specific delivery QA, a sub-standard passing rate was found for a single 5-field plan (90.48% of evaluated pixels passing 3% dose, 3 mm DTA). On investigation it was found that a single beam demonstrated an MSF of 34.7 and contributed only 2.7% to the mean dose of the target. In addition, the variation from equal weighting for this beam was 17.3% absolute, resulting in another beam with an MSF of 4.6 contributing 41.9% of the mean dose to the target; a variation of 21.9% from equal weighting. The average MSF for the remaining 20 cases was 4.0 (SD 1.8) with an average absolute deviation of 2.8% from equal weighting (SD 3.1%). Conclusion: Optimization in commercial treatment planning systems typically results in relatively equally weighted beams. Extreme variation from this can result in excessively high MSFs (very small segments) and potential decreases in agreement between planned and delivered dose distributions. In addition, the resultant beam may contribute minimal dose to the target (a quasi-dead beam); a byproduct being increased treatment time and associated localization uncertainties. Potential ramifications exist for automatic planning algorithms should they allow for user-defined beam directions. Additionally, these quasi-dead beams may be embedded in the libraries for model-based systems, potentially resulting in inefficient

  19. Are major reductions in new HIV infections possible with people who inject drugs? The case for low dead-space syringes in highly affected countries.

    Science.gov (United States)

    Zule, William A; Cross, Harry E; Stover, John; Pretorius, Carel

    2013-01-01

    Circumstantial evidence from laboratory studies, mathematical models, ecological studies and biobehavioural surveys suggests that injection-related HIV epidemics may be averted or reversed if people who inject drugs (PWID) switch from using high dead-space to using low dead-space syringes. In laboratory experiments that simulated the injection process and rinsing with water, low dead-space syringes retained 1,000 times less blood than high dead-space syringes. In mathematical models, switching PWID from high dead-space to low dead-space syringes prevents or reverses injection-related HIV epidemics. No one knows if such an intervention is feasible or what effect it would have on HIV transmission among PWID. Feasibility studies and randomized controlled trials (RCTs) will be needed to answer these questions definitively, but these studies will be very expensive and take years to complete. Rather than waiting for them to be completed, we argue for an approach similar to that used with needle and syringe programs (NSP), which were promoted and implemented before being tested more rigorously. Before implementation, rapid assessments that involve PWID will need to be conducted to ensure buy-in from PWID and other local stakeholders. This commentary summarizes the existing evidence regarding the protective effects of low dead-space syringes and estimates potential impacts on HIV transmission; it describes potential barriers to transitioning PWID from high dead-space to low dead-space needles and syringes; and it presents strategies for overcoming these barriers. Copyright © 2012 Elsevier B.V. All rights reserved.
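
    To make the direction of these modelling claims concrete, here is a deliberately crude prevalence sketch in Python. Only the roughly 1,000-fold difference in retained blood is taken from the text above; the per-event transmission probability, sharing frequency, starting prevalence and turnover rate are invented, so this illustrates the mechanism, not the published models.

      # Toy difference-equation model of HIV prevalence among PWID.
      # The per-sharing-event transmission probability is assumed to
      # scale with the blood volume retained in the syringe.
      def simulate_prevalence(p_transmit, years=20, sharing_per_year=50,
                              prevalence=0.10, turnover=0.05):
          for _ in range(years):
              # chance an uninfected injector is infected during one year
              risk = 1 - (1 - p_transmit * prevalence) ** sharing_per_year
              new_infections = (1 - prevalence) * risk
              prevalence = max(0.0, prevalence + new_infections - turnover * prevalence)
          return prevalence

      p_high = 0.005            # per-event probability, high dead-space (assumed)
      p_low = p_high / 1000     # scaled by the ~1000x lower blood retention
      print("high dead-space syringes:", round(simulate_prevalence(p_high), 3))
      print("low dead-space syringes: ", round(simulate_prevalence(p_low), 3))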

  20. The effects of a western spruce budworm outbreak on the dead wood component in relation to ownership in forests of eastern Oregon

    Science.gov (United States)

    David. Azuma

    2010-01-01

    Forest Inventory and Analysis data were used to investigate the effects of a severe western spruce budworm outbreak on the dead wood component of forests in 11 counties of eastern Oregon for two time periods. The ownership and the level of damage (as assessed by aerial surveys) affected the resulting down woody material and standing dead trees. The pattern of coarse...

  1. The use of time-of-flight camera for navigating robots in computer-aided surgery: monitoring the soft tissue envelope of minimally invasive hip approach in a cadaver study.

    Science.gov (United States)

    Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael

    2014-12-01

    Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P < .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.

  2. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...
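
    The record does not give the authors' offset equation, so the sketch below only illustrates the idea under an invented toy model: assume each phase injects its current into the neutral point while it dwells in the O state, and numerically pick the common time offset that minimizes the predicted neutral-point charge over one switching period. The dwell model, parameter values and function names are all assumptions.

      import numpy as np

      def dwell_in_O(turn_on, t_sw):
          # Toy model: the O-state dwell shrinks as the turn-on time
          # moves toward either end of the switching period.
          return np.clip(t_sw - 2.0 * np.abs(turn_on - t_sw / 2), 0.0, t_sw)

      def best_offset(turn_on, i_phase, t_sw, n_grid=201):
          offsets = np.linspace(-t_sw / 4, t_sw / 4, n_grid)
          q_np = [abs(np.sum(i_phase * dwell_in_O(turn_on + d, t_sw)))
                  for d in offsets]                  # predicted NP charge
          return offsets[int(np.argmin(q_np))]

      t_sw = 100e-6                                  # switching period [s]
      turn_on = np.array([30e-6, 55e-6, 70e-6])      # per-phase turn-on times [s]
      i_phase = np.array([8.0, -3.0, -5.0])          # phase currents [A]
      print(best_offset(turn_on, i_phase, t_sw))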

  3. Metagenome of a Versatile Chemolithoautotroph from Expanding Oceanic Dead Zones

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, David A.; Zaikova, Elena; Howes, Charles L.; Song, Young; Wright, Jody; Tringe, Susannah G.; Tortell, Philippe D.; Hallam, Steven J.

    2009-07-15

    Oxygen minimum zones (OMZs), also known as oceanic “dead zones”, are widespread oceanographic features currently expanding due to global warming and coastal eutrophication. Although inhospitable to metazoan life, OMZs support a thriving but cryptic microbiota whose combined metabolic activity is intimately connected to nutrient and trace gas cycling within the global ocean. Here we report time-resolved metagenomic analyses of a ubiquitous and abundant but uncultivated OMZ microbe (SUP05) closely related to chemoautotrophic gill symbionts of deep-sea clams and mussels. The SUP05 metagenome harbors a versatile repertoire of genes mediating autotrophic carbon assimilation, sulfur oxidation and nitrate respiration responsive to a wide range of water column redox states. Thus, SUP05 plays integral roles in shaping nutrient and energy flow within oxygen-deficient oceanic waters via carbon sequestration, sulfide detoxification and biological nitrogen loss, with important implications for marine productivity and atmospheric greenhouse control.

  4. Spiritual bonds to the dead in cross-cultural and historical perspective: comparative religion and modern grief.

    Science.gov (United States)

    Klass, D; Goss, R

    1999-09-01

    Contemporary spirituality within continuing bonds with the dead is placed into the comparative context of Western Christianity and Japanese Buddhism. Throughout history, humans have maintained interaction with two kinds of dead: ancestors and sacred dead, the first characterized by symmetrical relationships and the second by asymmetrical. Continuing bonds are deeply connected with, and are often in conflict with, bonds to the nation and (in the West) to God. In this framework, the authors find that continuing bonds in the present function within the private sphere and have very limited functions within the larger society, resemble traditional bonds with the sacred dead, and, at this time, offer a mild critique of the values and lifestyles on which consumer capitalism is based.

  5. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  6. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  7. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    DoD Logistics Initiative: Hexavalent Chromium Minimization and Non-Chrome Primers. Hexavalent chromium (Cr(VI)) is a cancer hazard. Management Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy.

  8. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical... between unparticle physics and Minimal Walking Technicolor. We also consider other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass.

  9. The prevalence and challenges of abandoned dead neonates in an ...

    African Journals Online (AJOL)

    parents'/caregivers' attitudes toward dead neonates. Hospital-based post-bereavement programs should be organized to ... Dead neonates at the Neonatal Intensive Care Units, Pediatric Emergency Department, Pediatric Surgical ...

  10. Dead wood inventory and assessment in South Korea

    Science.gov (United States)

    Jong-Su Yim; Rae Hyun Kim; Sun-Jeong Lee; Yeongmo. Son

    2015-01-01

    Dead wood (DW) plays a critical role not only in maintaining biodiversity but also in stocking carbon under the UNFCCC. From the 5th national forest inventory (NFI5; 2006-2010) in South Korea, field data relevant to DW, including standing and downed dead trees in four decay classes, were collected. Based on the NFI5 data,...

  11. Quantifying carbon stores and decomposition in dead wood: A review

    Science.gov (United States)

    Matthew B. Russell; Shawn Fraver; Tuomas Aakala; Jeffrey H. Gove; Christopher W. Woodall; Anthony W. D’Amato; Mark J. Ducey

    2015-01-01

    The amount and dynamics of forest dead wood (both standing and downed) have been quantified by a variety of approaches throughout the forest science and ecology literature. Differences in the sampling and quantification of dead wood can lead to differences in our understanding of forests and their role in the sequestration and emissions of CO2, as...

  12. Neither pre-operative education or a minimally invasive procedure have any influence on the recovery time after total hip replacement.

    Science.gov (United States)

    Biau, David Jean; Porcher, Raphael; Roren, Alexandra; Babinet, Antoine; Rosencher, Nadia; Chevret, Sylvie; Poiraudeau, Serge; Anract, Philippe

    2015-08-01

    The purpose of this study was to evaluate the effect of pre-operative education versus no education, and of mini-invasive surgery versus standard surgery, on the time to reach complete independence. We conducted a four-arm randomized controlled trial of 209 patients. The primary outcome criterion was the time to reach complete functional independence. Secondary outcomes included the operative time, the estimated total blood loss, the pain level, the dose of morphine, and the time to discharge. There was no significant effect of either education (HR: 1.1; P = 0.77) or mini-invasive surgery (HR: 1.0; 95 %; P = 0.96) on the time to reach complete independence. The mini-invasive surgery group had significantly reduced total estimated blood loss (P = 0.0035) and a decreased dose of morphine necessary for titration in recovery (P = 0.035). Neither pre-operative education nor mini-invasive surgery reduces the time to reach complete functional independence. Mini-invasive surgery significantly reduces blood loss and the need for morphine consumption.

  13. The Role of Compensation Criteria to Minimize Face-Time Bias and Support Faculty Career Flexibility: An Approach to Enhance Career Satisfaction in Academic Pathology.

    Science.gov (United States)

    Howell, Lydia Pleotis; Elsbach, Kimberly D; Villablanca, Amparo C

    2016-01-01

    Work-life balance is important to recruitment and retention of the younger generation of medical faculty, but medical school flexibility policies have not been fully effective. We have reported that our school's policies are underutilized due to faculty concerns about looking uncommitted to career or team. Since policies include leaves and accommodations that reduce physical presence, faculty may fear "face-time bias," which negatively affects evaluation of those not "seen" at work. Face-time bias is reported to negatively affect salary and career progress. We explored face-time bias on a leadership level and described development of compensation criteria intended to mitigate face-time bias, raise visibility, and reward commitment and contribution to team/group goals. Leaders from 6 partner departments participated in standardized interviews and group meetings. Ten compensation plans were analyzed, and published literature was reviewed. Leaders did not perceive face-time issues but saw team pressure and perception of availability as performance motivators. Compensation plans were multifactor productivity based with many quantifiable criteria; few addressed team contributions. Using these findings, novel compensation criteria were developed based on a published model to mitigate face-time bias associated with team perceptions. Criteria for organizational citizenship to raise visibility and reward group outcomes were included. We conclude that team pressure and perception of availability have the potential to lead to bias and may contribute to underuse of flexibility policies. Recognizing organizational citizenship and cooperative effort via specific criteria in a compensation plan may enhance a culture of flexibility. These novel criteria have been effective in one pilot department.

  14. The debate on minimal deterrence

    International Nuclear Information System (INIS)

    Arbatov, A.; Karp, R.C.; Toth, T.

    1993-01-01

    Revitalization of the debate on minimal nuclear deterrence at the present time is driven by the end of the Cold War and by a number of unilateral and bilateral actions by the great powers to curtail the nuclear arms race and reduce nuclear weapons arsenals

  15. Nutritive value and fermentation characteristics of alfalfa-mixed grass forage wrapped with minimal stretch film layers and stored for different lengths of time.

    Science.gov (United States)

    Coblentz, W K; Ogden, R K; Akins, M S; Chow, E A

    2017-07-01

    A key aspect of managing baled silages is to quickly achieve and then rigorously maintain anaerobic conditions within the silage mass. The concept of inserting an O2-limiting barrier (OB) into plastic commercial silage wraps has been evaluated previously, yielding mixed or inconclusive results. Our objective for this study was to maximize the challenge to a commercial polyethylene bale wrap, or the identical wrap containing an OB, by using minimal plastic (4 layers), and then extending storage periods as long as 357 d. Forty-eight 1.2 × 1.2-m large-round bales of alfalfa (Medicago sativa L.) and mixed grass forage (66.3 ± 8.66% alfalfa; DM basis) were made at 2 moisture concentrations [47.5 (ideal) or 36.1% (dry)], wrapped with 4 layers of plastic containing an OB or no OB, and then stored for 99, 243, or 357 d. After storage, yeast counts within the 0.15-m deep surface layer were not affected by treatment (mean = 5.85 log10 cfu/g); mold counts could not be analyzed statistically because 26 bales were nondetectable at a 3.00 log10 cfu/g detection limit, but means among detectable counts were numerically similar for OB (4.74 log10 cfu/g) and no OB (4.77 log10 cfu/g). Fermentation characteristics were most affected by initial bale moisture, resulting in a more acidic final pH for ideal compared with dry bales (5.52 vs. 6.00). This was facilitated by greater concentrations of total fermentation acids (3.80 vs. 1.45% of dry matter), lactic acid (2.24 vs. 0.71% of dry matter), and acetic acid (1.07 vs. 0.64% of dry matter) within ideal compared with dry silages. Plastic wrap type had no effect on final concentrations of any fermentation product. During fermentation and storage, we noted greater change in concentrations of fiber components and whole-plant ash within the 0.15-m deep surface layer than in the bale core, and these changes always differed statistically from 0 (no change) based on pre-ensiled baseline concentrations. Overall, concentrations of water

  16. 9 CFR 82.6 - Interstate movement of dead birds and dead poultry from a quarantined area.

    Science.gov (United States)

    2010-01-01

    ... provided in paragraph (b) of this section for dressed carcasses, dead birds and dead poultry, including any... poultry at the destination listed on the permit required by paragraph (a)(1) of this section. (b) Dressed... quarantined area only if: (1) The dressed carcasses are from birds or poultry that were slaughtered in a...

  17. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  18. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ

  19. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to the gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  20. Stratigraphy, climate and downhole logging data - an example from the ICDP Dead Sea deep drilling project

    Science.gov (United States)

    Coianiz, Lisa; Ben-Avraham, Zvi; Lazar, Michael

    2017-04-01

    During the late Quaternary, a series of lakes occupied the Dead Sea tectonic basin. The sediments that accumulated within these lakes preserved the environmental history (tectonic and climatic) of the basin and its vicinity. Most of the information on these lakes was deduced from exposures along the marginal terraces of the modern Dead Sea, e.g. the exposures of the last glacial Lake Lisan and the Holocene Dead Sea. The International Continental Scientific Drilling Program (ICDP) project conducted in the Dead Sea during 2010-2011 recovered several cores that were drilled in the deep depocenter of the lake (water depth of 300 m) and at the margin (depth of 3 m, offshore Ein Gedi spa). New high resolution logging data combined with a detailed lithological description and published age models for the deep 5017-1-A borehole were used to establish a sequence stratigraphic framework for the Lakes Amora, Samra, Lisan and Zeelim strata. This study presents a stratigraphic timescale for reconstructing the last ca 225 ka. It provides a context within which the timing of key sequence surfaces identified in the distal part of the basin can be mapped on a regional and stratigraphic time frame. In addition, it permitted the examination of depositional system tracts and the related driving mechanisms controlling their formation. The sequence stratigraphic model developed for the Northern Dead Sea Basin is based on the identification of sequence bounding surfaces including: sequence boundary (SB), transgressive surface (TS) and maximum flooding surface (MFS). These enabled the division of depositional sequences into Lowstand systems tracts (LST), Transgressive systems tracts (TST) and Highstand systems tracts (HST), which can be interpreted in terms of relative lake level changes. The analysis presented here shows that the system tract stacking patterns defined for the distal 5017-1-A borehole can be correlated to the proximal part of the basin, and widely supports the claim that changes in relative lake

  1. Study, development and validation of a dead-timeless electronic architecture concept for highly sensitive PET (Positron Emission Tomograph)

    International Nuclear Information System (INIS)

    Vert, P.E.

    2007-03-01

    Positron emission tomographs (PET) are fitted with highly capable readout electronics, which have both strengths and weaknesses. Although initially blamed for the poor sensitivity of these imagers, the present study shows that the dead times distributed along the chains contribute only 16 % of the data losses at a typical activity of 10 μCi/ml. The gross acquisition rates could thus be raised by only 20 % through a suppression of these saturations. Looking in detail at the acquisition procedure, another property turns out to limit the sensitivity even more: the timing resolution. The latter conditions, to first order, the rejection capability for random events and part of the scattered ones, and hence the noise that is ultimately weighed against the true coincidences making up the signal. Minimizing the resolving time requires suppressing the unneeded contributors along with adopting a well-adapted time-stamping method (optimal filtering). In doing so, the intrinsic channel resolution can be lowered by a factor of 7, reducing to 350 ps. The bottom value of the coincidence window may be narrowed as a consequence, leading to an increase of the NECR (noise equivalent count rate) by 50 per cent. At this stage, a time-of-flight (TOF) algorithm can be implemented; opportunistically, it promises a reduction of the noise variance by 430 %, a gain that echoes in the NECR figure. Merging all these ideas allows one to expect an improvement of close to an order of magnitude in the NECR, with the hope of routine exams shortened by the same amount. In this context, it appeared logical to design a new electronics acquisition architecture dedicated to fully pixelated PET. The number of channels blows up as a consequence compared to existing systems, a statement partially balanced by the decision to fully integrate the electronics. The measures of the energy and time are planned to be performed with a single channel, with a continuous
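
    The window-narrowing argument can be checked with standard coincidence-counting relations (textbook formulas, not equations from the thesis; all rates below are invented): the randoms rate scales with the coincidence window as R = 2·τ·r1·r2, and the noise equivalent count rate is NECR = T²/(T + S + k·R), so shrinking τ raises NECR.

      def randoms_rate(tau, r1, r2):
          # Accidental coincidences for a window of width tau [s]
          return 2.0 * tau * r1 * r2

      def necr(trues, scatters, randoms, k=1.0):
          # Noise equivalent count rate; k is the randoms-correction factor
          return trues ** 2 / (trues + scatters + k * randoms)

      r1 = r2 = 2.0e6            # singles rates [cps], assumed
      T, S = 8.0e4, 4.0e4        # trues and scatters [cps], assumed
      for tau in (6e-9, 3e-9, 0.35e-9):   # coincidence windows [s]
          R = randoms_rate(tau, r1, r2)
          print(f"tau = {tau * 1e9:5.2f} ns  randoms = {R:9.0f}  NECR = {necr(T, S, R):7.0f}")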

  2. Off-Stream Watering Systems and Partial Barriers as a Strategy to Maximize Cattle Production and Minimize Time Spent in the Riparian Area

    Directory of Open Access Journals (Sweden)

    Ashley A. Rawluk

    2014-10-01

    Full Text Available A study was conducted in 2009 at two locations in Manitoba (Killarney and Souris), Canada, to determine the impact of off-stream waterers (OSW) with or without natural barriers on (i) amount of time cattle spent in the 10 m buffer created within the riparian area, referred to as the riparian polygon (RP), (ii) watering location (OSW or stream), and (iii) animal performance measured as weight gain. This study was divided into three 28-day periods over the grazing season. At each location, the pasture, which ranged from 21.0 ha to 39.2 ha in size, was divided into three treatments: no OSW nor barriers (1CONT), OSW with barriers along the stream bank to deter cattle from watering at the stream (2BARR), and OSW without barriers (3NOBARR). Cattle in 2BARR spent less time in the RP in Periods 1 (p = 0.0002), 2 (p = 0.1116), and 3 (p < 0.0001) at the Killarney site compared to cattle in 3NOBARR at the same site. Cattle in 2BARR at the Souris site spent more time in the RP in Period 1 (p < 0.0001) and less time in Period 2 (p = 0.0002) compared to cattle in 3NOBARR. Cattle did use the OSW, but not exclusively, as watering at the stream was still observed. The observed inconsistency in the effectiveness of the natural barriers on deterring cattle from the riparian area between periods and locations may be partly attributable to the environmental conditions present during this field trial as well as difference in pasture size and the ability of the established barriers to deter cattle from using the stream as a water source. Treatment had no significant effect (p > 0.05) on cow and calf weights averaged over the summer period. These results indicate that the presence of an OSW does not create significant differences in animal performance when used in extensive pasture scenarios such as those studied within the present study. Whereas the barriers did not consistently discourage watering at the stream, the results provide some indication of the efficacy of the OSW as well

  3. Whittling Down the Wait Time: Exploring Models to Minimize the Delay from Initial Concern to Diagnosis and Treatment of Autism Spectrum Disorder.

    Science.gov (United States)

    Gordon-Lipkin, Eliza; Foster, Jessica; Peacock, Georgina

    2016-10-01

    The path from initial concern to diagnosis of autism spectrum disorder (ASD) can be long and complicated. The traditional model for evaluation and diagnosis of ASD often involves long wait-lists and evaluations, resulting in a 2-year gap between the earliest signs of ASD and the mean age of diagnosis. Multiple factors contribute to this diagnostic bottleneck, including time-consuming evaluations, cost of care, lack of providers, and lack of comfort of primary care providers to diagnose autism. This article explores innovative clinical models that have been implemented to address this, as well as future directions and opportunities. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Personal Identity and Resurrection from the Dead

    Directory of Open Access Journals (Sweden)

    Gasparov Igor

    2017-04-01

    Full Text Available The paper examines arguments of the “Christian materialist” Trenton Merricks that he provided in support of the claim that the Christian doctrine of resurrection from the dead is compatible with the materialist understanding of the nature of human beings. In his paper The Resurrection of the Body, Merricks discussed two aspects of the materialist interpretation of the traditional religious doctrine of the bodily resurrection. On the one hand, he analyses and tries to overcome objections against the possibility of the general resurrection in case the materialist understanding of the nature of human personality should be true (mainly the problem of the temporal gap. On the other hand, he provides some reasons why the materialist understanding of human nature is more relevant than its dualist counterpart to the doctrine of the bodily resurrection. The present paper evaluates his arguments and discusses the suggestion that the doctrine of resurrection is not only compatible with materialism, but is also tenable if human beings are identical with their physical bodies. The conclusion of the paper is that Merricks’ apologetic arguments achieve their aims in defending the doctrine of resurrection only partially; the resurrection doctrine appears more tenable if we accept the dualistic conception of human nature.

  5. Autopsies of the real: Resurrecting the dead

    Directory of Open Access Journals (Sweden)

    Valis, Noël

    2011-12-01

    Full Text Available The sense of the real, or the material—the dead body—as an inextricable part of the sacred does not disappear in the secular environment of the nineteenth and twentieth centuries. This article analyzes specific humanitarian narratives centered on the practice of autopsy and mummification, in which the traces of Catholicism act as a kind of spectral discourse of the imagination, where the real is configured in forms of the uncanny, the monstrous or the sacred.


  6. A high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen

    Science.gov (United States)

    Luo, ZhiJie; Luo, JianKun; Zhao, WenWen; Cao, Yang; Lin, WeiJie; Zhou, GuoFu

    2018-02-01

    Electrowetting display technology is realized by tuning the surface energy of a hydrophobic surface through an applied voltage, based on the electrowetting mechanism. With the rapid development of the electrowetting industry, efficiently assessing the quality of an electrowetting display screen has become very important. There are two kinds of dead pixels on an electrowetting display screen. One is a pixel whose oil cannot completely cover the display area; the other is a pixel whose indium tin oxide (ITO) wire connecting the pixel to the foil has burned out. In this paper, we propose a high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen. First, we built an aperture ratio-capacitance model based on the electrical characteristics of the electrowetting display. A field-programmable gate array is used as the integrated logic hub of the system for highly reliable and efficient control of the circuit. Dead pixels can be detected and displayed on a PC-based 2D graphical interface in real time. The dead pixel detection scheme reported in this work shows promise for automating electrowetting display experiments.
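
    A minimal sketch of how such an aperture ratio-capacitance model could be used to flag the two fault types, assuming a per-pixel capacitance map has already been measured. The threshold values, capacitances and names below are invented for illustration and are not from the paper.

      import numpy as np

      def classify_pixels(cap_map, c_open_wire=5e-12, c_full_oil=40e-12):
          # Burned ITO trace: the pixel presents almost no capacitance.
          dead_wire = cap_map < c_open_wire
          # Oil not covering the display area: capacitance below the
          # value expected for a fully covered pixel.
          bad_oil = (cap_map >= c_open_wire) & (cap_map < c_full_oil)
          return dead_wire, bad_oil

      cap_map = np.array([[42e-12, 2e-12],
                          [25e-12, 45e-12]])         # measured pixel capacitances [F]
      dead_wire, bad_oil = classify_pixels(cap_map)
      print("wire faults at:", np.argwhere(dead_wire).tolist())
      print("oil faults at: ", np.argwhere(bad_oil).tolist())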

  7. Bacteria associated with decomposing dead wood in a natural temperate forest.

    Science.gov (United States)

    Tláskal, Vojtech; Zrustová, Petra; Vrška, Tomáš; Baldrian, Petr

    2017-12-01

    Dead wood represents an important pool of organic matter in forests and is one of the sources of soil formation. It has been shown to harbour diverse communities of bacteria, but their roles in this habitat are still poorly understood. Here, we describe the bacterial communities in the dead wood of Abies alba, Picea abies and Fagus sylvatica in a temperate natural forest in Central Europe. An analysis of environmental factors showed that decomposition time, along with pH and water content, was the strongest driver of community composition. Bacterial biomass positively correlated with N content and increased with decomposition, along with the concurrent decrease in the fungal/bacterial biomass ratio. Rhizobiales and Acidobacteriales were abundant bacterial orders throughout the whole decay process, but many bacterial taxa were specific either for young (<15 years) or old dead wood. During early decomposition, bacterial genera able to fix N2 and to use simple C1 compounds (e.g. Yersinia and Methylomonas) were frequent, while wood in advanced decay was rich in taxa typical of forest soils (e.g. Bradyrhizobium and Rhodoplanes). Although the bacterial contribution to dead wood turnover remains unclear, the community composition appears to reflect the changing conditions of the substrate and suggests broad metabolic capacities of its members. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Dead Sea deep cores: A window into past climate and seismicity

    Science.gov (United States)

    Stein, Mordechai; Ben-Avraham, Zvi; Goldstein, Steven L.

    2011-12-01

    The area surrounding the Dead Sea was the locus of humankind's migration out of Africa and thus has been the home of peoples since the Stone Age. For this reason, understanding the climate and tectonic history of the region provides valuable insight into archaeology and studies of human history and helps to gain a better picture of future climate and tectonic scenarios. The deposits at the bottom of the Dead Sea are a geological archive of the environmental conditions (e.g., rains, floods, dust storms, droughts) during ice ages and warm ages, as well as of seismic activity in this key region. An International Continental Scientific Drilling Program (ICDP) deep drilling project was performed in the Dead Sea between November 2010 and March 2011. The project was funded by the ICDP and agencies in Israel, Germany, Japan, Norway, Switzerland, and the United States. Drilling was conducted using the new Large Lake Drilling Facility (Figure 1), a barge with a drilling rig run by DOSECC, Inc. (Drilling, Observation and Sampling of the Earth's Continental Crust), a nonprofit corporation dedicated to advancing scientific drilling worldwide. The main purpose of the project was to recover a long, continuous core to provide a high resolution record of the paleoclimate, paleoenvironment, paleoseismicity, and paleomagnetism of the Dead Sea Basin. With this, scientists are beginning to piece together a record of the climate and seismic history of the Middle East during the past several hundred thousand years in millennial to decadal to annual time resolution.

  9. Evolution of genomic diversity and sex at extreme environments: Fungal life under hypersaline Dead Sea stress

    Science.gov (United States)

    Kis-Papo, Tamar; Kirzhner, Valery; Wasser, Solomon P.; Nevo, Eviatar

    2003-01-01

    We have found that genomic diversity is generally positively correlated with abiotic and biotic stress levels (1–3). However, beyond a high-threshold level of stress, the diversity declines to a few adapted genotypes. The Dead Sea is the harshest planetary hypersaline environment (340 g·liter⁻¹ total dissolved salts, ≈10 times sea water). Hence, the Dead Sea is an excellent natural laboratory for testing the “rise and fall” pattern of genetic diversity with stress proposed in this article. Here, we examined genomic diversity of the ascomycete fungus Aspergillus versicolor from saline, nonsaline, and hypersaline Dead Sea environments. We screened the coding and noncoding genomes of A. versicolor isolates by using >600 AFLP (amplified fragment length polymorphism) markers (equal to loci). Genomic diversity was positively correlated with stress, culminating at the Dead Sea surface, but dropped drastically in 50- to 280-m-deep seawater. The genomic diversity pattern paralleled the pattern of sexual reproduction of fungal species across the same southward gradient of increasing stress in Israel. This parallel may suggest that diversity and sex are intertwined intimately according to the rise and fall pattern and adaptively selected by natural selection in fungal genome evolution. Future large-scale verification in micromycetes will define further the trajectories of diversity and sex in the rise and fall pattern. PMID:14645702

  10. Minimalism in the modern short story

    Directory of Open Access Journals (Sweden)

    A Razi

    2009-09-01

    Full Text Available The short story has become a focus of attention in recent decades in Iran. The growing appeal of short story writing is largely an outcome of the dominance of minimalism, a movement based on simplicity and brevity. Minimalist writers, leaving out redundant features of narration, focus on essentials through a variety of techniques, such as cutting to the interesting moments of real life, avoiding introductions, using inter-references, careful word choice, and short stanzas and sentences. Reviewing critics' opinions about this tendency past and present, this article offers a brief account of the properties of such stories. Finally, a sample story, "Candles Will Never Go Dead", is analyzed and discussed in light of these techniques.

  11. Minimization of spin-lattice relaxation time with highly viscous solvents for acquisition of natural abundance nitrogen-15 and silicon-29 nuclear magnetic resonance spectra

    International Nuclear Information System (INIS)

    Bammel, B.P.; Evilia, R.F.

    1982-01-01

    The use of high-viscosity solution conditions to decrease T₁ of ¹⁵N and ²⁹Si nuclei so that natural abundance NMR spectra can be acquired in reasonable times is illustrated. Significant T₁ decreases with negligible increases in peak width are observed. No spectral shifts are observed in any of the cases studied. Highly viscous solutions are produced by using glycerol as a solvent for water-soluble molecules and a mixed solvent consisting of toluene saturated with polystyrene for organic-soluble molecules. The microviscosity in the latter solvent is found to be much less than the observed macroviscosity. Hydrogen bonding of glycerol to the NH₂ group of 2-aminopyridine results in a greater than predicted decrease in T₁ for this nitrogen. The technique appears to be a useful alternative to paramagnetic relaxation reagents
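
    The viscosity effect can be illustrated with the textbook dipolar-relaxation picture rather than anything from the paper: the relaxation rate tracks the spectral density J(ω) = τc/(1 + ω²τc²), so lengthening the correlation time τc (raising viscosity) shortens T₁ on the fast-motion side of the T₁ minimum. The rate constant and frequencies below are arbitrary assumptions.

      import numpy as np

      def t1_dipolar(tau_c, omega0=2 * np.pi * 50e6, k=1e9):
          # Simplified like-spin dipolar form: 1/T1 = k * [J(w0) + 4*J(2*w0)]
          J = lambda w: tau_c / (1.0 + (w * tau_c) ** 2)
          return 1.0 / (k * (J(omega0) + 4.0 * J(2.0 * omega0)))

      for tau_c in (1e-11, 1e-10, 1e-9):   # increasing solution viscosity
          print(f"tau_c = {tau_c:.0e} s  ->  T1 ~ {t1_dipolar(tau_c):.2f} s")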

  12. Late-time cosmological evolution of a general class of f(R, T) gravity with minimal curvature-matter coupling

    Energy Technology Data Exchange (ETDEWEB)

    Shabani, Hamid [University of Sistan and Baluchestan, Physics Department, Faculty of Sciences, Zahedan (Iran, Islamic Republic of); Ziaie, Amir Hadi [Islamic Azad University, Department of Physics, Kahnooj Branch, Kerman (Iran, Islamic Republic of)

    2017-08-15

    In this work, we study the late-time cosmological solutions of f(R, T) = g(R) + h(-T) models assuming that the conservation of the energy-momentum tensor (EMT) is violated. We perform our analysis by constructing an autonomous dynamical system for the equations of motion. We study the stability properties of solutions by considering linear perturbations about the related equilibrium points. Moreover, we parameterize the Lagrangian by introducing the parameters m(r) and n(s). These parameters, which are constructed out of the functions g(R) and h(-T), play the main role in finding the late-time behavior of the solutions. We find that there exist, in general, three classes of solutions; all models with n > 0 include a proper transition from a prolonged matter era to a de Sitter solution. Models with -0.5 < n < 0 and n′ > 1, for at least one root of the equation n(s) = s - 1, include an unphysical dark energy solution preceding an improper matter era. Finally, for n < -1/2 there is a transient accelerated expansion era with -1/2 < w^(eff) < -1/3 before a de Sitter phase. For all cases, in order to have a long enough matter dominated epoch, the condition m′ → 0⁺ for r

  13. Off-Stream Watering Systems and Partial Barriers as a Strategy to Maximize Cattle Production and Minimize Time Spent in the Riparian Area.

    Science.gov (United States)

    Rawluk, Ashley A; Crow, Gary; Legesse, Getahun; Veira, Douglas M; Bullock, Paul R; González, Luciano A; Dubois, Melanie; Ominski, Kim H

    2014-10-29

    A study was conducted in 2009 at two locations in Manitoba (Killarney and Souris), Canada, to determine the impact of off-stream waterers (OSW) with or without natural barriers on (i) amount of time cattle spent in the 10 m buffer created within the riparian area, referred to as the riparian polygon (RP), (ii) watering location (OSW or stream), and (iii) animal performance measured as weight gain. This study was divided into three 28-day periods over the grazing season. At each location, the pasture, which ranged from 21.0 ha to 39.2 ha in size, was divided into three treatments: no OSW nor barriers (1CONT), OSW with barriers along the stream bank to deter cattle from watering at the stream (2BARR), and OSW without barriers (3NOBARR). Cattle in 2BARR spent less time in the RP in Periods 1 (p = 0.0002), 2 (p = 0.1116), and 3 (p < 0.0001) at the Killarney site compared to cattle in 3NOBARR at the same site. Cattle in 2BARR at the Souris site spent more time in the RP in Period 1 (p < 0.0001) and less time in Period 2 (p = 0.0002) compared to cattle in 3NOBARR. Cattle did use the OSW, but not exclusively, as watering at the stream was still observed. The observed inconsistency in the effectiveness of the natural barriers on deterring cattle from the riparian area between periods and locations may be partly attributable to the environmental conditions present during this field trial as well as difference in pasture size and the ability of the established barriers to deter cattle from using the stream as a water source. Treatment had no significant effect (p > 0.05) on cow and calf weights averaged over the summer period. These results indicate that the presence of an OSW does not create significant differences in animal performance when used in extensive pasture scenarios such as those studied within the present study. Whereas the barriers did not consistently discourage watering at the stream, the results provide some indication of the efficacy of the OSW as well as the natural barriers on deterring cattle from the riparian area.

  14. Investigation of the purging effect on a dead-end anode PEM fuel cell-powered vehicle during segments of a European driving cycle

    International Nuclear Information System (INIS)

    Gomez, Alberto; Sasmito, Agus P.; Shamim, Tariq

    2015-01-01

    Highlights: • Experimental study of a dead-end anode PEM fuel cell stack during a driving cycle. • Low purging duration is preferred at high current. • High purging frequency can sustain a better performance over time. • Lower cathode stoichiometry is preferred to minimize the parasitic loads. - Abstract: The dynamic performance of the PEM fuel cell is one of the key factors for successful operation of a fuel cell-powered vehicle. Maintaining a fast time response while keeping stable and high stack performance is important, especially during acceleration and deceleration. In this paper, we evaluate the transient response of a PEM fuel cell stack with a dead-end anode during segments of a legislated European driving cycle together with the effect of purging factors. The PEM fuel cell stack comprises 24 cells with a 300 cm² active catalyst area and operates at low hydrogen and air pressures. Humidified air is supplied to the cathode side and dry hydrogen is fed to the anode. The liquid coolant is circulated through the stack and the radiator to maintain the thermal envelope throughout the stack. Stack performance deterioration over time is prevented by purging, which removes the accumulated water and impurities. The effects of purging period, purging duration, coolant flow rate and cathode stoichiometry are examined with regard to the fuel cell’s transient performance during the driving cycle. The results show that a low purging duration may avoid the undesired deceleration at a high current, and a high purging period may sustain a better performance over time. Moreover, the coolant flow rate is found to be an important parameter, which affects the stack temperature–time response of the cooling control and the stack performance, especially at high operating currents.
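
    As a toy illustration of the purging trade-off (an invented model, not the authors' test procedure): track a crude estimate of accumulated liquid water in the dead-end anode and weigh purge period against hydrogen lost per purge. All coefficients, parameters and names are assumptions.

      def simulate_purging(i_stack, purge_period, purge_duration,
                           t_end=600.0, dt=0.1):
          k_water, k_purge, k_h2 = 1e-5, 0.05, 0.002   # assumed coefficients
          water, h2_lost, t_since, t = 0.0, 0.0, 0.0, 0.0
          while t < t_end:
              water += k_water * i_stack * dt          # accumulation ~ current
              t_since += dt
              if t_since >= purge_period:              # periodic purge
                  water = max(0.0, water - k_purge * purge_duration)
                  h2_lost += k_h2 * purge_duration     # fuel wasted per purge
                  t_since = 0.0
              t += dt
          return water, h2_lost

      for period in (20.0, 60.0):                      # purge period [s]
          water, h2 = simulate_purging(150.0, period, 0.5)
          print(f"period {period:5.1f} s -> residual water {water:.3f}, H2 lost {h2:.3f}")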

  15. Normalization with Corresponding Naïve Tissue Minimizes Bias Caused by Commercial Reverse Transcription Kits on Quantitative Real-Time PCR Results.

    Directory of Open Access Journals (Sweden)

    Andreas Garcia-Bardon

    Full Text Available Real-time reverse transcription polymerase chain reaction (PCR) is the gold standard for expression analysis. Designed to improve reproducibility and sensitivity, commercial kits are commonly used for the critical step of cDNA synthesis. The present study was designed to determine the impact of these kits. mRNA from mouse brains was pooled to create serial dilutions ranging from 0.0625 μg to 2 μg, which were transcribed into cDNA using four different commercial reverse-transcription kits. Next, we transcribed mRNA from the brain tissue of mice after acute brain injury and of naïve mice into cDNA for qPCR. Depending on the tested genes, some kits failed to show linear results in the dilution series and revealed strong variations in cDNA yield. Absolute expression data in naïve and trauma settings varied substantially between these kits. Normalization with a housekeeping gene failed to reduce kit-dependent variations, whereas normalization eliminated differences when naïve samples from the same region were used. The study provides strong evidence that the choice of commercial cDNA synthesis kit has a major impact on PCR results and, consequently, on comparability between studies. Additionally, it provides a solution to overcome this limitation by normalization with data from naïve samples. This simple step helps to compare mRNA expression data between different studies and groups.

  16. A new model predictive control algorithm by reducing the computing time of cost function minimization for NPC inverter in three-phase power grids.

    Science.gov (United States)

    Taheri, Asghar; Zhalebaghi, Mohammad Hadi

    2017-11-01

    This paper presents a new control strategy based on finite-control-set model-predictive control (FCS-MPC) for neutral-point-clamped (NPC) three-level converters. Advantages such as fast dynamic response, easy inclusion of constraints and a simple control loop make FCS-MPC attractive as a switching strategy for converters. However, the large amount of required calculation hinders the widespread use of this method. To resolve this problem, this paper presents a modified method that effectively reduces the computational load compared with the conventional FCS-MPC method while leaving the control performance unaffected. The proposed method can be used for exchanging power between the electrical grid and DC resources by providing active and reactive power compensation. Experiments on a three-level converter in three modes (power factor correction (PFC), inductive compensation and capacitive compensation) verify the good and comparable performance. The results have been simulated using MATLAB/SIMULINK software. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
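
    For context, here is the conventional FCS-MPC iteration whose computational load the paper sets out to reduce: enumerate all 27 switching states of the three-level NPC converter, predict the next-step current for each, and keep the cheapest state. This is a generic sketch, not the authors' reduced algorithm; the prediction model, cost weights and parameter values are assumptions.

      import itertools
      import numpy as np

      LEVELS = (-1, 0, 1)                    # N, O, P per phase
      STATES = list(itertools.product(LEVELS, repeat=3))   # 27 states

      def predict_current(i_now, v_state, v_grid, ts=1e-4, L=5e-3, R=0.1):
          # One-step Euler prediction of the load current
          return i_now + ts / L * (v_state - v_grid - R * i_now)

      def choose_state(i_now, i_ref, v_grid, vdc=600.0, lam_np=0.05, v_np=5.0):
          best, best_cost = None, np.inf
          for s in STATES:
              v_state = np.array(s) * vdc / 2
              i_pred = predict_current(i_now, v_state, v_grid)
              # Cost: current tracking plus a crude neutral-point penalty
              # that discourages O states while the NP voltage is unbalanced
              cost = np.sum((i_ref - i_pred) ** 2) + lam_np * abs(v_np) * s.count(0)
              if cost < best_cost:
                  best, best_cost = s, cost
          return best

      print(choose_state(np.zeros(3), np.array([10.0, -5.0, -5.0]), np.zeros(3)))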

  17. Are We the Walking Dead? Burnout as Zombie Apocalypse.

    Science.gov (United States)

    Doolittle, Benjamin R

    2016-11-01

    The Walking Dead, one of the most popular television shows in recent history, uses the plot of a zombie apocalypse as a lens into exploring the human condition. Amidst a particularly dangerous moment, the show's hero references the human struggle to survive by remarking, "We are the walking dead." This offhand comment sheds light upon physicians' struggles in medicine, in particular the high prevalence of burnout and the challenge to cultivate compassion and meaning. This is an important question for our age and for our profession. Are we the walking dead? © 2016 Annals of Family Medicine, Inc.

  18. Development of a Real-Time PCR Protocol Requiring Minimal Handling for Detection of Vancomycin-Resistant Enterococci with the Fully Automated BD Max System.

    Science.gov (United States)

    Dalpke, Alexander H; Hofko, Marjeta; Zimmermann, Stefan

    2016-09-01

    Vancomycin-resistant enterococci (VRE) are an important cause of health care-associated infections, resulting in significant mortality and a significant economic burden in hospitals. Active surveillance for at-risk populations contributes to the prevention of infections with VRE. The availability of a combination of automation and molecular detection procedures for rapid screening would be beneficial. Here, we report on the development of a laboratory-developed PCR for detection of VRE which runs on the fully automated Becton Dickinson (BD) Max platform, which combines DNA extraction, PCR setup, and real-time PCR amplification. We evaluated two protocols: one using a liquid master mix and the other employing commercially ordered dry-down reagents. The BD Max VRE PCR was evaluated in two rounds with 86 and 61 rectal elution swab (eSwab) samples, and the results were compared to the culture results. The sensitivities of the different PCR formats were 84 to 100% for vanA and 83.7 to 100% for vanB; specificities were 96.8 to 100% for vanA and 81.8 to 97% for vanB. The use of dry-down reagents and the ExK DNA-2 kit for extraction showed that the samples were less inhibited (3.3%) than they were by the use of the liquid master mix (14.8%). Adoption of a cutoff threshold cycle of 35 for discrimination of vanB-positive samples allowed an increase of specificity to 87.9%. The performance of the BD Max VRE assay equaled that of the BD GeneOhm VanR assay, which was run in parallel. The use of dry-down reagents simplifies the assay and omits any need to handle liquid PCR reagents. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
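
    The threshold-cycle rule is simple enough to state as code; the sketch below (function and variable names invented) just applies the cutoff described above, treating late crossings as nonspecific.

      def classify_vanb(ct, cutoff=35.0):
          # vanB-positive only if amplification crossed threshold by cycle 35;
          # ct is None when no amplification was detected.
          return ct is not None and ct <= cutoff

      for ct in (28.4, 36.2, None):
          label = "vanB positive" if classify_vanb(ct) else "negative"
          print(ct, "->", label)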

  19. Minimizing Exposure at Work

    Science.gov (United States)

    Pesticide health and safety information on safe use practices for minimizing exposure at work, including personal protective equipment (Pennsylvania State University Cooperative Extension).

  20. Minimalism. Clip and Save.

    Science.gov (United States)

    Hubbard, Guy

    2002-01-01

    Provides background information on the art movement called "Minimalism" discussing why it started and its characteristics. Includes learning activities and information on the artist, Donald Judd. Includes a reproduction of one of his art works and discusses its content. (CMK)

  1. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫ (H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  2. Minimal and careful processing

    OpenAIRE

    Nielsen, Thorkild

    2004-01-01

    In several standards, guidelines and publications, organic food processing is strongly associated with "minimal processing" and "careful processing". The term "minimal processing" is nowadays often used in the general food processing industry and described in literature. The term "careful processing" is used more specifically within organic food processing but is not yet clearly defined. The concept of carefulness seems to fit very well with the processing of organic foods, especially if it i...

  3. Dead zone area at the downstream flow of barrages

    Directory of Open Access Journals (Sweden)

    Mohamed F. Sauida

    2016-12-01

    Full Text Available Flow separation is a natural phenomenon encountered in some cases downstream of barrages. The main flow is divided into current and dead zone flows. The percentage area of the dead zone flow must be taken into consideration downstream of barrages, due to its negative effect on flow characteristics. Experimental studies were conducted in the Hydraulic Research Institute (HRI) on a physical regulator model with five vents. Theoretically the separation zone is described as part of an ellipse, which is verified in practice by plotting velocity vectors. The results show that the percentage area of the dead zone, relative to the flow area along the length of separation, depends mainly on the expansion ratio [channel width to width of opened vents], with a maximum value of 81% for operated side gates. A statistical analysis was derived to predict the percentage area of dead zone flow relative to the flow area along the length of separation.

  4. Using stochastic cell division and death to probe minimal units of cellular replication

    Science.gov (United States)

    Chib, Savita; Das, Suman; Venkatesan, Soumya; Sai Narain Seshasayee, Aswin; Thattai, Mukund

    2018-03-01

    The invariant cell initiation mass measured in bacterial growth experiments has been interpreted as a minimal unit of cellular replication. Here we argue that the existence of such minimal units induces a coupling between the rates of stochastic cell division and death. To probe this coupling we tracked live and dead cells in Escherichia coli populations treated with a ribosome-targeting antibiotic. We find that the growth exponent from macroscopic cell growth or decay measurements can be represented as the difference of microscopic first-order cell division and death rates. The boundary between cell growth and decay, at which the number of live cells remains constant over time, occurs at the minimal inhibitory concentration (MIC) of the antibiotic. This state appears macroscopically static but is microscopically dynamic: division and death rates exactly cancel at MIC but each is remarkably high, reaching 60% of the antibiotic-free division rate. A stochastic model of cells as collections of minimal replicating units we term ‘widgets’ reproduces both steady-state and transient features of our experiments. Sub-cellular fluctuations of widget numbers stochastically drive each new daughter cell to one of two alternate fates, division or death. First-order division or death rates emerge as eigenvalues of a stationary Markov process, and can be expressed in terms of the widget’s molecular properties. High division and death rates at MIC arise due to low mean and high relative fluctuations of widget number. Isolating cells at the threshold of irreversible death might allow molecular characterization of this minimal replication unit.
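    The growth-exponent decomposition described above can be illustrated with a standard birth-death (Gillespie) simulation, sketched below. The rates and population size are illustrative assumptions, not the paper's fitted widget model; the point is only that when first-order division and death rates cancel, the population is macroscopically static yet microscopically dynamic.

```python
import random

# Sketch: stochastic first-order division/death (Gillespie algorithm).
# At MIC the two rates cancel; the population is static on average but
# each cell still divides or dies at a high rate. Rates are illustrative.

def gillespie(n0, k_div, k_death, t_end):
    t, n, trajectory = 0.0, n0, [(0.0, n0)]
    while t < t_end and n > 0:
        total = (k_div + k_death) * n          # total event rate
        t += random.expovariate(total)         # waiting time to next event
        n += 1 if random.random() < k_div / (k_div + k_death) else -1
        trajectory.append((t, n))
    return trajectory

# At MIC, division and death cancel: net exponent k_div - k_death = 0.
traj = gillespie(n0=1000, k_div=0.6, k_death=0.6, t_end=10.0)
```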

  5. The dead donor rule, voluntary active euthanasia, and capital punishment.

    Science.gov (United States)

    Coons, Christian; Levin, Noah

    2011-06-01

    We argue that the dead donor rule, which states that multiple vital organs should only be taken from dead patients, is justified neither in principle nor in practice. We use a thought experiment and a guiding assumption in the literature about the justification of moral principles to undermine the theoretical justification for the rule. We then offer two real world analogues to this thought experiment, voluntary active euthanasia and capital punishment, and argue that the moral permissibility of terminating any patient through the removal of vital organs cannot turn on whether or not the practice violates the dead donor rule. Next, we consider practical justifications for the dead donor rule. Specifically, we consider whether there are compelling reasons to promulgate the rule even though its corresponding moral principle is not theoretically justified. We argue that there are no such reasons. In fact, we argue that promulgating the rule may actually decrease public trust in organ procurement procedures and medical institutions generally - even in states that do not permit capital punishment or voluntary active euthanasia. Finally, we examine our case against the dead donor rule in the light of common arguments for it. We find that these arguments are often misplaced - they do not support the dead donor rule. Instead, they support the quite different rule that patients should not be killed for their vital organs.

  6. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented, following a format similar to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used, and which were most valuable, in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Each waste stream was then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal treated as the last viable alternative.

  7. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    International Nuclear Information System (INIS)

    Urniezius, Renaldas

    2011-01-01

    The principle of Maximum relative Entropy optimization was analyzed for dead reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency within the time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency within the time series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.
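    For context, the sketch below shows the naive baseline that such filtering improves upon: dead reckoning by direct double integration of accelerometer samples, where unfiltered bias and noise make the position estimate drift quadratically with time. The signal model and numbers are assumptions for illustration; the entropy-based filtering of the paper is not reproduced.

```python
import numpy as np

# Sketch of naive dead reckoning by double integration of accelerometer
# samples (trapezoidal rule). Without filtering, bias and noise accumulate
# into quadratic position drift. Sample data below are synthetic.

def dead_reckon(acc, dt, v0=0.0, x0=0.0):
    """Integrate acceleration -> velocity -> position."""
    v = v0 + np.cumsum((acc[:-1] + acc[1:]) / 2.0) * dt
    v = np.concatenate(([v0], v))
    x = x0 + np.cumsum((v[:-1] + v[1:]) / 2.0) * dt
    return np.concatenate(([x0], x))

t = np.arange(0, 5, 0.01)
acc = 0.2 * np.ones_like(t) + np.random.normal(0, 0.05, t.size)  # biased, noisy
pos = dead_reckon(acc, dt=0.01)
```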

  8. Minimizing time for test in integrated circuit

    OpenAIRE

    Andonova, A. S.; Dimitrov, D. G.; Atanasova, N. G.

    2004-01-01

    The cost of testing integrated circuits represents a growing percentage of their total production cost. The former depends strictly on the length of the test session, and its reduction has been the target of many efforts in the past. This paper proposes a new method for reducing the test length by adopting a new architecture and exploiting an evolutionary optimisation algorithm. A prototype of the proposed approach was tested on ISCAS standard benchmarks and the experimental results s...
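    As an illustration of the evolutionary idea, the sketch below evolves a test sequence toward minimal length under a coverage constraint. The coverage() oracle is a stub standing in for a fault simulator, and the encoding, operators and parameters are assumptions; the paper's architecture and the ISCAS circuits are not modeled.

```python
import random

# Sketch of an evolutionary search for a shorter test sequence. coverage()
# is a stub (unique patterns ~ detected faults) standing in for a fault
# simulator; all parameters are illustrative assumptions.

def coverage(seq):
    return len(set(seq)) / 100.0

def fitness(seq, target=0.95):
    # Shorter is better, but only if the coverage constraint still holds.
    return -len(seq) if coverage(seq) >= target else -10_000

def evolve(pop, generations=200):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = []
        for parent in survivors:
            child = parent[:]                  # mutate: drop or swap a pattern
            if len(child) > 1 and random.random() < 0.5:
                child.pop(random.randrange(len(child)))
            else:
                child[random.randrange(len(child))] = random.randrange(200)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

pop = [[random.randrange(200) for _ in range(160)] for _ in range(20)]
best = evolve(pop)
```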

  9. MINIMIZATION OF RETRIEVAL TIME DURING SOFTWARE REUSE ...

    African Journals Online (AJOL)

    eobe

    The abstract tabulates repository versions (1.9, 2.1, 2.2, ...) of the Java Game Maker (JGM) game engine for developing Java games, together with the number of classifiers in class diagrams, the number of sequence diagrams, and the number of messages in all sequence diagrams; code-based sizing metrics are used to estimate reuse effort. The formula employed.

  10. Reduction of the duration of the natural dead time of a 4 {pi} gas ionization counter operating in geiger-muller; Reduction de la duree de l'etat d'insensibilite naturelle d'un compteur a ionisation gazeuse de geometrie 4 {pi} travaillant en regime de Geiger-Muller

    Energy Technology Data Exchange (ETDEWEB)

    Becker, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires, Departement d' Electronique Generale, Laboratoire de Mesure des Radioelements

    1967-07-01

    It is only possible to benefit from the 100 per cent efficiency of a 4π gas-ionisation detector operating in the Geiger-Muller regime (an efficiency which always appears difficult to attain with other detectors) if one is able, on the one hand, to reduce considerably and, on the other hand, to fix very precisely the length of the real dead time of the system operating in the above conditions. Taking into account both the phenomena characteristic of the discharge in the regime under consideration, whose properties are described, and the geometrical conditions arising from operation over a solid angle of 4π, it is shown that with an external electronic system cutting off the discharge in, for example, 30 nanoseconds, absolute 4π G.M. measurements would become competitive with those now carried out almost exclusively in proportional conditions. Measurement results on sources of 5 × 10³ emissions per second maximum, obtained with a system for which the interval of insensitivity following the passage of a particle is still 60 nanoseconds, have made it possible to confirm these statements. (author)
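    A quick back-of-envelope check of why a 60 ns insensitive interval makes such measurements competitive, assuming the simple non-paralyzable dead-time model (an assumption of this sketch, not the author's analysis):

```python
# Non-paralyzable dead-time model (assumed): measured rate m relates to
# true rate n by m = n / (1 + n*tau), so the fractional counting loss is
# n*tau / (1 + n*tau).

tau = 60e-9        # insensitive interval per event, s (from the abstract)
n = 5e3            # true emission rate, s^-1 (from the abstract)

m = n / (1 + n * tau)
loss = 1 - m / n
print(f"fractional counting loss: {loss:.2e}")   # ~3e-4, i.e. about 0.03%
```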

  12. Minimal quantization and confinement

    International Nuclear Information System (INIS)

    Ilieva, N.P.; Kalinowskij, Yu.L.; Nguyen Suan Han; Pervushin, V.N.

    1987-01-01

    A ''minimal'' version of the Hamiltonian quantization based on the explicit solution of the Gauss equation and on the gauge-invariance principle is considered. Using the example of the one-particle Green function, we show that the requirement of gauge invariance leads to relativistic covariance of the theory and to a more proper definition of the Faddeev-Popov integral that does not depend on the gauge choice. The ''minimal'' quantization is applied to the gauge-ambiguity problem and a new topological mechanism of confinement

  13. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the u...

  14. Minimalism and Speakers’ Intuitions

    Directory of Open Access Journals (Sweden)

    Matías Gariazzo

    2011-08-01

    Full Text Available Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

  15. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  16. 3D Imaging of Dead Sea Area Using Weighted Multipath Summation: A Case Study

    Directory of Open Access Journals (Sweden)

    Shemer Keydar

    2013-01-01

    Full Text Available The formation of sinkholes along the Dead Sea is caused by the rapid decline of the Dead Sea level, possibly as a result of extensive human activity. According to one of the geological models, the sinkholes in several sites are clustered along a narrow coastal strip developing along lineaments representing faults in the NNW direction. In order to understand the relationship between a developing sinkhole and its tectonic environment, a high-resolution (HR) three-dimensional (3D) seismic reflection survey was carried out at the western shoreline of the Dead Sea. A recently developed 3D imaging approach was applied to this 3D dataset. Imaging of the subsurface is performed by a spatial summation of seismic waves along time surfaces using the recently proposed multipath summation with proper weights. The multipath summation is performed by stacking the target waves along all possible time surfaces having a common apex at the given point. This approach does not require any explicit information on parameters, since the multipath summation is performed for all possible parameter values within a wide specified range. The processed 3D time volume shows subhorizontal coherent reflectors at an approximate depth of 50–80 m which incline closer to the exposed sinkhole, suggesting a possible linkage between the revealed fault and the sinkholes.
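    The sketch below illustrates the stacking idea in its simplest form: the amplitude at an image point is a weighted sum of stacks along many candidate time surfaces (here plain hyperbolic moveouts over a wide velocity range), with a semblance-like coherence measure as the weight. The geometry, weight definition and apex handling are simplified assumptions, not the published scheme.

```python
import numpy as np

# Sketch of weighted multipath summation: stack traces along many candidate
# time surfaces and weight each path by a coherence (semblance-like) measure.
# Hyperbolic moveout and the weight definition are illustrative placeholders.

def multipath_image_point(gather, offsets, t0, dt, velocities):
    """gather: (n_traces, n_samples); returns a weighted stack at time t0."""
    num, den = 0.0, 0.0
    for v in velocities:                          # all candidate surfaces
        t = np.sqrt(t0**2 + (offsets / v) ** 2)   # hyperbolic traveltime
        idx = np.clip((t / dt).astype(int), 0, gather.shape[1] - 1)
        amps = gather[np.arange(gather.shape[0]), idx]
        w = amps.sum() ** 2 / (len(amps) * (amps**2).sum() + 1e-12)  # semblance
        num += w * amps.sum()
        den += w
    return num / (den + 1e-12)
```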

  17. New perspectives on interdisciplinary earth science at the Dead Sea: The DESERVE project.

    Science.gov (United States)

    Kottmeier, Christoph; Agnon, Amotz; Al-Halbouni, Djamil; Alpert, Pinhas; Corsmeier, Ulrich; Dahm, Torsten; Eshel, Adam; Geyer, Stefan; Haas, Michael; Holohan, Eoghan; Kalthoff, Norbert; Kishcha, Pavel; Krawczyk, Charlotte; Lati, Joseph; Laronne, Jonathan B; Lott, Friederike; Mallast, Ulf; Merz, Ralf; Metzger, Jutta; Mohsen, Ayman; Morin, Efrat; Nied, Manuela; Rödiger, Tino; Salameh, Elias; Sawarieh, Ali; Shannak, Benbella; Siebert, Christian; Weber, Michael

    2016-02-15

    The Dead Sea region has faced substantial environmental challenges in recent decades, including water resource scarcity, ~1 m annual decreases in the water level, sinkhole development, ascending-brine freshwater pollution, and seismic disturbance risks. Natural processes are significantly affected by human interference as well as by climate change and tectonic developments over the long term. To gain a deep understanding of processes and their interactions, innovative scientific approaches that integrate disciplinary research and education are required. The research project DESERVE (Helmholtz Virtual Institute Dead Sea Research Venue) addresses these challenges in an interdisciplinary approach that includes geophysics, hydrology, and meteorology. The project is implemented by a consortium of scientific institutions in neighboring countries of the Dead Sea (Israel, Jordan, Palestinian Territories) and participating German Helmholtz Centres (KIT, GFZ, UFZ). A new monitoring network of meteorological, hydrological, and seismic/geodynamic stations has been established, and extensive field research and numerical simulations have been undertaken. For the first time, innovative measurement and modeling techniques have been applied to the extreme conditions of the Dead Sea and its surroundings. The preliminary results show the potential of these methods. Eddy covariance measurements, performed here for the first time, give insight into the governing factors of Dead Sea evaporation. High-resolution bathymetric investigations reveal a strong correlation between submarine springs and neo-tectonic patterns. Based on detailed studies of stratigraphy and borehole information, the extent of the subsurface drainage basin of the Dead Sea is now reliably estimated. A further novelty is the monitoring of flash floods in an arid basin at its outlet and simultaneously in its tributaries, supplemented by spatio-temporal rainfall data. Low-altitude, high resolution photogrammetry, allied to

  18. Minimal model holography

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R; Gopakumar, Rajesh

    2013-01-01

    We review the duality relating 2D W_N minimal model conformal field theories, in a large-N 't Hooft-like limit, to higher spin gravitational theories on AdS_3. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Higher spin theories and holography’. (review)

  19. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  20. Hazardous waste minimization

    International Nuclear Information System (INIS)

    Freeman, H.

    1990-01-01

    This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect

  1. Minimally invasive distal pancreatectomy

    NARCIS (Netherlands)

    Røsok, Bård I.; de Rooij, Thijs; van Hilst, Jony; Diener, Markus K.; Allen, Peter J.; Vollmer, Charles M.; Kooby, David A.; Shrikhande, Shailesh V.; Asbun, Horacio J.; Barkun, Jeffrey; Besselink, Marc G.; Boggi, Ugo; Conlon, Kevin; Han, Ho Seong; Hansen, Paul; Kendrick, Michael L.; Kooby, David; Montagnini, Andre L.; Palanivelu, Chinnasamy; Wakabayashi, Go; Zeh, Herbert J.

    2017-01-01

    The first International conference on Minimally Invasive Pancreas Resection was arranged in conjunction with the annual meeting of the International Hepato-Pancreato-Biliary Association (IHPBA), in Sao Paulo, Brazil on April 19th 2016. The presented evidence and outcomes resulting from the session

  2. Minimal DBM Substraction

    DEFF Research Database (Denmark)

    David, Alexandre; Håkansson, John; G. Larsen, Kim

    In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone and thus requires splitting. It is of prime importance to reduce

  3. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  5. Options for reducing HIV transmission related to the dead space in needles and syringes.

    Science.gov (United States)

    Zule, William A; Pande, Poonam G; Otiashvili, David; Bobashev, Georgiy V; Friedman, Samuel R; Gyarmathy, V Anna; Des Jarlais, Don C

    2018-01-15

    When shared by people who inject drugs, needles and syringes with different dead space may affect the probability of HIV and hepatitis C virus (HCV) transmission differently. We measured dead space in 56 needle and syringe combinations obtained from needle and syringe programs across 17 countries in Europe and Asia. We also calculated the amounts of blood and HIV that would remain in different combinations following injection and rinsing. Syringe barrel capacities ranged from 0.5 to 20 mL. Needles ranged in length from 8 to 38 mm. The average dead space was 3 μL in low dead space syringes with permanently attached needles, 13 μL in high dead space syringes with low dead space needles, 45 μL in low dead space syringes with high dead space needles, and 99 μL in high dead space syringes with high dead space needles. Among low dead space designs, calculated volumes of blood and HIV viral burden were lowest for low dead space syringes with permanently attached needles and highest for low dead space syringes with high dead space needles. The dead space in different low dead space needle and syringe combinations varied substantially. To reduce HIV transmission related to syringe sharing, needle and syringe programs need to combine this knowledge with the needs of their clients.
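    The practical significance of these volume differences is easy to see with a little arithmetic, sketched below for a hypothetical viral load (the figure is an assumption for illustration; the paper's own rinsing calculations are not reproduced):

```python
# Illustrative arithmetic only: HIV RNA copies retained in the measured
# dead spaces if they were filled with blood at a *hypothetical* viral
# load. Dead-space volumes are the averages quoted in the abstract.

viral_load = 50_000            # copies per mL, hypothetical assumption
dead_space_uL = {
    "LDS syringe, fixed needle": 3,
    "HDS syringe + LDS needle": 13,
    "LDS syringe + HDS needle": 45,
    "HDS syringe + HDS needle": 99,
}
for combo, uL in dead_space_uL.items():
    copies = viral_load * uL / 1000.0    # microlitres -> millilitres
    print(f"{combo}: ~{copies:.0f} copies retained")
```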

  6. Effects of dead load on ductility of a floor system

    International Nuclear Information System (INIS)

    Fujisaki, E.; Sarkar, B.E.; Ho, H.; Reed, J.W.

    1993-01-01

    In seismic margin or seismic fragility calculations, the ductility scale factor Fμ is often used to quantify the effect of inelastic energy absorption on structural capacity. In concept, the ductility scale factor can be thought of as a response spectrum reduction factor. For a given ductile structural element and input response spectrum, the product of Fμ and the factor of safety against yield (Fs) provides a measure of the total factor of safety against failure (F). Testing and analytical studies by others have shown that structures such as shear walls and building frames (mounted vertically) subjected to horizontal input motions are capable of absorbing earthquake energy through inelastic behavior. Kennedy, 1984, Riddell, 1979, and Reed, 1991 studied the ductility scale factor and developed simplified procedures through the use of nonlinear analyses. For floor systems (mounted horizontally), we are mainly interested in the response to vertical input motions. Because of the constant downward pull of gravity, the nonlinear displacement of a floor structure is biased downward. This ratcheting phenomenon reduces the ductility scale factor for a horizontal element compared to the case where the same element is mounted vertically and is subjected to horizontal input motion. Through the use of nonlinear time history analyses, we investigated the effects of dead loads on the ductility scale factor of floor systems. We also developed a simple modification to the Riddell-Newmark procedure (Riddell, 1979), which is used to calculate the ductility scale factor for vertically mounted elements, to determine Fμ for horizontally mounted elements.
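    The stated relation is simply multiplicative, F = Fs · Fμ, so the downward-ratcheting penalty propagates directly into the total factor of safety. A worked illustration with assumed numbers (none taken from the paper):

```python
# Worked example of the stated relation F = F_s * F_mu. All values are
# illustrative assumptions, not results from the paper.

F_s = 1.5         # factor of safety against yield (assumed)
F_mu_wall = 2.0   # ductility scale factor, vertically mounted element (assumed)
F_mu_floor = 1.4  # reduced by downward ratcheting under dead load (assumed)

print("wall  F =", F_s * F_mu_wall)   # 3.0
print("floor F =", F_s * F_mu_floor)  # 2.1 -> lower total factor of safety
```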

  7. Dead Sea mud packs for chronic low back pain.

    Science.gov (United States)

    Abu-Shakra, Mahmoud; Mayer, Amit; Friger, Michael; Harari, Marco

    2014-09-01

    Low back pain (LBP) is a chronic disease without a curative therapy. Alternative and complementary therapies are widely used in the management of this condition. The aim was to evaluate the efficacy of home application of Dead Sea mud compresses to the back of patients with chronic LBP. Forty-six consecutive patients suffering from chronic LBP were recruited. All patients were followed at the Soroka University Rheumatic Diseases Unit. The patients were randomized into two groups: one group was treated with mineral-rich mud compresses, and the other with mineral-depleted compresses. Mud compresses were applied five times a week for 3 consecutive weeks. The primary outcome was the patient's assessment of overall back pain severity. The score of the Roland & Morris questionnaire served as a secondary outcome. Forty-four patients completed the therapy and the follow-up assessments: 32 were treated with real mud packs and 12 used the mineral-depleted packs. A significant decrease in the intensity of pain, as described by the patients, was observed only in the treatment group. In this group, clinical improvement was clearly seen at completion of therapy and was sustained a month later. Significant improvement in the scores of the Roland & Morris questionnaire was observed in both groups. The data suggest that pain severity was reduced in patients treated with mineral-rich mud compresses compared with those treated with mineral-depleted compresses. Whether this modest effect is the result of a "true" mud effect or of other causes cannot be determined in this study.

  8. Exotic primitivism of death in classical Hollywood living dead films

    Directory of Open Access Journals (Sweden)

    Outi Hakola

    2012-11-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2012n62p219 Classical Hollywood horror exhibited undead monsters, such as mummies, zombies and vampires, at a time when cultural practices of death and dying were changing in the United States. Consequently, the way death is handled in these films is connected to the ongoing marginalization of death. In the classical films, heroes represent modern, medicalized, scientific and marginalized death. In contrast, the undead represent traditional, or irrational and magical, death. When the heroes hunt down and kill the monsters, they also claim the superiority of modern death. Furthermore, the exclusion of traditional death is highlighted by the use of (post)colonial tensions. The non-western monsters and realm of the world stand for traditional death and the past, whereas western heroes represent modern death and the future. This article concentrates on how the classical living dead films narrate the cultural tension between the waning (traditional) and emerging (modern) practices of death.

  9. Drivers of CO2 Emission Rates from Dead Wood Logs of 13 Tree Species in the Initial Decomposition Phase

    Directory of Open Access Journals (Sweden)

    Tiemo Kahl

    2015-07-01

    Full Text Available Large dead wood is an important structural component of forest ecosystems and a main component of forest carbon cycles. CO2 emissions from dead wood can be used as a proxy for actual decomposition rates. The main drivers of CO2 emission rates for dead wood of temperate European tree species are largely unknown. We applied a novel, closed-chamber measurement technique to 360 dead wood logs of 13 important tree species in three regions of Germany. We found that tree species identity was the most important driver of volume-based CO2 emission rates (71% independent contribution to the model, R2 = 0.62), with angiosperms having on average higher rates than conifers. Wood temperature and fungal species richness had a positive effect on CO2 emission rates, whereas wood density had a negative effect. This is the first time that a positive fungal species richness-wood decomposition relationship has been shown in temperate forests. Certain fungal species were associated with high or low CO2 emission rates. In addition, as indicated by separate models for each tree species, forest management intensity, study region, and the water content as well as the C and N concentrations of dead wood influenced CO2 emission rates.

  10. Preliminary test results of a flight management algorithm for fuel conservative descents in a time based metered traffic environment. [flight tests of an algorithm to minimize fuel consumption of aircraft based on flight time

    Science.gov (United States)

    Knox, C. E.; Cannon, D. G.

    1979-01-01

    A flight management algorithm designed to improve the accuracy of delivering an airplane fuel-efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B737 airplane to make an idle-thrust, clean-configured descent and arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm and the results of the flight tests are discussed.
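    The core geometric step is easy to sketch: given a descent rate implied by the idle-thrust schedule and the ground speed, locate the top-of-descent point so the aircraft reaches the fix altitude exactly at the fix. The numbers below are illustrative assumptions, not the B737 performance model of the paper:

```python
# Kinematic sketch of the 4-D descent-planning idea: choose the top-of-descent
# point so an idle-thrust descent at a fixed speed schedule reaches the
# metering fix at its crossing altitude. All numbers are assumptions.

def top_of_descent(cruise_alt_ft, fix_alt_ft, descent_rate_fpm, gs_knots):
    """Distance before the fix (NM) at which to start the idle descent."""
    minutes = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
    return gs_knots * minutes / 60.0, minutes

dist_nm, mins = top_of_descent(35_000, 10_000, 2_500, 300)
print(f"start descent {dist_nm:.0f} NM before the fix ({mins:.1f} min)")
# -> start descent 50 NM before the fix (10.0 min)
```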

  11. All Eyes on Egypt: Islam and the Medical Use of Dead Bodies Amidst Cairo's Political Unrest.

    Science.gov (United States)

    Hamdy, Sherine

    2016-01-01

    Using dead bodies for medical purposes has long been considered taboo in Egypt. Public health campaigns, physicians' pleas, and the urgings of religious scholars all failed to alter public opinion regarding the donation of dead bodies either for instructional material or for therapeutic treatments. Yet in 2011, amid revolutionary turmoil in Egypt, a campaign was launched for people to donate their eyes upon death; this time, people readily signed up to be donors. Focusing on mass eye trauma that occurred in Egypt amid the political uprisings of 2011, I raise questions about when and why Islam can explain people's attitudes and behaviors, particularly toward death and medicine. The case of mass eye trauma in Egypt and citizens' reformulations of questions once jealously controlled by state-aligned doctors, politicians, and religious scholars unsettles the boundaries between 'religion' and 'secularism' in medical practice.

  12. The Dead Sea, The Lake and Its Setting

    Science.gov (United States)

    Brink, Uri ten

    I cannot think of a subject more befitting the description of interdisciplinary research with societal relevance than the study of the Dead Sea, a terminal lake of the Jordan River in Israel and Jordan. The scientific study of the Dead Sea is intimately connected with politics, religion, archeology, economic development, tourism, and environmental change.The Dead Sea is a relatively closed geologic and limnologic system with drastic physical changes often occurring on human timescales and with a long human history to observe these changes. Research in this unique area covers diverse aspects such as active subsidence and deformation along strike-slip faults; vertical stratification and stability of the water column; physical properties of extremely saline and dense (1234 kg/m3) water; spontaneous precipitation of minerals in an oversaturated environment; origin of the unusual chemical composition of the brine; existence of life in extreme environments; use of lake level fluctuations as a paleoclimatic indicator; and effects on the environment of human intervention versus natural climatic variability. Although the Dead Sea covers a small area on a global scale, it is nevertheless one of the largest natural laboratories for these types of research on Earth. These reasons make the Dead Sea a fascinating topic for the curious mind.

  13. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Full Text Available Minimally invasive procedures aim to resolve disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision, based on the literature and exemplified with a case. The case in question describes reconstruction of the intestinal transit with the use of this incision: a young, HIV-positive male patient in the late postoperative period of an ileotiflectomy, terminal ileostomy and closure of the ascending colon for an acute abdomen due to perforation from ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice resulting from the release of the stoma, with a side-to-side ileo-colonic anastomosis using a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction, and hemostasis, in addition to requiring surgeon dexterity - but without the need for investments in technology; moreover, the learning curve is reported as being lower than that for videolaparoscopy. Laparotomy with minimal incision should be considered a valid and viable option in the treatment of surgical conditions.

  14. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor

  15. Thermodynamic analysis of a Stirling engine including regenerator dead volume

    Energy Technology Data Exchange (ETDEWEB)

    Puech, Pascal; Tishkova, Victoria [Universite de Toulouse, UPS, CNRS, CEMES, 29 rue Jeanne Marvig, F-31055 Toulouse (France)

    2011-02-15

    This paper provides a theoretical investigation of the thermodynamics of a Stirling engine with linear and sinusoidal variations of the volume. The regenerator in a Stirling engine is an internal heat exchanger that allows high efficiency to be reached. We used an isothermal model to analyse the net work and the heat stored in the regenerator during a complete cycle. We show that the engine efficiency with perfect regeneration does not depend on the regenerator dead volume, but that this dead volume strongly amplifies the effect of imperfect regeneration. An analytical expression to estimate the improvement due to the regenerator is proposed, including the combined effects of dead volume and imperfect regeneration. This could be used at the very preliminary stage of the engine design process. (author)
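    The central claim is easy to reproduce in a toy isothermal model, sketched below with assumed gas properties and volumes (a simplification, not the paper's formulation): with perfect regeneration the efficiency stays at the Carnot value whatever the dead volume, while with imperfect regeneration the fixed regenerator loss weighs more heavily as dead volume shrinks the net work.

```python
import numpy as np

# Toy isothermal Stirling model (assumed formulation): dead volume lowers
# the effective volume ratio and hence the net work, while the imperfect-
# regeneration loss (1-eps)*n*cv*(Th-Tc) stays fixed, so dead volume
# amplifies the penalty of imperfect regeneration.

R, cv = 8.314, 12.47           # J/(mol K); monatomic ideal gas (assumed)
n, Th, Tc = 1.0, 600.0, 300.0  # mol, K (assumed)
V_swept = 1.0e-3               # m^3 (assumed)

def efficiency(V_dead, eps):
    ratio = (V_swept + V_dead) / V_dead        # effective volume ratio
    W = n * R * (Th - Tc) * np.log(ratio)      # net work per cycle
    Q_in = n * R * Th * np.log(ratio)          # isothermal expansion heat
    Q_reg = (1 - eps) * n * cv * (Th - Tc)     # un-recovered regenerator heat
    return W / (Q_in + Q_reg)

for Vd in (0.2e-3, 0.5e-3, 1.0e-3):
    # eps=1.0 gives the Carnot value 0.5 regardless of Vd;
    # eps=0.8 drops further as Vd grows.
    print(Vd, efficiency(Vd, eps=1.0), efficiency(Vd, eps=0.8))
```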

  16. Love letters to the dead: resurrecting an epistolary art.

    Science.gov (United States)

    Lander, Dorothy A; Graham-Pole, John R

    This article explores the art of letter-writing, specifically to our beloved dead, as a form of autoethnographic research, pedagogy, and care work. As university teachers and qualitative researchers in palliative and end-of-life care, we review the literature and history of epistolary communications with the deceased, as a prelude to writing our own letters. John writes to his long-dead mother and Dorothy to her recently deceased spouse Patrick, each letter followed by a reflective dialogue between us. Through this dialogue, we highlight the potential application of this art, or handcraft, to formal and informal palliative care, and the implications for practice, pedagogy, policy, and research. We propose that such direct, non-mediated, communications can offer a valuable form of healing for bereaved people. The therapeutic potential of letter writing and the abundance of literary and popular culture exemplars of responses from the dead are also largely unexplored in death education and research.

  17. Principle of minimal work fluctuations.

    Science.gov (United States)

    Xiao, Gaoyang; Gong, Jiangbin

    2015-08-01

    Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality ⟨e^(-βW)⟩ = e^(-βΔF), a change in the fluctuations of e^(-βW) may impact how rapidly the statistical average of e^(-βW) converges towards the theoretical value e^(-βΔF), where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^(-βW). In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^(-βW), where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain, where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process also yields the minimal fluctuations in e^(-βW). Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].

  18. Deadly Choices empowering Indigenous Australians through social networking sites.

    Science.gov (United States)

    McPhail-Bell, Karen; Appo, Nathan; Haymes, Alana; Bond, Chelsea; Brough, Mark; Fredericks, Bronwyn

    2017-04-05

    The potential for health promotion through social networking sites (SNSs) is widely recognized. However, while health promotion prides itself on focusing on the social determinants of health, its partiality for persuading individuals to comply with health behaviours dominates the way health promotion utilizes SNSs. This paper contributes to an understanding of collaborative ways SNSs can work for health promotion agendas of self-determination and empowerment in an Indigenous Australian context. An ethnographic study was undertaken with Deadly Choices, an Indigenous-led health promotion initiative. The study involved participant observation of interactions on Deadly Choices SNSs between Deadly Choices and its online community members. Deadly Choices provides an example of SNSs offering a powerful tool to create a safe, inclusive and positive space for Indigenous people and communities to profile their healthy choices, according to Indigenous notions of health and identity. The study found five principles that underpin Deadly Choices' use of SNSs for health promotion. These are: create a dialogue; build community online and offline; incentivise healthy online engagement; celebrate Indigenous identity and culture; and prioritize partnerships. Deadly Choices SNSs empower Indigenous people and communities to be health promoters themselves, which represents a power shift from health promotion practitioner to Indigenous people and communities and, more broadly, an enactment of Indigenous self-determination on SNSs. Mainstream health promotion can learn from Indigenous health promotion practice regarding the use of SNSs for health promotion agendas. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Microbe Profile: Mycobacterium tuberculosis: Humanity's deadly microbial foe.

    Science.gov (United States)

    Gordon, Stephen V; Parish, Tanya

    2018-04-01

    Mycobacterium tuberculosis is an expert and deadly pathogen, causing the disease tuberculosis (TB) in humans. It has several notable features: the ability to enter non-replicating states for long periods and cause latent infection; metabolic remodelling during chronic infection; a thick, waxy cell wall; slow growth rate in culture; and intrinsic drug resistance and antibiotic tolerance. As a pathogen, M. tuberculosis has a complex relationship with its host, is able to replicate inside macrophages, and expresses diverse immunomodulatory molecules. M. tuberculosis currently causes over 1.8 million deaths a year, making it the world's most deadly human pathogen.

  20. Tourism development challenges on the Dead Sea shore

    Directory of Open Access Journals (Sweden)

    Wendt Jan A.

    2016-12-01

    Full Text Available The Dead Sea, along with Jerusalem, is among the best-known destinations for tourists in Israel. Many factors, such as the water level of the Dead Sea at 430 m b.s.l. (in 2015), an average salinity of 26%, hot springs and the many healing salts found there, make it a unique tourist attraction on a global level. Its attractiveness is heightened by its proximity to other sites of interest, such as the Jewish fortress at Masada, Jericho, Qumran, where the Dead Sea Scrolls were found, as well as Petra, Madaba and Al-Karak on the Jordanian side of the Dead Sea. High salinity and the microclimate create perfect conditions for the development of health resorts and medical tourism. Extracting healing salts from its waters for the needs of the chemical industry is important both for the economy and for medical tourism. However, as a consequence of the agricultural and urban use of the waters of the River Jordan, which flows into the Dead Sea, a persistent decrease in the lake's water level has been observed over the last century. This has created a number of economic and political issues. The problems which still have to be resolved are associated with the Red Sea-Dead Sea Conduit (Canal), the division of Jordan's water resources, conservation of the unique reservoir of the Dead Sea, and the threat of hindering the development of tourism within the region. The presentation of these issues is the main aim of this research paper. The study is based on the analysis of changes in tourism flows, the results of research studies, and prognoses of changes in the water level of the Dead Sea. It presents an assessment of the effects of this phenomenon on the tourist economy. At the current level of tourism flows within the region, the tourist capacity of local beaches will be exceeded in areas where the most popular tourist resorts are located. Increased expenditure on the development of tourism infrastructure in the coastal zone can also be observed

  1. Preparation of 'dead water' for low background liquid scintillation counting

    International Nuclear Information System (INIS)

    Morishima, Hiroshige; Koga, Taeko; Niwa, Takeo; Kawai, Hiroshi

    1987-01-01

    'Dead water', i.e. low-level tritiated water, is indispensable for measuring tritium concentrations in environmental waters using a low background liquid scintillation counter. Water produced by combustion of natural gas, or deep sea water, etc., is usually used for this purpose. A new method of reducing the tritium concentration in natural water has been introduced for the preparation of 'dead water'. The method is to recombine the hydrogen-oxygen mixture produced by water electrolysis over a hopcalite catalyst at 700 deg C. Deep well water was electrolyzed to 2/3 of its volume, and the tritium concentration of the recombined water was reduced to about one third of that of the original. (author)

  2. THE GROWTH AND MIGRATION OF JOVIAN PLANETS IN EVOLVING PROTOSTELLAR DISKS WITH DEAD ZONES

    International Nuclear Information System (INIS)

    Matsumura, Soko; Pudritz, Ralph E.; Thommes, Edward W.

    2009-01-01

    The growth of Jovian mass planets during migration in their protoplanetary disks is one of the most important problems that needs to be solved in light of observations of the small orbital radii of exosolar planets. Studies of the migration of planets in standard gas disk models routinely show that the migration speeds are too high to form Jovian planets, and that such migrating planetary cores generally plunge into their central stars in less than a million years. In previous work, we have shown that a poorly ionized, less viscous region in a protoplanetary disk called a dead zone slows down the migration of fixed-mass planets. In this paper, we extend our numerical calculations to include dead zone evolution along with the disk, as well as planet formation via accretion of rocky and gaseous materials. Using our symplectic integrator-gas dynamics code, we find that dead zones, even in evolving disks wherein planets grow by accretion as they migrate, still play a fundamental role in saving planetary systems. We demonstrate that Jovian planets form within 2.5 Myr for disks that are 10 times more massive than a minimum-mass solar nebula (MMSN) with an opacity reduction and without slowing down migration artificially. Our simulations indicate that protoplanetary disks with an initial mass comparable to the MMSN only produce Neptunian mass planets. We also find that planet migration does not help core accretion as much in the oligarchic planetesimal-accretion scenario as was expected in the runaway planetesimal-accretion scenario. Therefore, we expect that an opacity reduction (or some other mechanisms) is needed to solve the formation timescale problem even for migrating protoplanets, as long as we consider the oligarchic growth. We also point out a possible role of a dead zone in explaining long-lived, strongly accreting gas disks.

  3. Satisfaction with the organ donation process of brain dead donors' families in Korea.

    Science.gov (United States)

    Kim, H S; Yoo, Y S; Cho, O H

    2014-12-01

    The purpose of this study was to investigate the satisfaction of the families of brain dead donors with regard to donation processes as well as their emotions after the donation. A cross-sectional survey study was performed that included 45 families of brain-dead donors in 1 hospital-based organ procurement organization (HOPO) in Korea between February 2007 and April 2011. Donor willingness and desire in life was the most frequent reason organs were donated (34.5%), followed by the advice of family members or friends (31.0%). Satisfaction with the organ donation processes was 4.04 of 6 points. In each category, the satisfaction with the decision of donation was the highest (4.96 points) and the satisfaction with the procedure of donation was the lowest (3.07 points); of each question, the satisfaction of "information and help on funeral arrangements was enough" and "the process of preparing the relevant documents was cumbersome" was the lowest. "Missing" the dead person and "pride" were the most common emotions experienced after organ donation (69.0% and 62.1%, respectively), followed by "grief," "family coherence," and "guilt." Religious practices were observed to be most helpful for psychological stress relief after donation, followed by spending time with family and friends. Moreover, 24.1% responded that they had not yet overcome their suffering. Because donors' own willingness is the most common reason that families choose donation, it is necessary to remind the public of the importance of organ donation through education and public relations using mass communication approaches. Additionally, because the families felt grief and guilt as well as missing their loved ones and pride regarding their dead loved ones after organ donation, continuous and systematic supports are needed to promote their psychological stability. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  5. Minimizing the background radiation in the new neutron time-of-flight facility at CERN FLUKA Monte Carlo simulations for the optimization of the n_TOF second experimental line

    CERN Document Server

    Bergström, Ida; Elfgren, Erik

    2013-06-11

    At the particle physics laboratory CERN in Geneva, Switzerland, the Neutron Time-of-Flight facility has recently started the construction of a second experimental line. The new neutron beam line will unavoidably induce radiation in both the experimental area and in nearby accessible areas. Computer simulations for the minimization of the background were carried out using the FLUKA Monte Carlo simulation package. The background radiation in the new experimental area needs to be kept to a minimum during measurements. This was studied with focus on the contributions from backscattering in the beam dump. The beam dump was originally designed for shielding the outside area using a block of iron covered in concrete. However, the backscattering was never studied in detail. In this thesis, the fluences (i.e. the flux integrated over time) of neutrons and photons were studied in the experimental area while the beam dump design was modified. An optimized design was obtained by stopping the fast neutrons in a high Z mat...

  6. The ZOOM minimization package

    International Nuclear Information System (INIS)

    Fischler, Mark S.; Sachs, D.

    2004-01-01

    A new object-oriented Minimization package is available for distribution in the same manner as CLHEP. This package, designed for use in HEP applications, has all the capabilities of Minuit, but is a rewrite from scratch, adhering to modern C++ design principles. A primary goal of this package is extensibility in several directions, so that its capabilities can be kept fresh with as little maintenance effort as possible. This package is distinguished by the priority that was assigned to C++ design issues, and by the focus on producing an extensible system that will resist becoming obsolete.

  7. Minimizing the Pacman effect

    International Nuclear Information System (INIS)

    Ritson, D.; Chou, W.

    1997-10-01

    The Pacman bunches will experience two deleterious effects: tune shift and orbit displacement. It is known that the tune shift can be compensated by arranging crossing planes 90° relative to each other at successive interaction points (IPs). This paper gives an analytical estimate of the Pacman orbit displacement for a single as well as for two crossings. For the latter, it can be minimized by using equal phase advances from one IP to another. In the LHC, this displacement is in any event small and can be neglected

  8. Minimally Invasive Parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Lee F. Starker

    2011-01-01

    Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

  9. The deeper structure of the southern Dead Sea basin derived from neural network analysis of velocity and attenuation tomography

    Science.gov (United States)

    Braeuer, Benjamin; Haberland, Christian; Bauer, Klaus; Weber, Michael

    2014-05-01

    The Dead Sea basin (DSB) is a pull-apart basin at the Dead Sea transform fault, the boundary between the African and the Arabian plates. Though the DSB has been studied for a long time, the available knowledge - based mainly on surface geology, drilling and seismic reflection surveys - gives only a partial picture of its shallow structure. Therefore, within the framework of the international DESIRE (DEad Sea Integrated REsearch) project, a dense temporary local seismological network was operated in the southern Dead Sea area. Within 18 months of recording, 650 events were detected. In addition to an already published tomography study revealing the distribution of P velocities and Vp/Vs ratios, a 2D P-wave attenuation tomography (parameter Qp) was performed. The neural network technique of self-organizing maps (SOM) is used for the joint interpretation of these three parameters (Vp, Vp/Vs, Qp). The resulting clusters in the petrophysical parameter space are assigned to the main lithological units below the southern part of the Dead Sea basin: (1) the basin sediments, characterized by strong attenuation, high Vp/Vs ratios and low P velocities; (2) the pre-basin sediments, characterized by medium to strong attenuation, low Vp/Vs ratios and medium P velocities; (3) the basement, characterized by low to moderate attenuation, medium Vp/Vs ratios and high P velocities. Thus, the asymmetric southern Dead Sea basin is filled with basin sediments down to a depth of 7 to 12 km. Below the basin sediments, the pre-basin sediments extend to a depth of between 13 and 18 km.
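
    The joint clustering step described above lends itself to a compact illustration. Below is a minimal self-organizing map (SOM) sketch in Python that clusters synthetic (Vp, Vp/Vs, Qp) triples; the grid size, decay schedule, and random data are illustrative assumptions, not values from the DESIRE study.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((650, 3))                   # placeholder (Vp, Vp/Vs, Qp) triples
      X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each parameter

      grid = (8, 8)                              # assumed map size
      W = rng.random((*grid, 3))                 # SOM weight vectors
      n_iter = 5000
      for t in range(n_iter):
          x = X[rng.integers(len(X))]
          # best-matching unit (BMU): node whose weights are closest to the sample
          d = np.linalg.norm(W - x, axis=2)
          bmu = np.unravel_index(d.argmin(), grid)
          # learning rate and neighbourhood radius shrink over time
          lr = 0.5 * np.exp(-t / n_iter)
          sigma = max(grid) / 2 * np.exp(-t / n_iter)
          ii, jj = np.indices(grid)
          h = np.exp(-((ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2) / (2 * sigma ** 2))
          W += lr * h[..., None] * (x - W)       # pull the BMU neighbourhood toward the sample

      # assign each sample to its best-matching unit; units act as clusters
      labels = [np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(), grid) for x in X]

    Each map unit then corresponds to a region of the (Vp, Vp/Vs, Qp) parameter space, and groups of units can be read as lithological classes in the manner described above.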

  10. Comparative structural analysis of human DEAD-box RNA helicases.

    Directory of Open Access Journals (Sweden)

    Patrick Schütz

    2010-09-01

    DEAD-box RNA helicases play various, often critical, roles in all processes where RNAs are involved. Members of this family of proteins are linked to human disease, including cancer and viral infections. DEAD-box proteins contain two conserved domains that both contribute to RNA and ATP binding. Despite recent advances, the molecular details of how these enzymes convert chemical energy into RNA remodeling are unknown. We present crystal structures of the isolated DEAD-domains of human DDX2A/eIF4A1, DDX2B/eIF4A2, DDX5, DDX10/DBP4, DDX18/myc-regulated DEAD-box protein, DDX20, DDX47, DDX52/ROK1, and DDX53/CAGE, and of the helicase domains of DDX25 and DDX41. Together with prior knowledge, this enables a family-wide comparative structural analysis. We propose a general mechanism for opening of the RNA binding site. This analysis also provides insights into the diversity of DExD/H proteins, with implications for understanding the functions of individual family members.

  11. Assessment of biofuel potential of dead neem leaves (Azadirachta ...

    African Journals Online (AJOL)

    Unfortunately, the lack of information on the biomass and energy potentials of these wastes impedes any initiative for their industrial biomethanization. This study was conducted with the aim of evaluating the biofuel potentials of dead neem leaves in Maroua town. The number of neem trees, as well as biomass produced by ...

  12. Remembering Important People On The Day Of The Dead

    Science.gov (United States)

    Curriculum Review, 2005

    2005-01-01

    This article describes a Day of the Dead project, from Frida Kahlo and Diego Rivera, that can help students learn more about historic figures or remember lost loved ones. The purpose is to remember the wonderful things the person did, and to celebrate his or her life. Directions for construction, as well as a suggested list of…

  13. Necrotizing fasciitis: A deadly disease | Cree | East and Central ...

    African Journals Online (AJOL)

    Background: Knowledge of the diagnosis, cause, course and required treatment of this deadly disease among physicians and surgeons around the world is limited. Methods: A study was undertaken at University Teaching Hospital (UTH), Lusaka Zambia to review the incidence, associated pathology, management given, ...

  14. Comparison of burning characteristics of live and dead chaparral fuels

    Science.gov (United States)

    L. Sun; X. Zhou; S. Mahalingam; D.R. Weise

    2006-01-01

    Wildfire spread in living vegetation, such as chaparral in southern California, often causes significant damage to infrastructure and ecosystems. The effects of physical characteristics of fuels and fuel beds on live fuel burning and whether live fuels differ fundamentally from dead woody fuels in their burning characteristics are not well understood. Toward this end,...

  15. Comparative structural analysis of human DEAD-box RNA helicases.

    Science.gov (United States)

    Schütz, Patrick; Karlberg, Tobias; van den Berg, Susanne; Collins, Ruairi; Lehtiö, Lari; Högbom, Martin; Holmberg-Schiavone, Lovisa; Tempel, Wolfram; Park, Hee-Won; Hammarström, Martin; Moche, Martin; Thorsell, Ann-Gerd; Schüler, Herwig

    2010-09-30

    DEAD-box RNA helicases play various, often critical, roles in all processes where RNAs are involved. Members of this family of proteins are linked to human disease, including cancer and viral infections. DEAD-box proteins contain two conserved domains that both contribute to RNA and ATP binding. Despite recent advances, the molecular details of how these enzymes convert chemical energy into RNA remodeling are unknown. We present crystal structures of the isolated DEAD-domains of human DDX2A/eIF4A1, DDX2B/eIF4A2, DDX5, DDX10/DBP4, DDX18/myc-regulated DEAD-box protein, DDX20, DDX47, DDX52/ROK1, and DDX53/CAGE, and of the helicase domains of DDX25 and DDX41. Together with prior knowledge, this enables a family-wide comparative structural analysis. We propose a general mechanism for opening of the RNA binding site. This analysis also provides insights into the diversity of DExD/H proteins, with implications for understanding the functions of individual family members.

  16. Dead Metaphor in Selected Advertisements in Nigerian Dailies ...

    African Journals Online (AJOL)

    Dead metaphors and images are often enlivened and empowered by advertisers to help their communication and to achieve bewitching effects. It is interesting to see words and phrases that may be presumed to have been drained of their linguistic strength being brought back to currency and made to act fast in aiding ...

  17. Theory of precipitation effects on dead cylindrical fuels

    Science.gov (United States)

    Michael A. Fosberg

    1972-01-01

    Numerical and analytical solutions of the Fickian diffusion equation were used to determine the effects of precipitation on dead cylindrical forest fuels. The analytical solution provided a physical framework. The numerical solutions were then used to refine the analytical solution through a similarity argument. The theoretical solutions predicted realistic rates of...

  18. Skeletal muscle mitochondrial respiration in AMPKa2 kinase dead mice

    DEFF Research Database (Denmark)

    Larsen, Steen; Kristensen, Jonas Møller; Stride, Nis

    2012-01-01

    AIM: To study if the phenotypical characteristics (exercise intolerance; reduced spontaneous activity) of the AMPKa2 kinase-dead (KD) mice can be explained by reduced mitochondrial respiratory flux rates (JO(2)) in skeletal muscle. Secondly, the effect of the maturation process on JO(2...

  19. Dead space variability of face masks for valved holding chambers.

    Science.gov (United States)

    Amirav, Israel; Newhouse, Michael T

    2008-03-01

    Valved holding chambers (VHCs) with masks are commonly used to deliver inhaled medications to young children with asthma. Optimal mask properties, such as dead space volume (DSV), have received little attention. The smaller the mask, the more likely it is that a greater proportion of the dose in the VHC will be inhaled with each breath, thus speeding VHC emptying and improving overall aerosol delivery efficiency and dose. Masks may have different DSVs and thus different performance. The aim was to compare both the physical and the functional dead space of different face masks under various applied pressures. The DSV of three commonly used face masks for VHCs was measured by water displacement, both under various pressures (to simulate real-life application; dynamic DSV) and under no pressure (static DSV). There was great variability in both static and dynamic dead space among the face masks, which is probably related to their flexibility. Different masks have different DSV characteristics. This variability should be taken into account when comparing the clinical efficacy of various VHCs.

  20. Syntactic Reconstruction and Reanalysis, Semantic Dead Ends, and Prefrontal Cortex

    DEFF Research Database (Denmark)

    Christensen, Ken Ramshøj

    2010-01-01

    have been to Paris than […] to Oslo), using pseudo-elliptical structures (‘dead ends’) as control (More people have been to Paris than I have). (ii) Reanalysis in the face of structural ambiguity in syntactic ‘garden paths’, where the parser initially assigns an incorrect structure and is forced...

  1. Dead wood in European beech (Fagus sylvatica) forest reserves

    NARCIS (Netherlands)

    Christensen, M.; Hahn, K.; Mountford, E.P.; Ódor, P.; Standovár, T.; Rozenbergar, D.; Diaci, J.; Wijdeven, S.M.J.; Meyer, P.; Winter, S.; Vrska, T.

    2005-01-01

    Data were analysed on the volume of dead wood in 86 beech forest reserves, covering most of the range of European beech forests. The mean volume was 130 m3/ha and the variation among reserves was high, ranging from almost nil to 550 m3/ha. The volume depended significantly on forest type, age since

  2. Stylistic Variation In Three English Translations Of The Dead Sea ...

    African Journals Online (AJOL)

    Since the discovery of the Dead Sea Scrolls in 1947 different English translations were published. In this article the stylistic variation of three of these translations are analysed. It is suggested that the issue of stylistic variation boils down to linguistically inscribed preference in the choice and construction of discourses in the ...

  3. Cowboys and zombies: destabilizing patriarchal discourse in The Walking Dead

    NARCIS (Netherlands)

    Hassler-Forest, D.

    2012-01-01

    The serialized comic book The Walking Dead, written by Robert Kirkman and drawn by Charlie Adlard, has been published by Image Comics from October 2003, and is still being released in monthly instalments as of this writing. It has won numerous awards, including the prestigious Eisner Award for Best

  4. Gastric necrosis four years after fundoplication causing a dead foetus

    DEFF Research Database (Denmark)

    Thinggaard, Ebbe; Skovsen, Anders Peter; Kildsig, Jeppe

    2014-01-01

    A 31-year-old pregnant woman was admitted and treated for diabetic ketoacidosis. As the patient deteriorated and the viability of the foetus was uncertain, a CT scan was done, which showed free fluid and air intraabdominally. Surgery was performed. A dead foetus was delivered and a 2 × 5 cm necrotic...

  5. Burying the dead, creating the past

    NARCIS (Netherlands)

    Runia, E.H.

    2007-01-01

    Professional historians tend to be ambivalent about one of the prime historical phenomena of our time: the desire to commemorate. The amount of attention given to memory (collective or not) and trauma bears witness to the fact that historians really do want to give in to that desire; the fact that

  6. High performance liquid chromatography column efficiency enhancement by zero dead volume recycling and practical approach using park and recycle arrangement.

    Science.gov (United States)

    Minarik, Marek; Franc, Martin; Minarik, Milan

    2018-06-15

    A new instrumental approach to recycling HPLC is described. The concept is based on fast reintroduction of incremental peak sections back onto the separation column. The re-circulation is performed within a closed loop containing only the column and two synchronized switching valves. By keeping the HPLC pump out of the cycle, the method minimizes peak broadening due to dead volume. As a result, the efficiency is dramatically increased, allowing for the most demanding analytical applications. In addition, a parking loop is employed for temporary storage of analytes from the middle section of the separated mixture prior to their recycling.

  7. Time-Delay System Identification Using Genetic Algorithm

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Seested, Glen Thane

    2013-01-01

    Due to the unknown dead-time coefficient, time-delay system identification turns out to be a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, named First-Order-Plus-Dead-Time (FOPDT), by using the Genetic Algorithm (GA) technique.
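
    Since the abstract names both the model and the optimizer, a small sketch may help. The following Python code fits the three FOPDT parameters (gain K, time constant tau, dead time L) of G(s) = K e^{-Ls}/(tau s + 1) to a noisy step response with a bare-bones genetic algorithm; the plant values, GA settings, and bounds are illustrative assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0, 30, 301)

      def fopdt_step(K, tau, L, t):
          # unit-step response of K * exp(-L*s) / (tau*s + 1)
          return np.where(t >= L, K * (1.0 - np.exp(-(t - L) / np.maximum(tau, 1e-6))), 0.0)

      y_meas = fopdt_step(2.0, 5.0, 3.0, t) + rng.normal(0, 0.01, t.size)  # synthetic data

      def fitness(p):
          return -np.mean((fopdt_step(*p, t) - y_meas) ** 2)  # negative mean squared error

      lo, hi = np.array([0.1, 0.1, 0.0]), np.array([5.0, 20.0, 10.0])      # search bounds
      pop = rng.uniform(lo, hi, size=(50, 3))
      for gen in range(100):
          f = np.array([fitness(p) for p in pop])
          # tournament selection: each slot gets the fitter of two random parents
          idx = rng.integers(0, len(pop), size=(len(pop), 2))
          parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
          # blend crossover followed by Gaussian mutation
          alpha = rng.random((len(pop), 1))
          children = alpha * parents + (1 - alpha) * parents[rng.permutation(len(pop))]
          children += rng.normal(0, 0.05, children.shape)
          pop = np.clip(children, lo, hi)

      best = pop[np.argmax([fitness(p) for p in pop])]
      print("estimated K, tau, L:", best)

    The dead time L enters the response only through a shift, which is what makes the least-squares surface non-convex and motivates a global, derivative-free search such as a GA.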

  8. Minimal conformal model

    Energy Technology Data Exchange (ETDEWEB)

    Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2016-07-01

    The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

  9. Minimal Reducts with Grasp

    Directory of Open Access Journals (Sweden)

    Iris Iddaly Mendez Gurrola

    2011-03-01

    The proper detection of a patient's level of dementia is important in order to offer suitable treatment. The diagnosis is based on certain criteria, reflected in clinical examinations. From these examinations emerge the limitations and the degree to which each patient is affected. In order to reduce the total number of limitations to be evaluated, we used rough set theory, which has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case this theory is applied to find the minimal set of limitations, or reduct, that generates the same classification as considering all the limitations; to fulfill this purpose we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.
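
    To make the reduct idea concrete, here is a hedged Python sketch of a GRASP-style search for a minimal reduct: the smallest attribute subset inducing the same indiscernibility partition as the full attribute set (a simplified rough-set criterion). The toy data, scoring rule, and parameters are illustrative assumptions, not the algorithm of the paper.

      import random

      def partition(rows, attrs):
          # group row indices by their values on the chosen attributes
          groups = {}
          for i, row in enumerate(rows):
              groups.setdefault(tuple(row[a] for a in attrs), []).append(i)
          return sorted(groups.values())

      def is_reduct(rows, attrs, all_attrs):
          return partition(rows, attrs) == partition(rows, all_attrs)

      def grasp_reduct(rows, n_attrs, iters=50, rcl_size=3, seed=0):
          rng = random.Random(seed)
          all_attrs = list(range(n_attrs))
          best = all_attrs
          for _ in range(iters):
              # greedy randomized construction: grow until discernibility is preserved
              subset = []
              while not is_reduct(rows, subset, all_attrs):
                  cands = [a for a in all_attrs if a not in subset]
                  cands.sort(key=lambda a: len(partition(rows, subset + [a])), reverse=True)
                  subset.append(rng.choice(cands[:rcl_size]))  # restricted candidate list
              # local search: drop attributes that turn out to be redundant
              for a in list(subset):
                  if is_reduct(rows, [b for b in subset if b != a], all_attrs):
                      subset.remove(a)
              if len(subset) < len(best):
                  best = subset
          return best

      rows = [(0, 1, 0, 1), (1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)]  # toy examination data
      print(grasp_reduct(rows, n_attrs=4))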

  10. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  11. Minimally extended SILH

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

    2017-03-15

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  12. Cell tracking using iron oxide fails to distinguish dead from living transplanted cells in the infarcted heart.

    Science.gov (United States)

    Winter, E M; Hogers, B; van der Graaf, L M; Gittenberger-de Groot, A C; Poelmann, R E; van der Weerd, L

    2010-03-01

    Recently, debate has arisen about the usefulness of cell tracking using iron oxide-labeled cells. Two important issues in determining the usefulness of cell tracking with MRI are generally overlooked: first, the effect of graft rejection in immunocompetent models, and second, the necessity for careful histological confirmation of the fate of the labeled cells in the presence of iron oxide. Therefore, both iron oxide-labeled living and dead epicardium-derived cells (EPDCs) were investigated in ischemic myocardium of immunodeficient non-obese diabetic severe combined immunodeficient (NOD/scid) mice with 9.4T MRI until 6 weeks after surgery, at which time immunohistochemical analysis was performed. In both groups, voids on MRI scans were observed that did not change in number, size, or localization over time. Based on MRI, no distinction could be made between living and dead injected cells. Prussian blue staining confirmed that the hypointense spots on MRI corresponded to iron-loaded cells. However, in the dead-EPDC recipients, all iron-positive cells appeared to be macrophages, while the living-EPDC recipients also contained engrafted iron-loaded EPDCs. Iron labeling is inadequate for determining the fate of transplanted cells in the immunodeficient host, since dead cells produce an MRI signal indistinguishable from that of incorporated living cells.

  13. The active structure of the Dead Sea depression

    Science.gov (United States)

    Shamir, G.

    2003-04-01

    The ~220 km long gravitational and structural Dead Sea Depression (DSD), situated along the southern section of the Dead Sea Transform (DST), is centered on the Dead Sea basin sensu stricto (DSB), which has been described since the 1960s as a pull-apart basin over a presumed left-hand fault step. However, several observations, or their lack thereof, question this scheme, e.g. (i) It is not supported by recent seismological and geomorphic data; (ii) It does not explain the fault pattern and mixed sinistral and dextral offset along the DSB western boundary; (iii) It does not simply explain the presence of intense deformation outside the presumed fault step zone; (iv) It is inconsistent with the orientation of seismically active faults within the Dead Sea and Jericho Valley; (v) It is apparently inconsistent with the symmetrical structure of the DSD; (vi) The length of the DSB exceeds the total offset along the Dead Sea Transform, while the duration of its subsidence is about the age of the DST. Integration of newly acquired and analyzed data (high resolution and petroleum seismic reflection data, earthquake relocation and fault plane solutions) with previously published data (structural mapping, fracture orientation distribution, Bouguer anomaly maps, sinkhole distribution, geomorphic lineaments) now shows that the active upper crustal manifestation of the DSD is a broad shear zone dominated by internal fault systems oriented NNE and NNW. These fault systems are identified by earthquake activity, seismic reflection observations, alignment of recent sinkholes, and distribution of Bouguer anomaly gradients. Motion on the NNE system is normal-dextral, suggesting that counterclockwise rotation may have taken place within the shear zone. The overall sinistral motion between the Arabian and Israel-Sinai plates along the DSD is thus accommodated by distributed shear across the N-S extending DSD. The three-dimensionality of this motion at the DSD may be related to the rate of convergence

  14. Blackfolds, plane waves and minimal surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Armas, Jay [Physique Théorique et Mathématique, Université Libre de Bruxelles and International Solvay Institutes, ULB-Campus Plaine CP231, B-1050 Brussels (Belgium); Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland); Blau, Matthias [Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland)

    2015-07-29

    Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid, suggesting that these two families of black holes are connected. We also show that minimal surfaces embedded in spheres rather than Euclidean space can be used to construct static compact horizons in asymptotically de Sitter space-times.

  15. Blackfolds, plane waves and minimal surfaces

    Science.gov (United States)

    Armas, Jay; Blau, Matthias

    2015-07-01

    Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid, suggesting that these two families of black holes are connected. We also show that minimal surfaces embedded in spheres rather than Euclidean space can be used to construct static compact horizons in asymptotically de Sitter space-times.
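
    For readers unfamiliar with the two surfaces named in the abstract, the standard textbook parametrizations (general mathematical facts, not taken from the paper itself) are, in LaTeX:

      % helicoid and catenoid, the classical non-planar minimal surfaces
      \begin{align}
        \text{helicoid:} \quad \mathbf{x}(u,v) &= \left(v\cos u,\; v\sin u,\; c\,u\right), \\
        \text{catenoid:} \quad \mathbf{x}(u,v) &= \bigl(c\cosh(v/c)\cos u,\; c\cosh(v/c)\sin u,\; v\bigr),
      \end{align}

    both of which satisfy the defining minimal-surface condition of vanishing mean curvature, $H = 0$.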

  16. Observations on the spatio-temporal patterns of radon along the western fault of the Dead Sea Transform, NW Dead Sea

    International Nuclear Information System (INIS)

    Steinitz, G.; Piatibratova, O.; Malik, U.

    2015-01-01

    An extensive radon anomaly is developed along the western boundary fault of the Dead Sea Transform in the NW sector of the Dead Sea, extending 15-20 km north-south. The highest radon values occur in proximity to the fault scarp. Radon is measured in gravel (depth 1.5-3 m) at sites located a) at on-fault positions, 1-30 meters east of the fault scarp, and b) at off-fault positions, 600-800 m to the east. Prominent signals occur in the annual and daily periodicity bands, as well as non-periodic multi-day variations (2-20 days). Modulations occur between the annual variation and the multi-day and daily signals, and between the multi-day and daily signals. Dissimilar variation patterns occur at on-fault versus off-fault sites in the time domain, and in the relative amplitude of the daily periodicities. Variation patterns and their modulations are similar to those encountered in experimental simulations. It is concluded that: 1) above-surface atmospheric influences can be excluded; 2) a remote above-surface influence probably drives the periodic components in the annual and diurnal bands; 3) the diurnal as well as the multi-day signals are modified and inter-modulated by near-field geological (static) and geophysical (dynamic) influences. Systematically different influences are operating at on-fault versus off-fault positions; so far the nature of these near-field influences is unidentified. (authors)

  17. The copper cable is not dead

    Energy Technology Data Exchange (ETDEWEB)

    Knutsen, Jan Grove

    2010-07-01

    Old and proprietary communication solutions have long been used from vessel to subsea installations, and between subsea installations. This has changed in the last few years, since standard industrial products are now good enough to be used in subsea installations. The biggest change is that Ethernet is used in most applications, and all Ethernet communication is based on open standards (IEEE). Since this is now standard equipment, it means cost reductions and easily available support and products. This paper describes how today's products make it possible to use old copper cables with today's technology. With this new technology it is possible to build advanced networks where you can benefit from redundancy functions and monitoring of your network. (Author)

  18. Smoking Out a Deadly Threat: Tobacco Use in the LGBT Community

    Science.gov (United States)

    "Smoking Out a Deadly Threat: Tobacco Use in the LGBT Community" is part of the American Lung Association's Disparities in Lung Health Series ...

  19. Theories of minimalism in architecture: Post scriptum

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2012-01-01

    Owing to the period of intensive development in the last decade of the XX century, the architectural phenomenon called Minimalism in Architecture was remembered as the Style of the Nineties, characterized, morphologically speaking, by simplicity and formal reduction. Simultaneously with its development in practice, several dominant interpretative models were able to establish themselves on a theoretical level. The new millennium and time distance bring new problems; this paper therefore discusses specific theorizations related to Minimalism in Architecture that can bear the designation of post scriptum, because their development starts after the constitutional period of architectural minimalist discourse. In XXI century theories, the problem of defining minimalism remains an important topic, approached by theorists through resolving on the axis: Modernism - Minimal Art - Postmodernism - Minimalism in Architecture. With regard to this, the analyzed texts can be categorized in two groups: (1) texts of an affirmative nature and historical-associative approach, in which minimalism is identified with anything that is simple and reduced, in an idealizing manner, relying mostly on existing hypotheses; (2) critically oriented texts, in which authors reconsider the adequacy of the very term 'minimalism' in the context of architecture and take a metacritical attitude towards previous texts.

  20. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  1. Occurrence of organohalogens at the Dead Sea Basin

    Science.gov (United States)

    Tubbesing, Christoph; Kotte, Karsten; Keppler, Frank; Krause, Torsten; Bahlmann, Enno; Schöler, Heinfried

    2013-04-01

    Most arid and semi-arid regions are characterized by evaporites, which are assured sources for volatile organohalogens (VOX) [1]. These compounds play an important role in tropospheric and stratospheric chemistry. The Dead Sea between Israel and Jordan is the world's most famous and biggest all-season water-covered salt lake. In both countries, chemical plants like the Dead Sea Works and the Arab Potash Company are located at the southern part of the Dead Sea and mine various elements such as bromine and magnesium. By conveying seawater through constructed evaporation pans, multifarious salts are enriched and precipitated. In contrast, the Northern basin, the main part of the Dead Sea, has remained almost untouched by industrial salt production. Its fresh water supply from the Jordan River is constantly decreasing, leading to further increased salinity. During a HALOPROC campaign (Natural Halogenation Processes in the Environment), we collected various samples including air, soils, sediments, halophytic plants, and ground- and seawater from the Northern and Southern basins of the Israeli side of the Dead Sea. These samples were investigated for the occurrence of halocarbons using different analytical techniques. Most samples were analyzed for volatile organohalogens such as haloalkanes using gas chromatography-mass spectrometry (GC-MS). Interestingly, a strong enrichment of trihalomethanes (THMs), especially the chlorinated and brominated ones, and also the iodinated compound dichloroiodomethane, was found in the Southern basin. In addition, volatile organic compounds (VOCs) such as ethene and some other alkenes were analyzed by gas chromatography with flame ionisation detection (GC-FID) to obtain further information about potential precursors of halogenated compounds. Halophytic plants were investigated for their potential to release chloromethane and bromomethane, but also for their stable carbon and hydrogen isotope composition. For this purpose, a plant chamber was

  2. Swarm robotics and minimalism

    Science.gov (United States)

    Sharkey, Amanda J. C.

    2007-09-01

    Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

  3. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

    Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  4. Minimal mirror twin Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Barbieri, Riccardo [Institute of Theoretical Studies, ETH Zurich,CH-8092 Zurich (Switzerland); Scuola Normale Superiore,Piazza dei Cavalieri 7, 56126 Pisa (Italy); Hall, Lawrence J.; Harigaya, Keisuke [Department of Physics, University of California,Berkeley, California 94720 (United States); Theoretical Physics Group, Lawrence Berkeley National Laboratory,Berkeley, California 94720 (United States)

    2016-11-29

    In a Mirror Twin World with a maximally symmetric Higgs sector, the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z₂ parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z₂ breaking, can generate the Z₂ breaking in the Higgs sector necessary for the Twin Higgs mechanism. The theory has constrained and correlated signals in Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments, over a region of parameter space where the fine-tuning for the electroweak scale is 10-50%. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z₂ breaking from the vacuum expectation values of B−L breaking fields are also discussed.

  5. Offshore investments-cui prodis? Schrödinger's cat in offshore financing: Both alive and dead

    Directory of Open Access Journals (Sweden)

    Stepuk Anna

    2014-01-01

    Trends of FDI in offshore tax havens were compared to the efforts and efficiency of regulatory authorities to prevent money laundering. Based on the available data, it was concluded that offshore FDI currently stays both alive and officially dead at the same time, keeping the balance of interests for the main stakeholders: corporations, authorities and financial institutions support further offshore investments. The analysis is based on volumes of trade and financial transactions between offshore centers, developed and developing countries. As a result, the withdrawal of financial resources from developing countries degrades social capital funding and supports the growth of corruption.

  6. Data Assimilation of Dead Fuel Moisture Observations from Remote automated Weather Stations

    Czech Academy of Sciences Publication Activity Database

    Vejmelka, Martin; Kochanski, A.; Mandel, Jan

    2016-01-01

    Vol. 25, No. 5 (2016), pp. 558-568 ISSN 1049-8001 R&D Projects: GA ČR GA13-34856S Grants - others: National Science Foundation (US) AGS-0835579 and DMS-1216481; NASA (US) NNX12AQ85G and NNX13AH9G. Institutional support: RVO:67985807 Keywords: data assimilation * dead fuel moisture * equilibrium * Kalman filter * remote automated weather stations * time lag model * trend surface model Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.748, year: 2016
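
    The "time lag model" listed among the keywords is, in its standard textbook form (not necessarily the paper's exact formulation), a first-order relaxation of dead fuel moisture m toward the equilibrium moisture content E with time constant T:

      \frac{dm}{dt} = \frac{E - m}{T},
      \qquad
      m(t + \Delta t) = E + \bigl(m(t) - E\bigr)\, e^{-\Delta t / T}

    so that, for example, a "10-hour" fuel closes all but 1/e of its gap to equilibrium in 10 hours; assimilation schemes of the kind named in the title then correct the state of such a model with station observations via a Kalman filter.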

  7. Microclimate and habitat heterogeneity as the major drivers of beetle diversity in dead wood

    Science.gov (United States)

    Sebastian Seibold; Claus Bassler; Roland Brandl; Boris Buche; Alexander Szallies; Simon Thorn; Michael D. Ulyshen; Jorg Muller; Christopher Baraloto

    2016-01-01

    1. Resource availability and habitat heterogeneity are principal drivers of biodiversity, but their individual roles often remain unclear since both factors are usually correlated. The biodiversity of species dependent on dead wood could be driven by either resource availability represented by dead-wood amount or habitat heterogeneity characterized by dead-wood...

  8. Proportional Derivative Control with Inverse Dead-Zone for Pendulum Systems

    Directory of Open Access Journals (Sweden)

    José de Jesús Rubio

    2013-01-01

    A proportional derivative controller with inverse dead-zone is proposed for the control of pendulum systems. The proposed method has the characteristic that the inverse dead-zone cancels the pendulum dead-zone. Asymptotic stability of the proposed technique is guaranteed by Lyapunov analysis. Simulations of two pendulum systems show the effectiveness of the proposed technique.
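
    A minimal Python sketch of the cancellation idea, with an illustrative dead-zone width and gains (assumptions for illustration, not the paper's pendulum model):

      def dead_zone(u, d=0.2):
          # actuator nonlinearity: commands inside (-d, d) produce no output
          if u > d:
              return u - d
          if u < -d:
              return u + d
          return 0.0

      def inverse_dead_zone(v, d=0.2):
          # pre-compensator: shifts the command just past the dead band
          if v > 0:
              return v + d
          if v < 0:
              return v - d
          return 0.0

      def pd_with_compensation(error, d_error, kp=8.0, kd=2.0, d=0.2):
          v = kp * error + kd * d_error                  # PD control law
          return dead_zone(inverse_dead_zone(v, d), d)   # net effect: plant sees v

      # the composition cancels exactly: the plant receives the raw PD command
      assert abs(pd_with_compensation(0.1, -0.05) - (8.0 * 0.1 + 2.0 * (-0.05))) < 1e-12

    Exact cancellation assumes the dead-zone width is known; the stability of the compensated loop is what the abstract's Lyapunov analysis addresses.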

  9. Predation by northern squawfish on live and dead juvenile chinook salmon

    International Nuclear Information System (INIS)

    Gadomski, D.M.; Hall-Griswold, J.A.

    1992-01-01

    Northern squawfish Ptychocheilus oregonensis is a major predator of juvenile salmonids Oncorhynchus spp. migrating downstream through the Columbia River. High predation rates occur just below dams. If northern squawfish selectively consume salmonids killed or injured during dam passage, previous estimates of predation mortality may be too high. We conducted laboratory experiments that indicate northern squawfish prefer dead juvenile chinook salmon O. tshawytscha over live individuals. When equal numbers of dead and live chinook salmon were offered to northern squawfish maintained on a natural photoperiod (15 h light: 9 h darkness), significantly more (P < 0.05) dead than live fish were consumed, both in 1,400-L circular tanks and in an 11,300-L raceway (62% and 79% of prey consumed were dead, respectively). When dead and live juvenile chinook salmon were provided in proportions more similar to those below dams (20% dead, 80% live), northern squawfish still selected for dead prey (36% of fish consumed were dead). In additional experiments, northern squawfish were offered a proportion of 20% dead juvenile chinook salmon during 4-h periods of either light or darkness. The predators were much more selective for dead chinook salmon during bright light (88% of fish consumed were dead) than during darkness (31% were dead)

  10. Dead space and slope indices from the expiratory carbon dioxide tension-volume curve

    NARCIS (Netherlands)

    A.H. Kars (Alice); J.M. Bogaard (Jan); Th. Stijnen (Theo); J. de Vries; A.F.M. Verbraak (Anton); C. Hilvering

    1997-01-01

    The slope of phase 3 and three noninvasively determined dead space estimates derived from the expiratory carbon dioxide tension (PCO2) versus volume curve, including the Bohr dead space (VD,Bohr), the Fowler dead space (VD,Fowler) and pre-interface expirate
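
    The Bohr dead space mentioned above follows from a CO2 mass balance over one breath; in its standard textbook form (the general formula, not this paper's derivation),

      \frac{V_{D,\mathrm{Bohr}}}{V_T}
        = \frac{P_{A\mathrm{CO}_2} - P_{\bar{E}\mathrm{CO}_2}}{P_{A\mathrm{CO}_2}},

    where V_T is the tidal volume, P_ACO2 the alveolar and P_ĒCO2 the mixed-expired CO2 tension. As a worked example with typical values, P_ACO2 = 40 mmHg, P_ĒCO2 = 27 mmHg and V_T = 500 mL give a ratio of (40 - 27)/40 ≈ 0.33, i.e. a Bohr dead space of roughly 160 mL.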

  11. Minimally invasive aortic valve replacement

    DEFF Research Database (Denmark)

    Foghsgaard, Signe; Schmidt, Thomas Andersen; Kjaergard, Henrik K

    2009-01-01

    In this descriptive prospective study, we evaluate the outcomes of surgery in 98 patients who were scheduled to undergo minimally invasive aortic valve replacement. These patients were compared with a group of 50 patients who underwent scheduled aortic valve replacement through a full sternotomy... operations were completed as mini-sternotomies, 4 died later of noncardiac causes. The aortic cross-clamp and perfusion times were significantly different across all groups (P replacement... is an excellent operation in selected patients, but its true advantages over conventional aortic valve replacement (other than a smaller scar) await evaluation by means of a randomized clinical trial. The "extended mini-aortic valve replacement" operation, on the other hand, is a risky procedure that should...

  12. Awakening the "Walking Dead": Zombie Pedagogy for Millennials

    Directory of Open Access Journals (Sweden)

    Nancy Dawn Wadsworth

    2017-02-01

    This article lays out the pedagogical benefits of using popular zombie productions, particularly AMC's The Walking Dead, to teach a critical introduction to modern political theory. Based on my undergraduate course: "Political Theory, Climate Change, and the Zombie Apocalypse," the article outlines how The Walking Dead can be used to critique the mythic assumptions built into modern social contract theory; to introduce other political ideologies, including conservatism, anarchism, fascism, and communism; and to consider the political challenges raised by a global problem such as climate change in an increasingly neoliberal environment. Zombie productions are offered as a particularly salient pedagogical tool that can help awaken critical political analysis for the Millennial Generation.

  13. Not to declare dead someone still alive: Case reports

    Directory of Open Access Journals (Sweden)

    Anđelić Slađana

    2015-01-01

    Introduction. Diagnosing death represents an activity that carries a great deal of public responsibility for medical professionals and is continually exposed to the scrutiny of citizens and the media. Although this is a taboo subject in medical circles, unfortunately in medical practice there are situations when the physician issues a death diagnosis form without even examining the person, or for an already buried person. Such actions by a physician are impermissible and open the possibility of professional and criminal punishment. Case Outline. By giving examples from practice, we wish to point out the need for exceptional caution when confirming and diagnosing death, in order to diagnose true death, i.e. rule out apparent death, and consequently avoid the mistake of declaring dead someone still alive. Conclusion. When confirming and declaring death, exceptional caution of the physician is necessary so as not to declare dead someone still alive!

  14. The Dead Mother, the Uncanny, and the Holy Ghost

    Directory of Open Access Journals (Sweden)

    Gal Ventura

    2015-01-01

    Recurrent portrayals of dead mothers frequently appeared in French art from 1800 to 1850. This essay focuses on one of the latest manifestations of this image, namely, the French Realist Jules Breton's (1827-1906) painting The Hunger of 1850, in order to examine the psycho-historical elements associated with the mother's death. Through an analysis of the "Uncanny" as formulated by both Ernst Jentsch and Sigmund Freud, we address the indissoluble link between the structuralization of "homeliness" in the late eighteenth century and the dread it evoked in the early nineteenth century, as two sides of the same coin. We simultaneously consider the inherent conflictuality embodied by the dead mother according to the French psychoanalyst André Green, who dealt with the experience of "nothingness" that characterizes children of mothers-who-refuse-to-die.

  15. Thermodynamics of the dead zone inner edge in protoplanetary disks

    International Nuclear Information System (INIS)

    Faure, Julien

    2014-01-01

    The dead zone, a quiescent region enclosed in the turbulent flow of a protoplanetary disk, seems to be a promising site for planet formation. Indeed, the development of a density maximum at the dead zone inner edge, which has the property of trapping the infalling dust, is a natural outcome of the accretion mismatch at this interface. Moreover, the flow here may be unstable and organize itself into vortical structures that efficiently collect dust grains. The inner edge location is, however, loosely constrained. In particular, it depends on the thermodynamical prescriptions of the disk model that is considered. It has recently been proposed that the inner edge is not static and that the variations of young stars' accretion luminosity are the signature of displacements of this interface. This thesis addresses the question of the impact of the gas thermodynamics on its dynamics around the dead zone inner edge. MHD simulations including the complex interplay between thermodynamical processes and the dynamics confirmed the dynamical behaviour of the inner edge. A first measurement of the interface velocity was obtained. This result was compared to the predictions of a mean field model, revealing the crucial role of energy transport by density waves excited at the interface. These simulations also exhibit an intriguing new phenomenon: vortices forming at the interface follow a cycle of formation-migration-destruction. This vortex cycle may compromise the formation of planetesimals at the inner edge. This thesis claims that thermodynamical processes are at the heart of how the region around the dead zone inner edge in protoplanetary disks works. (author) [fr

  16. Strong tracking adaptive Kalman filters for underwater vehicle dead reckoning

    Institute of Scientific and Technical Information of China (English)

    XIAO Kun; FANG Shao-ji; PANG Yong-jie

    2007-01-01

    To improve underwater vehicle dead reckoning, an improved strong-tracking adaptive Kalman filter is proposed. The filter is augmented with an additional adaptive factor and an estimator of the measurement noise covariance. Since the magnitude of the fading factor is changed adaptively, the tracking ability of the filter is enhanced even in the low-velocity conditions of underwater vehicles. The results of simulation tests prove the presented filter effective.
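
    The core of a strong-tracking filter is a fading factor that inflates the predicted covariance when the innovations grow, so the filter keeps weighting new measurements under model mismatch. Below is a hedged one-dimensional dead-reckoning sketch in Python; the constant-velocity model, noise levels, and fading rule are illustrative assumptions, not those of the paper.

      import numpy as np

      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
      H = np.array([[1.0, 0.0]])              # position-only measurement
      Q = 0.01 * np.eye(2)                    # process noise covariance
      R = np.array([[0.5]])                   # measurement noise covariance

      x, P = np.zeros(2), np.eye(2)
      rng = np.random.default_rng(2)
      truth = np.cumsum(np.full(50, 0.8))     # vehicle moving at 0.8 m/s
      for z in truth + rng.normal(0, 0.7, 50):
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # fading factor: lambda >= 1 inflates P when the innovation is large
          y = z - H @ x
          S = H @ P @ H.T + R
          lam = max(1.0, float(y[0] ** 2 / S[0, 0]) / 3.0)
          P = lam * P
          # standard Kalman update with the inflated covariance
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P

      print("final position/velocity estimate:", x)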

  17. Growth responses of mature loblolly pine to dead wood.manipulations.

    Energy Technology Data Exchange (ETDEWEB)

    Ulyshen, Michael D.; Horn, Scott; Hanula, James L.

    2012-04-01

    Large-scale manipulations of dead wood in mature Pinus taeda L. stands in the southeastern United States included a major one-time input of logs (a fivefold increase in log volume) created by felling trees onsite, annual removals of all dead wood >10 cm in diameter and >60 cm in length, and a reference in which no manipulations took place. We returned over a decade later to determine how these treatments had affected tree growth, using increment cores. There were no significant differences in tree density, basal area or tree diameters among treatments at the time of sampling. Although tree growth was consistently higher in the log-input plots and lower in the removal plots, this was true even during the 5-year period before the experiment began. When growth data from this initial period were included in the model as a covariate, no differences in post-treatment tree growth were detected. It is possible that treatment effects will become apparent after more time has passed, however.

  18. Minimalism in Art, Medical Science and Neurosurgery.

    Science.gov (United States)

    Okten, Ali Ihsan

    2018-01-01

    The word "minimalism" is a word derived from French the word "minimum". Whereas the lexical meaning of minimum is "the least or the smallest quantity necessary for something", its expression in mathematics can be described as "the lowest step a variable number can descend, least, minimal". Minimalism, which advocates an extreme simplicity of the artistic form, is a current in modern art and music whose origins go to 1960s and which features simplicity and objectivity. Although art, science and philosophy are different disciplines, they support each other from time to time, sometimes they intertwine and sometimes they copy each other. A periodic schools or teaching in one of them can take the others into itself, so, they proceed on their ways empowering each other. It is also true for the minimalism in art and the minimal invasive surgical approaches in science. Concepts like doing with less, avoiding unnecessary materials and reducing the number of the elements in order to increase the effect in the expression which are the main elements of the minimalism in art found their equivalents in medicine and neurosurgery. Their equivalents in medicine or neurosurgery have been to protect the physical integrity of the patient with less iatrogenic injury, minimum damage and the same therapeutic effect in the most effective way and to enable the patient to regain his health in the shortest span of time. As an anticipation, we can consider that the minimal approaches started by Richard Wollheim and Barbara Rose in art and Lars Leksell, Gazi Yaşargil and other neurosurgeons in neurosurgery in the 1960s are the present day equivalents of the minimalist approaches perhaps unconsciously started by Kazimir Malevich in art and Victor Darwin L"Espinasse in neurosurgery in the early 1900s. We can also consider that they have developed interacting with each other, not by chance.

  19. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we get significantly better denoising results, especially in oscillatory regions.
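
    As a generic illustration of the kind of energy being minimized per patch (the paper's actual functional uses a fractional gradient, which this simplified form omits), one may write, in LaTeX:

      E(u) \;=\; \int_{\Omega} \sqrt{1 + |\nabla u|^{2}}\, dx
             \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^{2}\, dx,

    where f is the noisy patch, u the denoised estimate, the first term is the minimal-surface regularizer (the area of the graph of u), and the per-patch Lagrange multiplier λ sets the smoothing level exactly as described above.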

  20. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  1. The Active Structure of the Greater Dead Sea Basin

    Science.gov (United States)

    Shamir, G.

    2002-12-01

    The Greater Dead Sea Basin (GDSB) is a 220 km long depression situated along the southern section of the Dead Sea Transform (DST), between two structurally and gravitationally elevated points, Wadi Malih in the north and the Paran fault zone in the south. In its center is the Dead Sea basin sensu stricto (DSB), which has been described since the 1970s as a pull-apart basin at a left step-over along the DST. However, several observations, or their lack thereof, contradict this scheme, e.g. (i) It is not supported by recent seismological and geomorphic data; (ii) It does not explain the fault pattern and mixed sinistral and dextral offset along the DSB western boundary; (iii) It does not simply explain the presence of intense deformation outside the presumed fault step zone; (iv) It is inconsistent with the orientation of seismically active faults within the Dead Sea and Jericho Valley; (v) The length of the DSB exceeds the total offset along the Dead Sea Transform, while the duration of its subsidence is about the age of the DST. In this study, newly acquired and analyzed data (high resolution seismic reflection and earthquake relocation and fault plane solutions) have been integrated with previously published data (structural mapping, fracture orientation distribution, Bouguer anomaly maps, sinkhole distribution, geomorphic lineaments). The results show that the GDSB is dominated by two active fault systems, one trending NNE and showing normal-dextral motion, the other trending NW. These systems are identified by earthquake activity, seismic reflection observations, alignment of recent sinkholes, and distribution of Bouguer anomaly gradients. As a result, the intra-basin structure is of a series of rectangular blocks. The dextral slip component along NNE trending faults, the mixed sense of lateral offset along the western boundary of the DSB and temporal change in fracture orientation in the Jericho Valley suggest that the intra-basin blocks have rotated counterclockwise since the

  2. Hadamard and minimal renormalizations

    International Nuclear Information System (INIS)

    Castagnino, M.A.; Gunzig, E.; Nardone, P.; Paz, J.P.

    1986-01-01

    A common language is introduced to study two well-known but different methods for the renormalization of the energy-momentum tensor of a scalar neutral quantum field in curved space-time. Different features of the two renormalizations are established and compared.

  3. Comparative growth and development of spiders reared on live and dead prey.

    Science.gov (United States)

    Peng, Yu; Zhang, Fan; Gui, Shaolan; Qiao, Huping; Hose, Grant C

    2013-01-01

    Scavenging (feeding on dead prey) has been demonstrated across a number of spider families, yet the implications of feeding on dead prey for the growth and development of individuals and populations are unknown. In this study we compare the growth, development, and predatory activity of two species of spiders fed on live and dead prey. Pardosa astrigera (Lycosidae) and Hylyphantes graminicola (Linyphiidae) were fed live or dead fruit flies, Drosophila melanogaster. The survival of P. astrigera and H. graminicola was not affected by prey type. The late instars of P. astrigera fed dead prey lasted longer, and the mature spiders had a lower protein content than those fed live prey, whereas there were no differences in the rate of H. graminicola development, and the mass of mature H. graminicola fed dead prey was greater than that of those fed live prey. Predation rates by P. astrigera did not differ between the two prey types, but H. graminicola had a higher rate of predation on dead than on live prey, presumably because the dead flies were easier to catch and handle. Overall, the growth, development and reproduction of H. graminicola reared on dead flies were better than those of spiders reared on live flies, yet for the larger P. astrigera, dead prey may suit smaller instars while mature spiders may be best maintained on live prey. We have clearly demonstrated that dead prey can be suitable for rearing spiders, although the success of spiders fed such prey appears to be size- and species-specific.

  4. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S. Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses are expected annually. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of the system as a whole. Among the factors to be considered in selecting an analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and the volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.

  5. Touch-down reverse transcriptase-PCR detection of IgV(H) rearrangement and Sybr-Green-based real-time RT-PCR quantitation of minimal residual disease in patients with chronic lymphocytic leukemia

    Czech Academy of Sciences Publication Activity Database

    Peková, Soňa; Marková, J.; Pajer, Petr; Dvořák, Michal; Cetkovský, P.; Schwarz, J.

    2005-01-01

    Vol. 9, No. 1 (2005), pp. 23-34. ISSN 1084-8592. Institutional research plan: CEZ:AV0Z50520514. Keywords: minimal residual disease * chronic lymphocytic leukaemia * IgV(H) rearrangement. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 2.562, year: 2003

  6. The TEL-AML1 real-time quantitative polymerase chain reaction (PCR) might replace the antigen receptor-based genomic PCR in clinical minimal residual disease studies in children with acute lymphoblastic leukaemia

    NARCIS (Netherlands)

    de Haas, V.; Breunis, W. B.; dee, R.; Verhagen, O. J. H. M.; Kroes, W.; van Wering, E. R.; van Dongen, J. J. M.; van den Berg, H.; van der Schoot, C. E.

    2002-01-01

    Prospective studies in children with B-precursor acute lymphoblastic leukaemia (ALL) have shown that polymerase chain reaction (PCR)-based detection of minimal residual disease (MRD) using immunoglobulin (Ig) and T-cell receptor (TCR) gene rearrangements as targets can be used to identify patients

  7. Touch-down reverse transcriptase-PCR detection of IgV(H) rearrangement and Sybr-Green-based real-time RT-PCR quantitation of minimal residual disease in patients with chronic lymphocytic leukemia.

    Science.gov (United States)

    Peková, Sona; Marková, Jana; Pajer, Petr; Dvorák, Michal; Cetkovský, Petr; Schwarz, Jirí

    2005-01-01

    Patients with chronic lymphocytic leukemia (CLL) can relapse even after aggressive therapy and autografts. It is commonly assumed that to prevent relapse the level of minimal residual disease (MRD) should be as low as possible. To evaluate MRD, highly sensitive quantitative assays are needed. The aim of the study was to develop a robust and sensitive method for detection of the clonal immunoglobulin heavy-chain variable (IgV(H)) rearrangement in CLL and to introduce a highly sensitive and specific methodology for MRD monitoring in patients with CLL who undergo intensive treatment. As a prerequisite for MRD detection, touch-down reverse transcriptase (RT)-PCR using degenerate primers was used for the diagnostic identification of IgV(H) gene rearrangement(s). For quantitative MRD detection in 18 patients, we employed a real-time RT-PCR assay (RQ-PCR) making use of patient-specific primers and the cost-saving Sybr-Green reporter dye (SG). For precise calibration of the RQ-PCR, patient-specific IgV(H) sequences were cloned. Touch-down RT-PCR with degenerate primers allowed the successful detection of IgV(H) clonal rearrangement(s) in 252 of 257 (98.1%) diagnostic samples. Biallelic rearrangements were found in 27 of 252 (10.7%) cases. Degenerate primers used for the identification of clonal expansion at diagnosis were not sensitive enough for MRD detection. In contrast, our RQ-PCR assay using patient-specific primers and SG reached a sensitivity of 10^-6. We demonstrated MRD in each patient tested, including four of four patients in complete remission following autologous hematopoietic stem cell transplantation (HSCT) and three of three following allogeneic 'mini'-HSCT. Increments in MRD might herald relapse; aggressive chemotherapy could induce molecular remission. Our touch-down RT-PCR has a higher efficiency in detecting clonal IgV(H) rearrangements, including biallelic ones. MRD quantitation of IgV(H) expression using SG-based RQ-PCR represents a highly specific
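
    The calibration step described above follows the usual absolute-quantitation logic of real-time PCR: serial dilutions of the cloned patient-specific sequence yield a standard curve relating threshold cycle (Ct) to copy number. The sketch below illustrates that logic only; the function names and the use of a single shared curve for target and reference gene are simplifying assumptions, not the authors' protocol.

        import numpy as np

        def fit_standard_curve(cts, log10_copies):
            # Fit Ct = slope * log10(copies) + intercept from serial dilutions
            # of the cloned patient-specific IgV(H) plasmid (hypothetical data).
            slope, intercept = np.polyfit(log10_copies, cts, 1)
            efficiency = 10.0 ** (-1.0 / slope) - 1.0   # ~1.0 means fully efficient
            return slope, intercept, efficiency

        def mrd_level(ct_target, ct_reference, slope, intercept):
            # Interpolate copy numbers from Ct values and report MRD as the
            # ratio of clonal IgV(H) copies to reference-gene copies.
            copies_target = 10.0 ** ((ct_target - intercept) / slope)
            copies_reference = 10.0 ** ((ct_reference - intercept) / slope)
            return copies_target / copies_reference

    With a well-behaved assay the fitted slope is close to -3.32 (a ten-fold dilution costs about 3.32 cycles), which is what makes sensitivities on the order of 10^-6 plausible when enough template is analyzed.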

  8. Flow simulation in piping system dead legs using second moment, closure and k-epsilon model

    International Nuclear Information System (INIS)

    Deutsch, E.; Mechitoua, N.; Mattei, J.D.

    1996-01-01

    This paper deals with an industrial application of a second-moment-closure turbulence model to the numerical simulation of 3D turbulent flows in piping system dead legs. Calculations performed with the 3D ESTET code are presented which contrast the performance of the k-epsilon eddy-viscosity model and second-moment-closure turbulence models. Coarse (100 000), medium (400 000) and fine (1 500 000) meshes were used. The second-moment closure performs significantly better than the eddy-viscosity model and predicts the vortex penetration into the dead legs with good agreement, provided sufficiently refined meshes are used. The results point out the necessity of being able to perform calculations on fine meshes before introducing refined physical models, such as a second-moment-closure turbulence model, into a numerical code. This study illustrates the ability of second-moment-closure turbulence models to simulate 3D turbulent industrial flows. Computation with the Reynolds stress model does not require special care; the calculation is carried out as simply as the k-epsilon one. The CPU time needed is less than twice that of the k-epsilon model. (authors)

  9. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  10. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to subloci of the Hitchin component: the $q_n$ and $q_{n-1}$ cases. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  11. Cyclone Simulation via Action Minimization

    Science.gov (United States)

    Plotkin, D. A.; Weare, J.; Abbot, D. S.

    2016-12-01

    A postulated impact of climate change is an increase in the intensity of tropical cyclones (TCs). This hypothesized effect results from the fact that TCs are powered by subsaturated boundary layer air picking up water vapor from the surface ocean as it flows inwards towards the eye. This water vapor serves as the energy input for TCs, which can be idealized as heat engines. The inflowing air has a temperature nearly identical to that of the surface ocean; therefore, warming of the surface leads to a warmer atmospheric boundary layer. By the Clausius-Clapeyron relationship, warmer boundary layer air can hold more water vapor and thus results in more energetic storms. Changes in TC intensity are difficult to predict due to the presence of fine structures (e.g. convective structures and rainbands) with length scales of less than 1 km, while general circulation models (GCMs) generally have horizontal resolutions of tens of kilometers. The models are therefore unable to capture these features, which are critical to accurately simulating cyclone structure and intensity. Further, strong TCs are rare events, meaning that long multi-decadal simulations are necessary to generate meaningful statistics about intense TC activity. This adds to the computational expense, making it yet more difficult to generate accurate statistics about long-term changes in TC intensity due to global warming via direct simulation. We take an alternative approach, applying action minimization techniques developed in molecular dynamics to the WRF weather/climate model. We construct artificial model trajectories that lead from quiescent (TC-free) states to TC states, then minimize the deviation of these trajectories from the true model dynamics. We can thus create Monte Carlo model ensembles that are biased towards cyclogenesis, which reduces computational expense by limiting time spent in non-TC states. This allows for: 1) selective interrogation of model states with TCs; 2) finding the likeliest paths for
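
    In skeleton form, the trajectory-relaxation idea reads as below. The squared-residual 'action', the L-BFGS solver, and the hypothetical one-step propagator `step_model` are stand-ins chosen for illustration; the actual method operates on WRF model states, not on small generic vectors.

        import numpy as np
        from scipy.optimize import minimize

        def action(path_flat, x0, xT, step_model, n_steps, dim):
            # Discrete 'action': summed squared deviation of the trajectory from
            # the dynamics x_{t+1} = step_model(x_t), with both endpoints fixed
            # (x0 quiescent, xT containing a cyclone).
            interior = path_flat.reshape(n_steps - 1, dim)
            path = np.vstack([x0, interior, xT])
            residuals = path[1:] - np.array([step_model(x) for x in path[:-1]])
            return np.sum(residuals ** 2)

        def likeliest_path(x0, xT, step_model, n_steps=20):
            # Relax a straight-line initial guess toward the least-action path.
            dim = x0.size
            guess = np.linspace(x0, xT, n_steps + 1)[1:-1].ravel()
            result = minimize(action, guess,
                              args=(x0, xT, step_model, n_steps, dim),
                              method="L-BFGS-B")
            return np.vstack([x0, result.x.reshape(n_steps - 1, dim), xT])

    Paths with small action are ones the model itself could almost have produced, so sampling around them biases an ensemble toward cyclogenesis without waiting for rare spontaneous transitions.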

  12. Minimalism through intraoperative functional mapping.

    Science.gov (United States)

    Berger, M S

    1996-01-01

    Intraoperative stimulation mapping may be used to avoid unnecessary risk to functional regions subserving language and sensori-motor pathways. Based on the data presented here, language localization is variable across the population, with certainty existing only for the inferior frontal region responsible for motor speech. Anatomical landmarks such as the anterior temporal tip for temporal lobe language sites and the posterior aspect of the lateral sphenoid wing for the frontal lobe language zones are unreliable for avoiding postoperative aphasias. Thus, individual mapping to identify essential language sites has the greatest likelihood of avoiding permanent deficits in naming, reading, and motor speech. In a similar approach, motor and sensory pathways from the cortex and underlying white matter may be reliably stimulated and mapped in both awake and asleep patients. Although these techniques require additional operative time and nominally priced equipment, the result is often gratifying, as postoperative morbidity has been greatly reduced in the process of incorporating these surgical strategies. The patient's quality of life is improved in terms of seizure control, with or without antiepileptic drugs. This avoids having to perform a second, costly operative procedure, which is routinely done when extraoperative stimulation and recording are performed via subdural grids. In addition, an aggressive tumor resection at the initial operation lengthens the time to tumor recurrence and often obviates the need for a subsequent reoperation. Thus, intraoperative functional mapping may best be described as a surgical technique that results in "minimalism in the long term".

  13. Guidelines for mixed waste minimization

    International Nuclear Information System (INIS)

    Owens, C.

    1992-02-01

    Currently, no commercial mixed waste disposal is available in the United States. Storage and treatment options for commercial mixed waste are limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of these management limitations. This document provides a guide to mixed waste minimization.

  14. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Full Text Available Abstract Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed; these can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between the approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, which depend on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have in fact been focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
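
    The two benchmarks can be made concrete with the standard formulas behind them: SEM = SD * sqrt(1 - reliability) and MDC95 = 1.96 * sqrt(2) * SEM for the distribution-based side, and the mean change of patients who rate themselves minimally improved on an external anchor for the anchor-based side. The function names and the anchor wording in the sketch below are illustrative assumptions.

        import numpy as np

        def minimally_detectable_change(baseline_scores, reliability):
            # Distribution-based benchmark: the smallest change that exceeds
            # measurement error with 95% confidence (MDC95).
            sem = np.std(baseline_scores, ddof=1) * np.sqrt(1.0 - reliability)
            return 1.96 * np.sqrt(2.0) * sem

        def minimally_important_change(score_changes, anchor_ratings):
            # Anchor-based benchmark: mean change among patients who judge
            # themselves 'minimally improved' on an external anchor question.
            changes = np.asarray(score_changes, dtype=float)
            ratings = np.asarray(anchor_ratings)
            return changes[ratings == "minimally improved"].mean()

    An instrument is fit for purpose when the first number is smaller than the second: changes that patients consider important must be distinguishable from measurement noise.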

  15. The Quest for Minimal Quotients for Probabilistic Automata

    DEFF Research Database (Denmark)

    Eisentraut, Christian; Hermanns, Holger; Schuster, Johann

    2013-01-01

    One of the prevailing ideas in applied concurrency theory and verification is the concept of automata minimization with respect to strong or weak bisimilarity. The minimal automata can be seen as canonical representations of the behaviour modulo the bisimilarity considered. Together with congruence results wrt. process algebraic operators, this can be exploited to alleviate the notorious state space explosion problem. In this paper, we aim at identifying minimal automata and canonical representations for concurrent probabilistic models. We present minimality and canonicity results for probabilistic automata wrt. strong and weak bisimilarity, together with polynomial time minimization algorithms.
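
    For strong bisimilarity, the quotient construction can be sketched as naive partition refinement: blocks are split until every state in a block induces the same action-indexed probability distribution over blocks. The sketch below is a generic textbook scheme under an assumed transition encoding (`delta[s]` as a list of (action, distribution) pairs), not the paper's polynomial-time algorithm, and exact float comparison of probabilities is a simplification.

        from collections import defaultdict

        def quotient_strong_bisim(states, delta):
            # delta[s] = list of (action, dist) pairs; dist maps successor
            # states to probabilities. Two states remain in one block only if
            # each move of one is matched by the other with equal probability
            # mass per block of the current partition.
            partition = [set(states)]

            def block_of(s):
                return next(i for i, b in enumerate(partition) if s in b)

            def signature(s):
                sig = set()
                for action, dist in delta[s]:
                    lifted = defaultdict(float)
                    for t, p in dist.items():
                        lifted[block_of(t)] += p   # lift distribution to blocks
                    sig.add((action, frozenset(lifted.items())))
                return frozenset(sig)

            changed = True
            while changed:
                changed = False
                refined = []
                for block in partition:
                    groups = defaultdict(set)
                    for s in block:
                        groups[signature(s)].add(s)
                    refined.extend(groups.values())
                    if len(groups) > 1:
                        changed = True
                partition = refined
            return partition   # blocks are the states of the minimal quotient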

  16. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets using an auxiliary dynamical structure. The presented algorithm enables one to find the minimal cut sets according to defined requirements: by the order of the minimal cut sets, by their number, or by both. This algorithm is three to six times faster than the primary version of the CARA algorithm.
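
    The CARA algorithm itself is not spelled out in the abstract, but the baseline task it accelerates can be illustrated with the classical top-down (MOCUS-style) expansion, shown below under an assumed gate encoding; the optional cutoff by order (cut-set size) mirrors the "defined requirements" mentioned above.

        def minimal_cut_sets(gates, top, max_order=None):
            # Top-down (MOCUS-style) expansion of a fault tree, followed by
            # removal of non-minimal supersets. `gates` maps a gate name to
            # ('AND' | 'OR', [children]); names without an entry are basic events.
            cut_sets = [frozenset([top])]
            expanded = True
            while expanded:
                expanded = False
                next_sets = []
                for cs in cut_sets:
                    gate = next((g for g in cs if g in gates), None)
                    if gate is None:
                        next_sets.append(cs)        # only basic events remain
                        continue
                    expanded = True
                    kind, children = gates[gate]
                    rest = cs - {gate}
                    if kind == 'AND':               # AND: all children join one set
                        next_sets.append(rest | set(children))
                    else:                           # OR: one candidate set per child
                        next_sets.extend(rest | {c} for c in children)
                cut_sets = next_sets
            unique = set(cut_sets)
            if max_order is not None:               # keep only low-order cut sets
                unique = {c for c in unique if len(c) <= max_order}
            return sorted((c for c in unique
                           if not any(o < c for o in unique)), key=len)

    For example, with gates = {'TOP': ('OR', ['G1', 'E3']), 'G1': ('AND', ['E1', 'E2'])}, the call minimal_cut_sets(gates, 'TOP') returns the cut sets {E3} and {E1, E2}.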

  17. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  18. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$ a {\it geometrization} $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1-$dimensional manifold with an associated metric $g$. In this paper we are concerned with {\it minimal isometric immersions} of geometrized graphs $(G, g)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em{minimal webs}}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...
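
    To fix ideas, the intrinsic combinatorial Laplacian of a geometrized graph can be assembled as below, weighting each edge by the reciprocal of its metric length. This is the standard weighted-graph construction, offered only as an illustration; the paper's 'geometric' extension on minimal webs is not reproduced here.

        import numpy as np

        def combinatorial_laplacian(n_vertices, edges):
            # edges: iterable of (u, v, length) with vertices indexed 0..n-1.
            # Each edge, viewed as a 1-dimensional manifold of the given length,
            # contributes conductance w = 1/length, so that
            # (L f)(u) = sum over neighbours v of w_uv * (f(u) - f(v)).
            L = np.zeros((n_vertices, n_vertices))
            for u, v, length in edges:
                w = 1.0 / length
                L[u, u] += w
                L[v, v] += w
                L[u, v] -= w
                L[v, u] -= w
            return L

    Functions f with (L f)(u) = 0 at all interior vertices are the discrete harmonic functions, for which maximum-principle statements of the kind mentioned in the abstract hold.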

  19. Minimal string theories and integrable hierarchies

    Science.gov (United States)

    Iyer, Ramakrishnan

    -perturbative definition for the first time. Notably, we discover that the Painleve IV equation plays a key role in organizing the string theory physics, joining its siblings, Painleve I and II, whose roles have previously been identified in this minimal string context. We then present evidence that the conjectured type II theories have smooth non-perturbative solutions, connecting two perturbative asymptotic regimes, in a 't Hooft limit. Our technique also demonstrates evidence for new minimal string theories that are not apparent in a perturbative analysis.