WorldWideScience

Sample records for maximum lead time

  1. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
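
    The invariance claimed for the entropic risk measure rests on its additivity over independent losses. Below is a minimal Monte Carlo sketch (not from the paper) checking that, for i.i.d. Gaussian one-period losses, the entropic risk of the aggregated n-period loss equals n times the one-period risk; the loss distribution and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.5                  # risk aversion parameter (illustrative)
mu, sigma, n = 0.1, 0.3, 4   # one-period loss moments, number of periods

def entropic_risk(samples, theta):
    # rho_theta(X) = (1/theta) * log E[exp(theta * X)]
    return np.log(np.mean(np.exp(theta * samples))) / theta

losses = rng.normal(mu, sigma, size=(1_000_000, n))  # i.i.d. one-period losses
print(f"n * one-period risk: {n * entropic_risk(losses[:, 0], theta):.4f}")
print(f"aggregated risk    : {entropic_risk(losses.sum(axis=1), theta):.4f}")
# The two values agree up to Monte Carlo error, illustrating the additivity
# that underlies time unit invariance for the entropic risk measure.
```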

  2. Lead Time Study

    Science.gov (United States)

    1982-05-01

    1979, the number of titanium fabrications dropped from 16 to 4, primarily because of the sponge shortage and EPA and OSHA requirements. Non-military...East - Taiwan, Korea, Singapore, Malaysia and Hong Kong. In addition, a significant amount of ceramic parts, lead frames and high technology

  3. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. Experiments have shown that this method provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
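
    A minimal sketch of the time-arrival-difference method: generalized cross-correlation computed in the frequency domain, with a coherence-style (SCOT) weighting standing in for the paper's maximum likelihood window, which is not reproduced here. The sample rate, wave speed, sensor spacing, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 10_000, 2**14            # sample rate (Hz), number of samples
c, L = 1200.0, 300.0             # elastic wave speed (m/s), sensor spacing (m)
true_delay = 0.05                # delay of sensor 1 relative to sensor 2 (s)

leak = rng.normal(size=n)                          # broadband leak noise
x1 = np.roll(leak, int(true_delay * fs)) + 0.5 * rng.normal(size=n)
x2 = leak + 0.5 * rng.normal(size=n)

# Generalized cross-correlation with a frequency-domain weighting
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
W = 1.0 / (np.abs(X1) * np.abs(X2) + 1e-12)        # SCOT weighting (stand-in)
cc = np.fft.irfft(W * X1 * np.conj(X2))
lag = int(np.argmax(cc))
lag = lag if lag < n // 2 else lag - n             # unwrap negative lags
tau = lag / fs

# With the leak x metres from sensor 1: tau = (2x - L)/c, so x = (L + c*tau)/2
print(f"estimated delay {tau:.4f} s -> leak at {(L + c * tau) / 2:.1f} m from sensor 1")
```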

  4. Pressurizer/Auxiliary Spray Piping Stress Analysis For Determination Of Lead Shielding Maximum Allowable Load

    International Nuclear Information System (INIS)

    Setjo, Renaningsih

    2000-01-01

A piping stress analysis for the PZR/auxiliary spray lines of Nuclear Power Plant AV Unit 1 (PWR type) has been carried out. The purpose of this analysis is to establish the maximum allowable load permitted when placing lead shielding on the piping system of the class 1 pipe, Pressurizer/Auxiliary Spray Lines (PZR/Aux.), Reactor Coolant Loops 1 and 4, for NPP AV Unit 1 in modes 5 and 6 during outage. The analysis is intended to reduce the radiation dose to the operator during the ISI (in-service inspection) period. The results show that the maximum allowable load for the 4 inch PZR/auxiliary spray lines is 123 lbs/ft

  5. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. The modification consists of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water, with a total chain height of 11.5 cm. Replacement of the existing cadmium absorber with a B-10 absorber was needed as well; the rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. A 139% increase in the maximum reactor operation time was therefore observed for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor.

  6. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
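
    For reference, the stationary Child-Langmuir expression against which such claims are compared is easy to evaluate; the sketch below (with invented voltage and gap values) just computes J_CL = (4ε₀/9)·√(2e/m)·V^{3/2}/d² for electrons in a planar 1D diode.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
e = 1.602176634e-19       # elementary charge (C)
m_e = 9.1093837015e-31    # electron mass (kg)

def child_langmuir(V, d):
    """Stationary Child-Langmuir limit on current density (A/m^2)
    for a planar 1D diode with voltage V (V) and gap d (m)."""
    return (4 * eps0 / 9) * math.sqrt(2 * e / m_e) * V ** 1.5 / d ** 2

# The paper's distinction: averaging J_CL(V(t)) adiabatically over a
# time-varying voltage differs from evaluating J_CL at the maximum voltage.
V_max, d = 1.0e4, 1.0e-2  # illustrative values
print(f"J_CL at V_max: {child_langmuir(V_max, d):.1f} A/m^2")
```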

  7. Linear Time Local Approximation Algorithm for Maximum Stable Marriage

    Directory of Open Access Journals (Sweden)

    Zoltán Király

    2013-08-01

We consider a two-sided market under incomplete preference lists with ties, where the goal is to find a maximum size stable matching. The problem is APX-hard, and a 3/2-approximation was given by McDermid [1]. This algorithm has a non-linear running time and, more importantly, needs global knowledge of all preference lists. We present a very natural, economically reasonable, local, linear time algorithm with the same ratio, using some ideas of Paluch [2]. In this algorithm every person makes decisions using only their own list and some information requested from members of that list (as in the case of the famous algorithm of Gale and Shapley). Some consequences for the Hospitals/Residents problem are also discussed.
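
    For context, the Gale-Shapley proposal algorithm referenced in the abstract is sketched below for complete, strict preference lists; this is not Király's 3/2-approximation for ties, and the toy preference lists are invented for illustration.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley stable matching via rounds of proposals."""
    rank = {w: {m: r for r, m in enumerate(p)} for w, p in women_prefs.items()}
    nxt = {m: 0 for m in men_prefs}   # index of the next woman to propose to
    engaged = {}                      # woman -> man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers the new proposer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # rejected, proposes again later
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))   # {'a': 'x', 'b': 'y'} is stable here
```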

  8. Load Dependent Lead Times and Sustainability

    DEFF Research Database (Denmark)

    Pahl, Julia; Voss, Stefan

    2016-01-01

    to prevent decreased quality or waste of production parts and products. This gains importance because waiting times imply longer lead times charging the production system with work in process inventories. Longer lead times can lead to quality losses due to depreciation, so that parts need to be reworked...... if possible or discarded. But return flows of products for rework or remanufacturing actions significantly complicate the production planning process. We analyze sustainability options with respect to lead time management by formulating a comprehensive mathematical model. We consider a deterministic, mixed...

  9. Production Planning with Load Dependent Lead Times

    DEFF Research Database (Denmark)

    Pahl, Julia

    2005-01-01

    Lead times impact the performance of the supply chain significantly. Although there is a large literature concerning queuing models for the analysis of the relationship between capacity utilization and lead times, and there is a substantial literature concerning control and order release policies...... that take lead times into consideration, there have been only few papers describing models at the aggregate planning level that recognize the relationship between the planned utilization of capacity and lead times. In this paper we provide an in-depth discussion of the state-of-the art in this literature......, with particular attention to those models that are appropriate at the aggregate planning level....

  10. [Estimation of maximum acceptable concentration of lead and cadmium in plants and their medicinal preparations].

    Science.gov (United States)

    Zitkevicius, Virgilijus; Savickiene, Nijole; Abdrachmanovas, Olegas; Ryselis, Stanislovas; Masteiková, Rūta; Chalupova, Zuzana; Dagilyte, Audrone; Baranauskas, Algirdas

    2003-01-01

Heavy metals (lead, cadmium) are potential impurities whose quantities are restricted by maximum acceptable limits. Various drug preparations (infusions, decoctions, tinctures, extracts, etc.) are produced from medicinal plants. The objective of this research was to study heavy metal (lead, cadmium) impurities in medicinal plants and some drug preparations. We investigated liquid extracts of fruits of Crataegus monogyna Jacq. and herbs of Echinacea purpurea Moench., and tinctures of herbs of Leonurus cardiaca L. The raw materials were imported from Poland. Investigations were carried out in cooperation with the Laboratory of Anthropogenic Factors of the Institute for Biomedical Research. Amounts of lead and cadmium were determined after "dry" mineralisation using a Perkin-Elmer Zeeman/3030 electrothermal atomic absorption spectrophotometer (ETG AAS/Zeeman). Assessment of the absorption capacity of cellular fibres showed that lead is retained most efficiently: only about 10.73% of lead passes into tinctures and extracts, whereas cadmium passes more readily (49.63%). Herbs of Leonurus cardiaca L. are the most effective at holding back lead and cadmium; about 14.5% of the lead and cadmium passes into the tincture of herbs of Leonurus cardiaca L. From these investigations we estimated extraction factors for heavy metals (lead, cadmium) in the liquid extracts of Crataegus monogyna Jacq. and Echinacea purpurea Moench. and the tincture of Leonurus cardiaca L. Taking into account the lead and cadmium extraction factors, the maximum acceptable daily intake, and the daily consumption of these drugs, the amounts of heavy metals (lead, cadmium) in fruits of Crataegus monogyna Jacq. and herbs of Leonurus cardiaca L. and Echinacea purpurea Moench. do not exceed the allowable norms.

  11. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
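
    Burg's method is the standard route to the maximum-entropy/AR spectrum the abstract discusses. The sketch below (with a noisy sinusoid as invented test data) fits AR coefficients and shows the spectral peak landing at the true frequency, as determined by the pole configuration.

```python
import numpy as np

def burg_ar(x, order):
    """Burg recursion: AR coefficients whose power spectrum is the
    maximum-entropy spectrum of the series (a minimal sketch)."""
    f = np.asarray(x, float).copy()   # forward prediction errors
    b = f.copy()                      # backward prediction errors
    a, E = np.array([1.0]), np.mean(f ** 2)
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * ff.dot(bb) / (ff.dot(ff) + bb.dot(bb))     # reflection coeff
        a = np.append(a, 0.0) + k * np.append(0.0, a[::-1])   # Levinson update
        f, b = ff + k * bb, bb + k * ff
        E *= 1.0 - k ** 2
    return a, E

rng = np.random.default_rng(2)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.12 * t) + 0.5 * rng.normal(size=t.size)

a, E = burg_ar(x, order=8)
freqs = np.linspace(0.0, 0.5, 512)
A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(a.size))) @ a
P = E / np.abs(A) ** 2                 # ME/AR spectrum on the unit circle
print(f"spectral peak at f = {freqs[np.argmax(P)]:.3f}")   # close to 0.12
```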

  12. Photovoltaic High-Frequency Pulse Charger for Lead-Acid Battery under Maximum Power Point Tracking

    Directory of Open Access Journals (Sweden)

    Hung-I. Hsieh

    2013-01-01

A photovoltaic pulse charger (PV-PC) using a high-frequency pulse train for charging a lead-acid battery (LAB) is proposed, not only to explore the charging behavior under maximum power point tracking (MPPT) but also to delay sulfating crystallization in the electrode pores of the LAB and so prolong the battery life, which is achieved by a brief pulse break between adjacent pulses that refreshes the discharging of the LAB. Maximum energy transfer between the PV module and a boost current converter (BCC) is modeled to maximize the charging energy for the LAB under different solar insolation. A duty control, guided by a power-increment-aided incremental-conductance MPPT (PI-INC MPPT), is implemented in the BCC so that it operates at the maximum power point (MPP) against random insolation. A 250 W PV-PC system for charging a four-in-series LAB (48 Vdc) is examined. The charging behavior of the PV-PC system is compared with that of a CC-CV charger, and four charging scenarios under different solar insolation changes are investigated and compared with those obtained using INC MPPT.
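
    The plain incremental-conductance rule at the core of PI-INC MPPT can be sketched as below; the paper's power-increment aid is not reproduced, the duty-step size is arbitrary, and the sign of the duty adjustment depends on the converter topology (assumed here such that a larger duty raises the PV operating voltage).

```python
def inc_mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update (plain INC, a sketch).
    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:        # irradiance increased: track the new MPP
            duty += step
        elif di < 0:
            duty -= step
    else:
        if di / dv > -i / v:    # left of the MPP
            duty += step
        elif di / dv < -i / v:  # right of the MPP
            duty -= step
    return min(max(duty, 0.0), 1.0)

# Example update: operating point moved from (30.0 V, 5.2 A) to (30.5 V, 5.1 A)
print(inc_mppt_step(30.5, 5.1, 30.0, 5.2, duty=0.40))   # steps duty down to 0.395
```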

  13. Superior Reproducibility of the Leading to Leading Edge and Inner to Inner Edge Methods in the Ultrasound Assessment of Maximum Abdominal Aortic Diameter

    DEFF Research Database (Denmark)

    Borgbjerg, Jens; Bøgsted, Martin; Lindholt, Jes S

    2018-01-01

    Objectives: Controversy exists regarding optimal caliper placement in ultrasound assessment of maximum abdominal aortic diameter. This study aimed primarily to determine reproducibility of caliper placement in relation to the aortic wall with the three principal methods: leading to leading edge...

  14. 50 CFR 259.34 - Minimum and maximum deposits; maximum time to deposit.

    Science.gov (United States)

    2010-10-01

    ... B objective. A time longer than 10 years, either by original scheduling or by subsequent extension... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES CAPITAL CONSTRUCTION FUND...) Minimum annual deposit. The minimum annual (based on each party's taxable year) deposit required by the...

  15. Maritime Load Dependent Lead Times - An Analysis

    DEFF Research Database (Denmark)

    Pahl, Julia; Voss, Stefan

    2017-01-01

in production. Inspired by supply chain planning systems, we analyze the current state of (collaborative) planning in the maritime transport chain with a focus on containers. Regarding the problem of congestion, we place particular emphasis on load dependent lead times (LDLT), which are well studied in production....

  16. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  17. The Time, Space and Matter of Leading

    DEFF Research Database (Denmark)

    Jørgensen, Kenneth Mølbjerg

    2018-01-01

This paper develops an ethical framework of leadership learning from Hannah Arendt's writings. The intention is to identify the important principles of a framework of leading that help empower actors to lead themselves and to engage, interact, influence and inspire others through...

  18. Trading Time with Space - Development of subduction zone parameter database for a maximum magnitude correlation assessment

    Science.gov (United States)

    Schaefer, Andreas; Wenzel, Friedemann

    2017-04-01

Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands and the Lesser Antilles, subduction zones pose a significant hazard to people. To understand the behavior of subduction zones, and especially to identify their capability to produce maximum magnitude earthquakes, various physical models have been developed, leading to a large number of diverse datasets, e.g. from geodesy, geomagnetics, structural geology, etc. There have been various studies that utilize these data to compile subduction zone parameter databases, but mostly concentrating only on the major zones. Here, we compile the largest dataset of subduction zone parameters to date, both in parameter diversity and in the number of subduction zones considered. In total, more than 70 individual sources have been assessed; the parametric data have been combined with seismological data and many further sources, leading to more than 60 individual parameters. Not all parameters have been resolved for each zone, since completeness depends on the data availability and quality for each source. In addition, the 3D down-dip geometry of the majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available, and compared with and verified against results from previous studies. With such a database, a statistical study has been undertaken to identify correlations between these parameters, providing a parameter-driven way to identify the potential for maximum possible magnitudes, and also to identify similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones. Here, it could be expected that if two sources share enough common characteristics, other characteristics of interest may be similar as well. This concept

  19. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
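
    A small simulation illustrates the estimator: since a pure time shift of the pulse leaves the integrated rate over a wide window (nearly) unchanged, maximizing the Poisson log-likelihood reduces to maximizing the sum of log-rates at the photon timestamps. The Gaussian pulse shape, count levels, and grid search are illustrative assumptions, not the article's system model.

```python
import numpy as np

rng = np.random.default_rng(3)
T, tau_true = 10.0, 4.2          # observation window (s), true arrival time (s)
n_s, n_b, w = 80.0, 20.0, 0.3    # mean signal/background counts, pulse width (s)

def rate(t, tau):
    pulse = np.exp(-0.5 * ((t - tau) / w) ** 2) / (w * np.sqrt(2 * np.pi))
    return n_s * pulse + n_b / T          # photons per second

# Draw photon timestamps by thinning a homogeneous Poisson process
lam_max = n_s / (w * np.sqrt(2 * np.pi)) + n_b / T
cand = rng.uniform(0, T, rng.poisson(lam_max * T))
ts = cand[rng.uniform(0, lam_max, cand.size) < rate(cand, tau_true)]

# ML estimate: maximize sum(log rate) over a grid of candidate arrival times
grid = np.linspace(1.0, T - 1.0, 2000)
loglik = [np.log(rate(ts, tau)).sum() for tau in grid]
print(f"true tau = {tau_true}, ML estimate = {grid[np.argmax(loglik)]:.3f}")
```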

  20. Statistics of the first passage time of Brownian motion conditioned by maximum value or area

    International Nuclear Information System (INIS)

    Kearney, Michael J; Majumdar, Satya N

    2014-01-01

    We derive the moments of the first passage time for Brownian motion conditioned by either the maximum value or the area swept out by the motion. These quantities are the natural counterparts to the moments of the maximum value and area of Brownian excursions of fixed duration, which we also derive for completeness within the same mathematical framework. Various applications are indicated. (paper)

  1. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm f...... for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time....

  2. 49 CFR 398.6 - Hours of service of drivers; maximum driving time.

    Science.gov (United States)

    2010-10-01

    ... REGULATIONS TRANSPORTATION OF MIGRANT WORKERS § 398.6 Hours of service of drivers; maximum driving time. No person shall drive nor shall any motor carrier permit or require a driver employed or used by it to drive...

  3. Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation

    Directory of Open Access Journals (Sweden)

    Su Yeon Oh

    2006-06-01

The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by ground Neutron Monitors (NM) shows a sinusoidal pattern with an amplitude of 1∼2% of the daily mean. We carried out a statistical study of the tendencies of the local times of the GCR intensity daily maximum and minimum. To test the influence of solar activity and of location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72°N, cut-off rigidity: 12.91 GV) and the high-latitude Oulu (latitude: 65.05°N, cut-off rigidity: 0.81 GV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum occur about 2∼3 hours later in the solar activity maximum year 2000 than in the solar activity minimum year 1996. The Oulu NM station, whose cut-off rigidity is smaller, has its most frequent local times of GCR intensity maximum and minimum later by 2∼3 hours than those of the Haleakala station. This feature is more evident at the solar maximum. The phase of the daily variation in GCR depends upon the interplanetary magnetic field, which varies with solar activity, and on the cut-off rigidity, which varies with geographic latitude.

  4. A maximum principle for time dependent transport in systems with voids

    International Nuclear Information System (INIS)

    Schofield, S.L.; Ackroyd, R.T.

    1996-01-01

    A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)

  5. Administrative Lead Time at Navy Inventory Control Points

    National Research Council Canada - National Science Library

    Granetto, Paul

    1994-01-01

    .... We also evaluated the internal controls established for administrative lead time and the adequacy of management's implementation of the DoD Internal Management Control Program for monitoring administrative lead time...

  6. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; Heijden, van der M.C.

    1996-01-01

    In many multi-echelon inventory systems the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process which generates such lead times is usually not

  7. Modeling stochastic lead times in multi-echelon systems

    NARCIS (Netherlands)

    Diks, E.B.; van der Heijden, M.C.

    1997-01-01

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well

  8. Stochastic behavior of a cold standby system with maximum repair time

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2015-09-01

The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority, and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. A single server visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time in normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible within a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time, and maximum repair time of the unit are exponentially distributed, while the repair and maintenance times follow arbitrary distributions. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained by using the technique of semi-Markov processes and the regenerative point technique (RPT). To highlight the importance of the study, numerical results are also obtained for MTSF, availability, and the profit function.

  9. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  10. A polynomial time algorithm for solving the maximum flow problem in directed networks

    International Nuclear Information System (INIS)

    Tlas, M.

    2015-01-01

An efficient polynomial time algorithm for solving maximum flow problems is proposed in this paper. The algorithm is based on the binary representation of capacities; it solves the maximum flow problem as a sequence of O(m) shortest path problems on residual networks with n nodes and m arcs. It runs in O(m²r) time, where r is the smallest integer greater than or equal to log B, and B is the largest arc capacity of the network. A numerical example is illustrated using the proposed algorithm. (author)
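
    The "sequence of shortest path problems on residual networks" idea can be illustrated with the classic Edmonds-Karp algorithm, which augments along BFS-shortest residual paths; this is a stand-in sketch on a toy graph, not the author's capacity-scaling variant.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest residual path."""
    residual = defaultdict(lambda: defaultdict(int))
    for u in cap:
        for v, c in cap[u].items():
            residual[u][v] += c
    flow = 0
    while True:
        parent = {s: None}                 # BFS for a shortest residual path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                    # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= aug
            residual[v][u] += aug          # open the reverse residual arc
        flow += aug

graph = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(graph, "s", "t"))   # 5
```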

  11. Real time estimation of photovoltaic modules characteristics and its application to maximum power point operation

    Energy Technology Data Exchange (ETDEWEB)

    Garrigos, Ausias; Blanes, Jose M.; Carrasco, Jose A. [Area de Tecnologia Electronica, Universidad Miguel Hernandez de Elche, Avda. de la Universidad s/n, 03202 Elche, Alicante (Spain); Ejea, Juan B. [Departamento de Ingenieria Electronica, Universidad de Valencia, Avda. Dr Moliner 50, 46100 Valencia, Valencia (Spain)

    2007-05-15

In this paper, an approximate curve fitting method for photovoltaic modules is presented. The operation is based on solving a simple solar cell electrical model with a microcontroller in real time. Only four voltage and current coordinates are needed to obtain the solar module parameters and set its operation at maximum power under any conditions of illumination and temperature. Despite its simplicity, this method is suitable for low cost real time applications, such as a control loop reference generator in photovoltaic maximum power point circuits. The theory that supports the estimator, together with simulations and experimental results, is presented. (author)
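
    A toy version of the underlying electrical model: the ideal single-diode equation (series and shunt resistance omitted for brevity) swept over voltage to locate the maximum power point. All parameter values are invented for illustration; the paper's four-coordinate parameter extraction is not reproduced.

```python
import numpy as np

# Ideal single-diode PV model: I = Iph - I0 * (exp(V / (n * Vt)) - 1)
Iph, I0, n_ideal, Vt = 5.0, 1e-9, 1.3, 0.0257   # illustrative cell parameters

V = np.linspace(0.0, 0.75, 2000)
I = Iph - I0 * (np.exp(V / (n_ideal * Vt)) - 1.0)
P = V * I
k = int(np.argmax(P))                            # index of the maximum power point
print(f"MPP: V = {V[k]:.3f} V, I = {I[k]:.3f} A, P = {P[k]:.3f} W")
```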

  12. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  13. Periodic capacity management under a lead-time performance constraint

    NARCIS (Netherlands)

    Büyükkaramikli, N.C.; Bertrand, J.W.M.; Ooijen, van H.P.G.

    2013-01-01

    In this paper, we study a production system that operates under a lead-time performance constraint which guarantees the completion of an order before a pre-determined lead-time with a certain probability. The demand arrival times and the service requirements for the orders are random. To reduce the

  14. Superior Reproducibility of the Leading to Leading Edge and Inner to Inner Edge Methods in the Ultrasound Assessment of Maximum Abdominal Aortic Diameter.

    Science.gov (United States)

    Borgbjerg, Jens; Bøgsted, Martin; Lindholt, Jes S; Behr-Rasmussen, Carsten; Hørlyck, Arne; Frøkjær, Jens B

    2018-02-01

Controversy exists regarding optimal caliper placement in the ultrasound assessment of maximum abdominal aortic diameter. This study aimed primarily to determine the reproducibility of caliper placement in relation to the aortic wall with the three principal methods: leading to leading edge (LTL), inner to inner edge (ITI), and outer to outer edge (OTO). The secondary aim was to assess the mean differences between the OTO, ITI, and LTL diameters and estimate the impact of using each of these methods on abdominal aortic aneurysm (AAA) prevalence in a screening program. Radiologists (n=18) assessed the maximum antero-posterior abdominal aortic diameter by completing repeated caliper placements with the OTO, LTL, and ITI methods on 50 still abdominal aortic images obtained from an AAA screening program. Inter-observer reproducibility was calculated as the limit of agreement with the mean (LoA), which represents the expected deviation of a single observer from the mean of all observers. Intra-observer reproducibility was assessed by averaging the LoA of each observer's repeated measurements. Based on data from an AAA screening trial and the estimated mean differences between the three principal methods, AAA prevalence was estimated using each of the methods. The inter-observer LoA of the OTO, ITI, and LTL methods was 2.6, 1.9, and 1.9 mm, whereas the intra-observer LoA was 2.0, 1.6, and 1.5 mm, respectively. Mean differences of 5.0 mm were found between OTO and ITI measurements, 2.6 mm between OTO and LTL measurements, and 2.4 mm between LTL and ITI measurements. The prevalence of AAA almost doubled using OTO instead of ITI, while the difference between ITI and LTL was minor (3.3% vs. 4.0% AAA). The study shows superior reproducibility of the LTL and ITI methods compared with the OTO method of caliper placement in ultrasound determination of maximum abdominal aortic diameter, and the choice of caliper placement method significantly affects the prevalence of AAA in screening programs.

  15. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.

  16. The Maximum Entropy Method for Optical Spectrum Analysis of Real-Time TDDFT

    International Nuclear Information System (INIS)

    Toogoshi, M; Kano, S S; Zempo, Y

    2015-01-01

The maximum entropy method (MEM) is one of the key techniques for spectral analysis. Its major feature is that the low frequency part of a spectrum can be described from short time-series data. We therefore applied the MEM to analyse the spectrum obtained from the time-dependent dipole moment computed by real-time time-dependent density functional theory (TDDFT), which is intensively studied for computing optical properties. In the MEM analysis, however, the maximum lag of the autocorrelation is restricted by the total number of time-series data. As an improved MEM analysis, we propose using a concatenated data set made from the raw data repeated several times. We have applied this technique to the spectral analysis of the TDDFT dipole moment of ethylene and of oligo-fluorene with n = 8. As a result, a higher resolution can be obtained, closer to that of a Fourier transform of data actually time-evolved over the same total number of time steps. The efficiency and the characteristic features of this technique are presented in this paper. (paper)

  17. Short-time maximum entropy method analysis of molecular dynamics simulation: Unimolecular decomposition of formic acid

    Science.gov (United States)

    Takahashi, Osamu; Nomura, Tetsuo; Tabayashi, Kiyohiko; Yamasaki, Katsuyoshi

    2008-07-01

We performed spectral analysis using the maximum entropy method instead of the traditional Fourier transform technique to investigate short-time behavior in molecular systems, such as energy transfer between vibrational modes and chemical reactions. This procedure was applied to direct ab initio molecular dynamics calculations of the decomposition of formic acid. More reactive trajectories for dehydration than for decarboxylation were obtained for Z-formic acid, which is consistent with the predictions of previous theoretical and experimental studies. Short-time maximum entropy method analyses were performed for typical reactive and non-reactive trajectories. Spectrograms of a reactive trajectory were obtained; these clearly showed the reactant, transient, and product regions, especially for the dehydration path.

  18. Time at which the maximum of a random acceleration process is reached

    International Nuclear Information System (INIS)

    Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea

    2010-01-01

We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t_m|T) of the time t_m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
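
    The density p(t_m|T) for case (ii) is easy to probe numerically: integrate white noise twice to obtain random-acceleration paths and histogram the time of the maximum. The discretization and path counts below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps, n_paths = 1.0, 2000, 20_000
dt = T / n_steps

xi = rng.normal(size=(n_paths, n_steps)) * np.sqrt(dt)  # white-noise increments
v = np.cumsum(xi, axis=1)            # velocity: a free Brownian motion, v(0) = 0
x = np.cumsum(v, axis=1) * dt        # position: the random acceleration process
t_max = (np.argmax(x, axis=1) + 1) * dt   # time at which each path peaks

hist, edges = np.histogram(t_max, bins=10, range=(0.0, T), density=True)
for lo, p in zip(edges[:-1], hist):       # empirical estimate of p(t_m | T)
    print(f"[{lo:.1f}, {lo + 0.1:.1f}): {p:.2f}")
```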

  19. REDUCING LEAD TIME USING FUZZY LOGIC AT JOB SHOP

    Directory of Open Access Journals (Sweden)

    EMİN GÜNDOĞAR

    2000-06-01

One problem encountered in job shop scheduling is that the minimum production size differs from machine to machine, which increases lead time. A new approach was developed to reduce lead time: parts whose materials are in stock and whose orders arrive very frequently are assigned to machines first. Because there are many machines and orders, some problems can arise, and in this paper fuzzy logic is used to cope with them. The new approach was simulated in a job shop with 15 machines and 50 orders. Simulation results showed that the new approach reduced lead time by between 27.89% and 32.36%.

  20. STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS

    Directory of Open Access Journals (Sweden)

    Jorge Machado Damázio

    2015-08-01

DOI: 10.12957/cadest.2014.18302

The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans assumed stationarity of the historical flood time series. Land cover changes and the rapidly increasing level of atmospheric greenhouse gases over the last century may be affecting flood regimes in these valleys, so nonstationary modelling may be needed to re-assess dam safety and flood control operation rules for the existing reservoir systems. Six annual maximum daily streamflow time series are analysed. The time series are plotted together with fitted smooth loess functions, and non-parametric statistical tests are performed to check the significance of the apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location parameter varies linearly with time. Stationary and non-stationary models are compared with the likelihood ratio statistic. In four of the six analyzed time series, non-stationary modelling outperformed stationary modelling.

Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
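
    A sketch of the likelihood-ratio comparison described above, assuming a GEV whose location parameter varies linearly with time for the non-stationary model; the synthetic data, starting values, and scipy-based fit are illustrative, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, genextreme

def gev_loglik(x, t, nonstationary):
    """Maximized GEV log-likelihood; location mu(t) = mu0 + mu1 * t
    when nonstationary (a minimal sketch)."""
    def nll(p):
        mu = p[0] + (p[1] * t if nonstationary else 0.0)
        sigma, shape = p[-2], p[-1]
        if sigma <= 0:
            return np.inf
        return -genextreme.logpdf(x, shape, loc=mu, scale=sigma).sum()
    p0 = [x.mean(), 0.0, x.std(), 0.1] if nonstationary else [x.mean(), x.std(), 0.1]
    return -minimize(nll, p0, method="Nelder-Mead").fun

rng = np.random.default_rng(5)
t = np.arange(80.0)   # synthetic annual-maximum record with a trend in location
x = genextreme.rvs(-0.1, loc=100 + 0.4 * t, scale=15, size=t.size, random_state=rng)

# Twice the log-likelihood gain of the trend model, compared to chi-square (1 dof)
lr = 2 * (gev_loglik(x, t, True) - gev_loglik(x, t, False))
print(f"likelihood ratio = {lr:.1f}, p-value = {chi2.sf(lr, df=1):.4f}")
```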

  1. PROCESS INNOVATION: HOLISTIC SCENARIOS TO REDUCE TOTAL LEAD TIME

    Directory of Open Access Journals (Sweden)

    Alin POSTEUCĂ

    2015-11-01

The globalization of markets requires the continuous development of holistic business scenarios to ensure the flexibility needed to satisfy customers. Continuous improvement of the supply chain presupposes continuous improvement of material and product lead times and flows, of raw material and finished product stocks, and sourcing from suppliers located as close by as possible. The contribution of our study is to present holistic scenarios for total lead time improvement and innovation through the implementation of supply chain policy.

  2. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach

  3. Determination of maximum isolation times in case of internal flooding due to pipe break

    International Nuclear Information System (INIS)

    Varas, M. I.; Orteu, E.; Laserna, J. A.

    2014-01-01

This paper describes the process followed in preparing the flooding manual of Cofrentes NPP: identifying the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and determining the recommended isolation mode from the point of view of the location of the break, the location of the 1E equipment, and human factors. (Author)

  4. The Research of Car-Following Model Based on Real-Time Maximum Deceleration

    Directory of Open Access Journals (Sweden)

    Longhai Yang

    2015-01-01

This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well; the disadvantages of the IDM at high and constant speeds are analyzed. A new car-following model applicable to ACC is established accordingly by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different road conditions. In the first, the vehicles drive on a single road, taking a dry asphalt road as the example in this paper. In the second, vehicles drive onto a different road surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving safety and comfort but also maintain steady driving of the vehicle with a smaller time headway than the IDM.
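
    For context, the standard IDM acceleration function that the paper modifies is shown below; here the comfortable deceleration b is a fixed constant, whereas the paper replaces the desired-gap term with a real-time estimate of the maximum deceleration. Parameter values are typical textbook choices, not the paper's.

```python
import math

def idm_acceleration(v, dv, gap, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0):
    """IDM acceleration (m/s^2). v: own speed, dv: approach rate v - v_lead,
    gap: bumper-to-bumper distance; v0 desired speed, T time headway,
    a maximum acceleration, b comfortable deceleration, s0 jam distance."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Follower closing in on a slower leader: strong braking results
print(idm_acceleration(v=30.0, dv=5.0, gap=40.0))   # clearly negative
```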

  5. Optimal protocol for maximum work extraction in a feedback process with a time-varying potential

    Science.gov (United States)

    Kwon, Chulan

    2017-12-01

    The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.

  6. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause error in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving in both space and time axis to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
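
    A minimal sketch of the windowed-variance imaging condition on a synthetic wavefield: compute the amplitude variance in a moving space-time window and take the maximum over time. The array shapes, window size, and the injected "event" are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def max_variance_image(wavefield, window=(5, 5, 5)):
    """Windowed variance of a back-propagated wavefield (axes t, x, z),
    collapsed over time to form the source image."""
    mean = uniform_filter(wavefield, size=window)
    mean_sq = uniform_filter(wavefield ** 2, size=window)
    return (mean_sq - mean ** 2).max(axis=0)

rng = np.random.default_rng(6)
wf = rng.normal(size=(50, 64, 64))   # noisy back-propagated wavefield
wf[25, 32, 32] += 20.0               # a focused event stands out in variance

img = max_variance_image(wf)
print(np.unravel_index(int(np.argmax(img)), img.shape))  # near (32, 32)
```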

  8. Polar coordinated fuzzy controller based real-time maximum-power point control of photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Syafaruddin; Hiyama, Takashi [Department of Computer Science and Electrical Engineering of Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Karatepe, Engin [Department of Electrical and Electronics Engineering of Ege University, 35100 Bornova-Izmir (Turkey)

    2009-12-15

It is crucial to improve photovoltaic (PV) system efficiency and to develop reliable PV generation control systems. There are two ways to increase the efficiency of a PV power generation system. The first is to develop materials offering high conversion efficiency at low cost. The second is to operate PV systems optimally. However, a PV system can be operated optimally only at a specific output voltage, and its output power fluctuates under intermittent weather conditions. Moreover, it is very difficult to test the performance of a maximum-power point tracking (MPPT) controller under identical weather conditions during the development process, and field testing is costly and time consuming. This paper presents a novel real-time simulation technique for PV generation systems using a dSPACE real-time interface system. The proposed system includes an Artificial Neural Network (ANN) and a fuzzy logic controller scheme using polar information; this type of fuzzy logic rule is implemented for the first time to operate the PV module at its optimum operating point. The ANN is utilized to determine the optimum operating voltage for monocrystalline silicon, thin-film cadmium telluride, and triple-junction amorphous silicon solar cells. Verification of the availability and stability of the proposed system through the real-time simulator shows that the proposed system responds accurately for different scenarios and different solar cell technologies. (author)

  9. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Kazuhiro P. Izawa

    2017-12-01

Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II CR, MPT improved for all cardiac surgery patients.

  10. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery.

    Science.gov (United States)

    Izawa, Kazuhiro P; Kasahara, Yusuke; Hiraki, Koji; Hirano, Yasuyuki; Watanabe, Satoshi

    2017-12-21

Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II CR, MPT improved for all cardiac surgery patients.

  11. The impact of project management on nuclear construction lead times

    International Nuclear Information System (INIS)

    Radlaver, M.A.; Bauman, D.S.; Chapel, S.W.

    1985-01-01

    A two-year study of lead times for nuclear power plants found that construction time is affected by six fundamental influences. One of the six is project management. An analysis of construction management teams at 26 nuclear units found that many of the most successful shared five general characteristics: nuclear power experience, skill in project control, adaptability and initiative, commitment to success, and communication and coordination skill

  12. Integrated capacity and inventory management with capacity acquisition lead times

    NARCIS (Netherlands)

    Mincsovics, G.Z.; Tan, T.; Alp, O.

    2009-01-01

    We model a make-to-stock production system that utilizes permanent and contingent capacity to meet non-stationary stochastic demand, where a constant lead time is associated with the acquisition of contingent capacity. We determine the structure of the optimal solution concerning both the

  13. Relative timing of last glacial maximum and late-glacial events in the central tropical Andes

    Science.gov (United States)

    Bromley, Gordon R. M.; Schaefer, Joerg M.; Winckler, Gisela; Hall, Brenda L.; Todd, Claire E.; Rademaker, Kurt M.

    2009-11-01

    Whether or not tropical climate fluctuated in synchrony with global events during the Late Pleistocene is a key problem in climate research. However, the timing of past climate changes in the tropics remains controversial, with a number of recent studies reporting that tropical ice age climate is out of phase with global events. Here, we present geomorphic evidence and an in-situ cosmogenic 3He surface-exposure chronology from Nevado Coropuna, southern Peru, showing that glaciers underwent at least two significant advances during the Late Pleistocene prior to Holocene warming. Comparison of our glacial-geomorphic map at Nevado Coropuna to mid-latitude reconstructions yields a striking similarity between Last Glacial Maximum (LGM) and Late-Glacial sequences in tropical and temperate regions. Exposure ages constraining the maximum and end of the older advance at Nevado Coropuna range between 24.5 and 25.3 ka, and between 16.7 and 21.1 ka, respectively, depending on the cosmogenic production rate scaling model used. Similarly, the mean age of the younger event ranges from 10 to 13 ka. This implies that (1) the LGM and the onset of deglaciation in southern Peru occurred no earlier than at higher latitudes and (2) that a significant Late-Glacial event occurred, most likely prior to the Holocene, coherent with the glacial record from mid and high latitudes. The time elapsed between the end of the LGM and the Late-Glacial event at Nevado Coropuna is independent of scaling model and matches the period between the LGM termination and Late-Glacial reversal in classic mid-latitude records, suggesting that these events in both tropical and temperate regions were in phase.

  14. Ultrafast time-resolved spectroscopy of lead halide perovskite films

    Science.gov (United States)

    Idowu, Mopelola A.; Yau, Sung H.; Varnavski, Oleg; Goodson, Theodore

    2015-09-01

Recently, lead halide perovskites, which are organic-inorganic hybrid structures, have been found to be highly efficient light absorbers. Herein, we investigate the excited state dynamics and emission properties of lead halide perovskites formed from non-stoichiometric precursors and grown by the interdiffusion method, using steady-state and time-resolved spectroscopic measurements. The influence of different ratios of the non-stoichiometric precursor solution was examined, and the observed photoluminescence properties were correlated with the femtosecond transient absorption measurements.

  15. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  16. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first define a three-dimensional grid in which each thread calculates a sub-block of data, to facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. Then, we optimize the processing flow by computing the noise covariance matrix before the image covariance matrix, reducing the transmission of the original hyperspectral image data. These optimization strategies greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive parts of the algorithm, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
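
    The key optimization described above (estimating the noise covariance first, so the raw cube is traversed only once more for the image covariance) can be sketched on the CPU. The following is a minimal NumPy/SciPy illustration of the MNF ordering itself, not the authors' CUDA implementation; the shift-difference noise estimator and all variable names are illustrative assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def mnf_transform(cube):
            """cube: (rows, cols, bands) hyperspectral image."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(np.float64)

            # Estimate noise from differences between horizontally adjacent
            # pixels, a common spatial-neighborhood proxy in MNF pipelines.
            diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)

            # Noise covariance first, then image covariance, mirroring the
            # optimized computing flow described in the paper.
            C_noise = np.cov(diff, rowvar=False) / 2.0
            C_image = np.cov(X, rowvar=False)

            # Generalized eigenproblem C_image v = lambda C_noise v; large
            # eigenvalues correspond to high signal-to-noise components.
            evals, evecs = eigh(C_image, C_noise)
            order = np.argsort(evals)[::-1]
            return X @ evecs[:, order], evals[order]

        # Example with random data standing in for a real scene.
        components, snr_like = mnf_transform(np.random.rand(64, 64, 32))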

  17. Increasing the maximum daily operation time of MNSR reactor by modifying its cooling system

    International Nuclear Information System (INIS)

    Khamis, I.; Hainoun, A.; Al Halbi, W.; Al Isa, S.

    2006-08-01

    Thermal-hydraulic natural convection correlations have been formulated based on a thorough analysis and modeling of the MNSR reactor. The model gives a detailed description of the thermal and hydraulic aspects of cooling in the core and vessel. In addition, the pressure drop was determined through an elaborate balancing of the overall core pressure drop against the sum of all individual channel pressure drops, employing an iterative scheme. Using this model, an accurate estimate of various time-dependent core-averaged hydraulic parameters (generated power, hydraulic diameters, flow cross-sectional area, etc.) can be made for each of the ten fuel circles in the core. Furthermore, the distributions of coolant and fuel temperatures, including the maximum fuel temperature and its location in the core, can now be determined. Correlations among core-coolant average temperature, reactor power, and core-coolant inlet temperature, during both steady-state and transient cases, have been established and verified against experimental data. Simulating various operating conditions of MNSR, good agreement is obtained at different power levels. Various cooling schemes have been investigated to assess their potential benefits for the operational characteristics of the Syrian MNSR reactor. The analysis shows that an auxiliary cooling system, for the reactor vessel or installed in the pool surrounding the lower section of the reactor vessel, will significantly offset the consumption of excess reactivity due to the negative reactivity temperature coefficient; hence, the maximum operating time of the reactor is extended. The suggested 'micro model

  18. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
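
    The filter idea in the last sentence can be sketched in a few lines: build the standard fractional-differencing impulse response for power-law noise, then form a single lower-triangular filter that adds the white and power-law processes, so the data covariance is F F^T with no Toeplitz restriction. This is an illustration of the approach under common conventions (Hosking-style coefficients with spectral index alpha), not Langbein's production code.

        import numpy as np
        from scipy.linalg import toeplitz, cho_factor, cho_solve

        def powerlaw_filter(n, alpha):
            """Impulse response whose output has a 1/f^alpha spectrum."""
            h = np.zeros(n)
            h[0] = 1.0
            for k in range(1, n):
                h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
            return h

        def combined_filter(n, sigma_w, sigma_pl, alpha):
            # Single lower-triangular filter that *adds* the white and
            # power-law processes, rather than combining them in quadrature.
            H = toeplitz(powerlaw_filter(n, alpha), np.zeros(n))
            return sigma_w * np.eye(n) + sigma_pl * H

        n = 500
        F = combined_filter(n, sigma_w=1.0, sigma_pl=0.5, alpha=1.5)
        C = F @ F.T   # data covariance; rows/columns for missing epochs
                      # can simply be deleted before factoring
        r = np.random.multivariate_normal(np.zeros(n), C)
        quad = r @ cho_solve(cho_factor(C, lower=True), r)  # r^T C^-1 r in MLE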

  19. The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare

    Energy Technology Data Exchange (ETDEWEB)

    Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)

    1997-09-01

    Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. {sup 10}Be and {sup 26}Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000{+-}1800 years ago. (author) 1 fig., 5 refs.

  20. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...

  1. Time evolution of laser-induced breakdown spectrometry of lead

    International Nuclear Information System (INIS)

    Li Zhongwen; Zhang Jianhui

    2011-01-01

    The plasma was generated by a pulsed Nd:YAG laser at the fundamental wavelength of 1.06 μm ablating a metal lead target in air at atmospheric pressure, and time-resolved emission spectra were recorded. The time evolution of the electron temperature was measured from the wavelengths and relative intensities of the spectra; the electron densities were then obtained from the Stark broadening of the Pb line; and the time evolution of electron temperature and electron density along the direction perpendicular to the target surface was imaged. Analysis of the results showed that the electron temperature averaged 14500 K and the electron densities reached up to 10{sup 17} cm{sup -3}. The characteristics of the time evolution of the electron temperature and electron density were qualitatively explained in terms of the generation mechanism of laser-induced plasmas. (authors)

  2. Maximum leaf conductance driven by CO2 effects on stomatal size and density over geologic time.

    Science.gov (United States)

    Franks, Peter J; Beerling, David J

    2009-06-23

    Stomatal pores are microscopic structures on the epidermis of leaves, formed by two specialized guard cells, that control the exchange of water vapor and CO(2) between plants and the atmosphere. Stomatal size (S) and density (D) determine the maximum leaf diffusive (stomatal) conductance of CO(2) (g(cmax)) to sites of assimilation. Although large variations in D observed in the fossil record have been correlated with atmospheric CO(2), the crucial significance of similarly large variations in S has been overlooked. Here, we use physical diffusion theory to explain why large changes in S necessarily accompanied the changes in D and atmospheric CO(2) over the last 400 million years. In particular, we show that high densities of small stomata are the only way to attain the highest g(cmax) values required to counter CO(2) "starvation" at low atmospheric CO(2) concentrations. This explains cycles of increasing D and decreasing S evident in the fossil history of stomata under the CO(2)-impoverished atmospheres of the Permo-Carboniferous and Cenozoic glaciations. The pattern was reversed under rising atmospheric CO(2) regimes. Selection for small S was crucial for attaining high g(cmax) under falling atmospheric CO(2) and, therefore, may represent a mechanism linking CO(2) and the increasing gas-exchange capacity of land plants over geologic time.
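
    The diffusion theory invoked above yields a closed-form anatomical maximum conductance from S and D. Below is a back-of-envelope sketch using the end-corrected form commonly used in this literature, with illustrative gas constants for about 25 degC and invented anatomical values; it is not the paper's dataset, but it reproduces the qualitative point that many small stomata outperform few large ones at equal total pore area.

        import math

        d_wv = 2.49e-5   # diffusivity of water vapour in air, m^2 s^-1 (~25 degC)
        v_air = 0.0245   # molar volume of air, m^3 mol^-1 (~25 degC)

        def g_max(density_m2, pore_area_m2, pore_depth_m):
            """Anatomical maximum diffusive conductance, mol m^-2 s^-1."""
            end_correction = (math.pi / 2.0) * math.sqrt(pore_area_m2 / math.pi)
            return (d_wv / v_air) * density_m2 * pore_area_m2 / (pore_depth_m + end_correction)

        # Same fractional pore area (1%), different stomatal anatomy:
        small_dense = g_max(400e6, 25e-12, 10e-6)    # ~0.70 mol m^-2 s^-1
        large_sparse = g_max(100e6, 100e-12, 20e-6)  # ~0.35 mol m^-2 s^-1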

  3. Bayesian Maximum Entropy space/time estimation of surface water chloride in Maryland using river distances.

    Science.gov (United States)

    Jat, Prahlad; Serre, Marc L

    2016-12-01

    Widespread contamination of surface water chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R2 by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.
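
    The distinctive ingredient above is measuring separation along the river network rather than in a straight line before building the space/time covariance. A toy sketch of that swap follows; the Y-shaped network, distances, and exponential covariance parameters are invented, and the full BME estimation step is not reproduced here.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import shortest_path

        # Monitoring stations 0..4 on a small Y-shaped river network;
        # edge weights are river miles between connected stations.
        edges = [(0, 1, 4.0), (1, 2, 3.0), (1, 3, 5.0), (3, 4, 2.0)]
        n = 5
        w = np.zeros((n, n))
        for i, j, miles in edges:
            w[i, j] = w[j, i] = miles

        river_dist = shortest_path(csr_matrix(w), directed=False)

        def exp_cov(dist, sill=1.0, rng=6.0):
            # With river distances, stations on different branches are
            # "far apart" even when geographically close.
            return sill * np.exp(-3.0 * dist / rng)

        C_river = exp_cov(river_dist)
        print(np.round(C_river, 3))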

  4. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and the statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithm with 1024-QAM would decrease the computational complexity compared to state-of-the-art solutions with 16-QAM.

  5. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    Energy Technology Data Exchange (ETDEWEB)

    Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz

    2016-12-01

    This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.

  6. A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection

    Directory of Open Access Journals (Sweden)

    Guo-Jheng Yang

    2013-08-01

    Full Text Available The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be embedded in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy of 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on making small changes to Arnold's cat map and the initial value of the chaotic system so as to generate different chaotic sequences. Thus, the current time is used not only to make encryption more stringent but also to enhance the security of the digital media.
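
    The two chaotic ingredients named above are easy to demonstrate: the logistic map at u = 4 as a keystream generator and Arnold's cat map as a pixel scrambler, seeded from the current time. The sketch below only illustrates these building blocks under assumed parameters; the paper's actual watermark embedding scheme is not reproduced.

        import time
        import numpy as np

        def logistic_stream(x0, n, u=4.0):
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = u * x * (1.0 - x)   # fully chaotic at u = 4
                xs[i] = x
            return xs

        def arnold_cat(img, iterations=1):
            # Bijective scramble (x, y) -> (x + y, x + 2y) mod N.
            N = img.shape[0]
            out = img
            for _ in range(iterations):
                scrambled = np.empty_like(out)
                for x in range(N):
                    for y in range(N):
                        scrambled[(x + y) % N, (x + 2 * y) % N] = out[x, y]
                out = scrambled
            return out

        # Time-dependent initial value: each embedding uses a different
        # chaotic sequence, as the paper advocates.
        x0 = (time.time() % 1.0) or 0.5
        key = (logistic_stream(x0, 64 * 64) > 0.5).astype(np.uint8).reshape(64, 64)
        bits = np.random.randint(0, 2, (64, 64), dtype=np.uint8)  # stand-in watermark
        protected = arnold_cat(bits ^ key, iterations=3)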

  7. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  8. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation yields an unbiased estimator. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
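
    In practice, maximum likelihood fitting of a two-component mixture is carried out with the EM algorithm. The sketch below fits a two-component Gaussian mixture to synthetic data as a generic illustration of that machinery; it is not the authors' exact specification for the rubber price and exchange rate series.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(1.5, 1.0, 700)])

        w, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([1.0, 1.0])
        for _ in range(200):
            # E-step: posterior responsibility of each component per point.
            dens = w * norm.pdf(x[:, None], mu, sd)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted maximum likelihood updates.
            nk = resp.sum(axis=0)
            w = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        print("weights", w, "means", mu, "sds", sd)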

  9. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  10. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  11. Surface of Maximums of AR(2) Process Spectral Densities and its Application in Time Series Statistics

    Directory of Open Access Journals (Sweden)

    Alexander V. Ivanov

    2017-09-01

    Conclusions. The derived formula for the surface of maximums of noise spectral densities makes it possible to identify the values of the AR(2) process characteristic polynomial coefficients for which a greater rate of convergence to zero of the probabilities of large deviations of the considered estimates can be expected.

  12. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  13. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy maximum-principle when solving a general conservation law. And then we introduce a slope limiter to ensure the sufficient condition which is applicative for both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  14. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy maximum-principle when solving a general conservation law. And then we introduce a slope limiter to ensure the sufficient condition which is applicative for both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  15. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  16. Defense Inventory: Opportunities Exist to Improve the Management of DOD's Acquisition Lead Times for Spare Parts

    National Research Council Canada - National Science Library

    2007-01-01

    .... Management of inventory acquisition lead times is important in maintaining cost-effective inventories, budgeting, and having material available when needed, as lead times are DOD's best estimate...

  17. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  18. Detection of surface electromyography recording time interval without muscle fatigue effect for biceps brachii muscle during maximum voluntary contraction.

    Science.gov (United States)

    Soylu, Abdullah Ruhi; Arpinar-Avsar, Pinar

    2010-08-01

    The effects of fatigue on maximum voluntary contraction (MVC) parameters were examined using force and surface electromyography (sEMG) signals of the biceps brachii muscles (BBM) of 12 subjects. The purpose of the study was to find the sEMG time interval of the MVC recordings that is not affected by muscle fatigue. At least 10s of force and sEMG signals of the BBM were recorded simultaneously during MVC. The subjects reached the maximum force level within 2s by gradually increasing the force, and then contracted the BBM maximally. The time index of each sEMG and force signal was labeled with respect to the time index of the maximum force (i.e., after time normalization, the 0s time index of each sEMG or force signal corresponds to the maximum force point). Then, the first 8s of the sEMG and force signals were divided into 0.5s intervals. Mean force, median frequency (MF), and integrated EMG (iEMG) values were calculated for each interval. Amplitude normalization was performed by dividing the force signals by their mean values over the 0s time interval (i.e., -0.25 to 0.25s). A similar amplitude normalization procedure was applied to the iEMG and MF signals. Statistical analysis (Friedman test with Dunn's post hoc test) was performed on the time- and amplitude-normalized signals (MF, iEMG). Although the ANOVA results did not give statistically significant information about the onset of muscle fatigue, linear regression (mean force vs. time) showed a statistically significant decreasing slope (Pearson-r = 0.9462), indicating that muscle fatigue starts after the 0s time interval, as the muscles cannot maintain their peak force levels. This implies that the most reliable interval for MVC calculation unaffected by muscle fatigue is from the onset of EMG activity to the peak force time. The mean, SD, and range of this interval (excluding the 2s gradual increase time) for the 12 subjects were 2353ms, 1258ms, and 536-4186ms, respectively. Exceeding this interval introduces estimation errors into the maximum amplitude calculations.

  19. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. This could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
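
    For a finite chain, both sides of this trade-off are directly computable: the KSE of a Markov chain with transition matrix P and stationary distribution pi is -sum_i pi_i sum_j P_ij log P_ij, while the mixing time is commonly proxied by the relaxation time 1/(1 - |lambda_2|). A minimal sketch with an arbitrary three-state chain (not one from the paper):

        import numpy as np

        def stationary(P):
            evals, evecs = np.linalg.eig(P.T)
            v = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
            return v / v.sum()

        def ks_entropy(P):
            pi = stationary(P)
            with np.errstate(divide="ignore", invalid="ignore"):
                plogp = np.where(P > 0, P * np.log(P), 0.0)
            return -np.sum(pi * plogp.sum(axis=1))

        def relaxation_time(P):
            mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
            return 1.0 / (1.0 - mags[1])   # inverse spectral gap

        P = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.15, 0.80]])
        print("KSE:", ks_entropy(P), "relaxation time:", relaxation_time(P))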

  20. Optimization of NANOGrav's time allocation for maximum sensitivity to single sources

    International Nuclear Information System (INIS)

    Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel

    2014-01-01

    Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.

  1. Separation of Stochastic and Deterministic Information from Seismological Time Series with Nonlinear Dynamics and Maximum Entropy Methods

    International Nuclear Information System (INIS)

    Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias

    2007-01-01

    We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information.

  2. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  3. ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP

    Directory of Open Access Journals (Sweden)

    Gorbachov, P.

    2013-01-01

    Full Text Available This paper deals with the definition of the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytical modeling of this value when the traffic schedule is unknown to the passengers, under two options of vehicle traffic management on the given route.
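
    For passengers arriving at random because the schedule is unknown, the classical random-incidence result gives the mean wait from the headway moments: E[W] = E[H](1 + CV^2)/2, which attains its minimum E[H]/2 for perfectly regular service. The sketch below states this textbook bound for context; it is not the paper's own model, and the headway values are invented.

        import numpy as np

        def mean_wait(headways_min):
            h = np.asarray(headways_min, dtype=float)
            cv2 = h.var() / h.mean() ** 2
            return h.mean() * (1.0 + cv2) / 2.0

        regular = [10, 10, 10, 10]   # perfectly even 10-minute headways
        bunched = [2, 18, 3, 17]     # same mean headway, badly bunched
        print(mean_wait(regular))    # 5.0 min (minimum: E[H]/2)
        print(mean_wait(bunched))    # ~7.8 min (irregularity penalty)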

  4. Nurses: Leading change one day at a time.

    Science.gov (United States)

    Chubbs, Katherine

    2014-06-01

    There has been enormous progress in nursing, and that progress did not come without change. Nurses have two choices: to be a part of developing and leading the change, or to have change happen to them. Copyright © 2014 Longwoods Publishing.

  5. Time to reach tacrolimus maximum blood concentration, mean residence time, and acute renal allograft rejection: an open-label, prospective, pharmacokinetic study in adult recipients.

    Science.gov (United States)

    Kuypers, Dirk R J; Vanrenterghem, Yves

    2004-11-01

    The aims of this study were to determine whether disposition-related pharmacokinetic parameters such as T(max) and mean residence time (MRT) could be used as predictors of the clinical efficacy of tacrolimus in renal transplant recipients, and to what extent these parameters are influenced by clinical variables. We previously demonstrated, in a prospective pharmacokinetic study in de novo renal allograft recipients, that patients who experienced early acute rejection did not differ from patients free from rejection in terms of tacrolimus pharmacokinetic exposure parameters (dose-interval AUC, preadministration trough blood concentration, C(max), dose). However, recipients with acute rejection reached the mean (SD) tacrolimus T(max) significantly faster than those who were free from rejection (0.96 [0.56] hour vs 1.77 [1.06] hours). Because neither clearance nor T(1/2) could explain this unusual finding, we used data from the previous study to calculate MRT from the concentration-time curves. As part of the previous study, 100 patients (59 male, 41 female; mean [SD] age, 51.4 [13.8] years; age range, 20-75 years) were enrolled. The calculated MRT was significantly shorter in recipients with acute allograft rejection (11.32 [0.31] hours vs 11.52 [0.28] hours; P = 0.02), and, like T(max), it was an independent risk factor for acute rejection in a multivariate logistic regression model (odds ratio, 0.092 [95% CI, 0.014-0.629]; P = 0.01). Analyzing the impact of demographic, transplantation-related, and biochemical variables on MRT, we found that increasing serum albumin and hematocrit concentrations were associated with a prolonged MRT. Both a faster T(max) and a shorter calculated MRT were associated with a higher incidence of early acute graft rejection. These findings suggest that a shorter transit time of tacrolimus in certain tissue compartments, rather than failure to attain a maximum absolute tacrolimus blood concentration, might lead to inadequate immunosuppression early after transplantation.

  6. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

    Full Text Available A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of those jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to them. We propose the sufficient and necessary conditions for the problems to have a positive integer solution.

  7. Timing A Pulsed Thin Film Pyroelectric Generator For Maximum Power Density

    International Nuclear Information System (INIS)

    Smith, A.N.; Hanrahan, B.M.; Neville, C.J.; Jankowski, N.R

    2016-01-01

    Pyroelectric thermal-to-electric energy conversion is accomplished by a cyclic process of thermally inducing polarization changes in the material under an applied electric field. The pyroelectric MEMS device investigated consisted of a thin film PZT capacitor with platinum bottom and iridium oxide top electrodes. Electric fields of 1-20 kV/cm with a 30% duty cycle and frequencies from 0.1-100 Hz were tested, with a modulated continuous-wave IR laser with a 20% duty cycle creating temperature swings of 0.15-26 °C on the pyroelectric receiver. The net output power of the device was highly sensitive to the phase delay between the laser power and the applied electric field. A thermal model was developed to predict and explain the power loss associated with finite charge and discharge times. Excellent agreement was achieved between the theoretical model and the experimental results for the measured power density versus phase delay. Limitations on the charging and discharging rates result in reduced power and lower efficiency due to a reduced net work per cycle. (paper)

  8. The Sidereal Time Variations of the Lorentz Force and Maximum Attainable Speed of Electrons

    Science.gov (United States)

    Nowak, Gabriel; Wojtsekhowski, Bogdan; Roblin, Yves; Schmookler, Barak

    2016-09-01

    The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab produces electrons that orbit through a known magnetic system. The electron beam's momentum can be determined from the radius of the beam's orbit. This project compares the beam orbit's radius while travelling in a transverse magnetic field with theoretical predictions from special relativity, which predict a constant beam orbit radius. Variations in the beam orbit's radius are found by comparing the beam's momentum entering and exiting a magnetic arc. Beam position monitors (BPMs) provide the information needed to calculate the beam momentum. Multiple BPMs are included in the analysis and fitted using the method of least squares to decrease statistical uncertainty. Preliminary results from data collected over a 24-hour period show that the relative momentum change was less than 10^-4. Further study will be conducted, including larger time spans and stricter cuts applied to the BPM data. The data from this analysis will be used in a larger experiment attempting to verify special relativity. While the project is not traditionally nuclear physics, it involves the same technology (the CEBAF accelerator) and the same methods (ROOT) as a nuclear physics experiment. DOE SULI Program.

  9. Do Declining Discount Rates lead to Time Inconsistent Economic Advice?

    DEFF Research Database (Denmark)

    Hansen, Anders Chr.

    2006-01-01

    This paper addresses the risk of time inconsistency in economic appraisals related to the use of hyperbolic discounting (declining discount rates) instead of exponential discounting (constant discount rate). Many economists are uneasy about the prospects of potential time inconsistency. The paper...

  10. Use of queue modelling in the analysis of elective patient treatment governed by a maximum waiting time policy

    DEFF Research Database (Denmark)

    Kozlowski, Dawid; Worthington, Dave

    2015-01-01

    This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both continuous-time Markov chain and discrete event simulation models to provide an insightful analysis of public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by better understanding of the policy implications on the utilization of public hospital resources.
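
    The discrete-event half of such an analysis can be sketched very compactly: simulate elective arrivals into a multi-server exponential queue and record how many patients breach a maximum-wait guarantee. All parameters below are invented, and the paper's Markov chain and simulation models are considerably richer; this is only a toy of the question being asked.

        import heapq
        import random

        def simulate(lam=9.0, mu=1.0, servers=10, max_wait=4.0, horizon=10_000):
            random.seed(1)
            t, free, queue, breaches, served = 0.0, servers, [], 0, 0
            events = [(random.expovariate(lam), "arrival")]
            while events:
                t, kind = heapq.heappop(events)
                if t > horizon:
                    break
                if kind == "arrival":
                    heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
                    queue.append(t)
                else:                      # a server becomes free
                    free += 1
                while free and queue:      # start service in FIFO order
                    arrived = queue.pop(0)
                    served += 1
                    breaches += (t - arrived) > max_wait
                    free -= 1
                    heapq.heappush(events, (t + random.expovariate(mu), "departure"))
            return breaches / served

        print(f"fraction breaching guarantee: {simulate():.3%}")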

  11. Land processes lead to surprising patterns in atmospheric residence time

    Science.gov (United States)

    van der Ent, R.; Tuinenburg, O.

    2017-12-01

    Our research using atmospheric moisture tracking methods shows that the global average atmospheric residence time of evaporation is 8-10 days. This residence time appears to be Gamma distributed, with a higher probability of shorter-than-average residence times and a long tail. As a consequence, the median of this residence time is around 5 days. In some places in the world, during the first few hours/days after evaporation there seems to be little chance for a moisture particle to precipitate again, which is reflected by a Gamma distribution having a shape parameter below 1. In this study we present global maps of this parameter using different datasets (GLDAS and ERA-Interim). The shape parameter is thus also a measure of the land-atmosphere coupling strength along the path of the atmospheric water particle. We also find that the different evaporation components (canopy interception, soil evaporation, and transpiration) appear to have different residence time distributions. We find a daily cycle in the residence time distribution over land, which is not present over the oceans. In this paper we will show which of the evaporation components is mainly responsible for this daily pattern and thus exhibits the largest daily cycle of land-atmosphere coupling strength.

  12. Lead-Time Models Should Not Be Used to Estimate Overdiagnosis in Cancer Screening

    DEFF Research Database (Denmark)

    Zahl, Per-Henrik; Jørgensen, Karsten Juhl; Gøtzsche, Peter C

    2014-01-01

    Lead-time can mean two different things: Clinical lead-time is the lead-time for clinically relevant tumors; that is, those that are not overdiagnosed. Model-based lead-time is a theoretical construct where the time when the tumor would have caused symptoms is not limited by the person's death. It is the average time at which the diagnosis is brought forward for both clinically relevant and overdiagnosed cancers. When screening for breast cancer, clinical lead-time is about 1 year, while model-based lead-time varies from 2 to 7 years. There are two different methods to calculate overdiagnosis in cancer screening--the excess-incidence approach and the lead-time approach--that rely on these two different lead-time definitions. Overdiagnosis when screening with mammography has varied from 0 to 75%. We have explained that these differences are mainly caused by using different definitions and methods.

  13. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    NARCIS (Netherlands)

    Leus, G.; Petré, F.; Moonen, M.

    2004-01-01

    In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input

  14. Real Time Corrosion Monitoring in Lead and Lead-Bismuth Systems

    Energy Technology Data Exchange (ETDEWEB)

    James F. Stubbins; Alan Bolind; Ziang Chen

    2010-02-25

    The objective of this research program is to develop a real-time, in situ corrosion monitoring technique for flowing liquid Pb and eutectic PbBi (LBE) systems in a temperature range of 400 to 650 C. These conditions are relevant to future liquid-metal-cooled fast reactor operating parameters. This program was aligned with the Gen IV reactor initiative to develop technologies to support the design and operation of a Pb- or LBE-cooled fast reactor. The ability to monitor corrosion for protection of structural components is a high-priority issue for the safe and prolonged operation of advanced liquid metal fast reactor systems. In those systems, protective oxide layers are intentionally formed and maintained to limit corrosion rates during operation. This program developed a real-time, in situ corrosion monitoring technique using impedance spectroscopy (IS) technology.

  15. First-passage time: a conception leading to superstatistics

    Directory of Open Access Journals (Sweden)

    V.V.Ryazanov

    2006-01-01

    Full Text Available To describe the nonequilibrium states of a system, we introduce a new thermodynamic parameter -- the lifetime (the first-passage time) of a system. We write down the statistical distributions that can be obtained from the mesoscopic description, which characterizes the behaviour of a system by specifying its stochastic processes. Superstatistics, introduced in [Beck C., Cohen E.G.D., Physica A, 2003, 322A, 267] as fluctuating intensive thermodynamical parameters, are obtained from the statistical distribution with the lifetime (the random time to system degeneracy) as a thermodynamical parameter, along with a generalization of superstatistics.

  16. Anticipation of lead time performance in supply chain operations planning

    NARCIS (Netherlands)

    Jansen, M.M.; Kok, de A.G.; Fransoo, J.C.

    2009-01-01

    Whilst being predominantly used in practice, linear and mixed integer programming models for Supply Chain Operations Planning (SCOP) are not well suited for modeling the relationship between the release of work to a production unit and its output over time. In this paper we propose an approach where

  17. Influence of dispatching rules on average production lead time for multi-stage production systems.

    Science.gov (United States)

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant of the applied dispatching rule and can be used as a dispatching-rule-independent indicator for single-stage production systems.
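
    The single-stage effect established in Theorem 2 is easy to observe numerically: sequencing the same set of jobs FIFO versus shortest processing time (SPT) changes the average lead time. A didactic sketch with all jobs available at time zero (not the paper's simulation study):

        import random

        random.seed(42)
        jobs = [random.expovariate(1.0) for _ in range(1000)]  # processing times

        def avg_lead_time(processing_times):
            t, total = 0.0, 0.0
            for p in processing_times:   # lead time = waiting + processing
                t += p
                total += t
            return total / len(processing_times)

        print("FIFO:", avg_lead_time(jobs))
        print("SPT :", avg_lead_time(sorted(jobs)))               # shortest first
        print("LPT :", avg_lead_time(sorted(jobs, reverse=True))) # longest first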

  18. Lead Times – Their Behavior and the Impact on Planning and Control in Supply Chains

    Directory of Open Access Journals (Sweden)

    Nielsen Peter

    2017-06-01

    Full Text Available Lead times and their nature have received limited interest in the literature despite their large impact on the performance and management of supply chains. This paper presents a method, and a case implementation of the same, to establish the behavior of real lead times in supply chains. The paper explores the behavior of lead times and illustrates how, in one particular case, they can and should be considered to be independent and identically distributed (i.i.d.). The conclusion is also that the stochastic nature of the lead times contributes more to lead-time demand variance than demand variance does.
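
    The final conclusion can be read through the textbook lead-time-demand identity: for i.i.d. per-period demand D and an independent lead time L (in periods), Var(LTD) = E[L]*Var(D) + E[D]^2*Var(L). The numbers below are invented purely to show how easily the lead-time term dominates.

        mean_d, var_d = 100.0, 400.0   # per-period demand
        mean_l, var_l = 5.0, 4.0       # stochastic lead time

        demand_term = mean_l * var_d           #  2,000
        leadtime_term = mean_d ** 2 * var_l    # 40,000
        var_ltd = demand_term + leadtime_term
        print(var_ltd, leadtime_term / var_ltd)  # lead-time share ~95%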

  19. Neutron CT with a multi-detector system leading to drastical reduction of the measuring time

    International Nuclear Information System (INIS)

    Hehn, G.; Pfister, G.; Schatz, A.; Goebel, J.; Kofler, R.

    1993-09-01

    By means of numerical simulation methods and their verification against measurements, it could be shown that such a detector system can be realized for a line beam with 1-2 detectors per cm. With the maximum available beam width of the fast neutron field at the FRM, approximately 20 detectors can be used, reducing the measuring time to 0.5-1 hour. A multi-detector system for a line beam of thermal neutrons was constructed, tested, and used for CT measurements. With this detector system, the measurement of thinner layers with better spatial resolution could be realized. The electronic discrimination between neutrons and gamma rays has been improved. This discrimination was used in all CT measurements to obtain transmission values for both kinds of radiation and to reconstruct two complementary CT images. The use of polyenergetic radiation causes spectral shifts in the transmission spectrum, leading to artifacts in the reconstructed CT image. The transmission values must be spectrally corrected before image reconstruction, because the image artifacts complicate the image evaluation or make it impossible. A new energy-selective procedure for online spectral correction was developed. This method is based on the concept of measuring, in addition to the integral transmission value, its pulse height spectrum, and performing the correction depending on the changes in this pulse height spectrum. (orig./HP) [de]

  20. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    Science.gov (United States)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.

  1. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    International Nuclear Information System (INIS)

    Almog, Assaf; Garlaschelli, Diego

    2014-01-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information. (paper)

  2. "Just-in-Time" Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life

    Energy Technology Data Exchange (ETDEWEB)

    DeVault, Robert C [ORNL

    2009-01-01

    Conventional methods of vehicle operation for Plug-in Hybrid Vehicles first discharge the battery to a minimum State of Charge (SOC) before switching to charge sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery, so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These Just-in-Time methods provide maximum effective battery life while getting virtually the same electricity from the grid.
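
    Stripped to its core, the "Just-in-Time" strategy is simple energy arithmetic: hold the battery at a life-friendly SOC, and begin charge depletion only when the usable energy above the minimum SOC just covers the distance left to the next plug. A simplified sketch with invented parameters; a real controller must also handle route, terrain, and consumption uncertainty.

        def depletion_start_km(capacity_kwh, soc_hold, soc_min, wh_per_km):
            """Distance before the plug at which to switch to charge depleting."""
            usable_kwh = (soc_hold - soc_min) * capacity_kwh
            return usable_kwh * 1000.0 / wh_per_km

        start = depletion_start_km(capacity_kwh=16.0, soc_hold=0.60,
                                   soc_min=0.25, wh_per_km=180.0)
        # (0.60 - 0.25) * 16 kWh = 5.6 kWh -> ~31 km at 180 Wh/km
        print(f"switch to charge-depleting {start:.0f} km before the plug")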

  3. Comparison of Inventory Systems with Service, Positive Lead-Time, Loss, and Retrial of Customers

    Directory of Open Access Journals (Sweden)

    A. Krishnamoorthy

    2007-01-01

    Full Text Available We analyze and compare three (s,S) inventory systems with positive service time and retrial of customers. In all of these systems, arrivals of customers form a Poisson process and service times are exponentially distributed. When the inventory level depletes to s due to services, an order of replenishment is placed. The lead-time follows an exponential distribution. In model I, an arriving customer, finding the inventory dry or the server busy, proceeds to an orbit with probability γ and is lost forever with probability (1−γ). A retrial customer in the orbit, finding the inventory dry or the server busy, returns to the orbit with probability δ and is lost forever with probability (1−δ). In addition to the description in model I, we provide a buffer of varying (finite) capacity equal to the current inventory level for model II, and another having capacity equal to the maximum inventory level S for model III. In models II and III, an arriving customer, finding the buffer full, proceeds to an orbit with probability γ and is lost forever with probability (1−γ). A retrial customer in the orbit, finding the buffer full, returns to the orbit with probability δ and is lost forever with probability (1−δ). In all these models, the inter-retrial times are exponentially distributed with linear rate. Using the matrix-analytic method, we study these inventory models. Some measures of the system performance in the steady state are derived. A suitable cost function is defined for all three cases and analyzed using graphical illustrations.

  4. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
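
    The conventional reference scheme that these joint-ML estimators improve upon can be sketched quickly: estimate each trial's delay by cross-correlating it against the trial average, align, and re-average. The code below implements that baseline on a synthetic ERP (a Gaussian bump with random per-trial delay); it is the averaged-reference approach the paper compares against, not the proposed joint-ML method.

        import numpy as np

        def align_trials(trials, max_shift=20):
            ref = trials.mean(axis=0)           # averaged-signal reference
            aligned = np.empty_like(trials)
            for i, tr in enumerate(trials):
                xc = np.correlate(tr, ref, mode="full")
                mid = len(tr) - 1               # zero-lag index
                window = xc[mid - max_shift: mid + max_shift + 1]
                shift = np.argmax(window) - max_shift
                aligned[i] = np.roll(tr, -shift)  # wrap-around ignored here
            return aligned.mean(axis=0)

        rng = np.random.default_rng(3)
        t = np.arange(200)
        trials = np.stack([
            np.exp(-0.5 * ((t - 100 - rng.integers(-10, 11)) / 8.0) ** 2)
            + 0.5 * rng.standard_normal(200)
            for _ in range(100)
        ])
        erp = align_trials(trials)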

  5. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.

  6. Quality Improvement, Inventory Management, Lead Time Reduction and Production Scheduling in High-Mix Manufacturing Environments

    Science.gov (United States)

    2017-01-13

    Quality Improvement, Inventory Management, Lead Time Reduction and Production Scheduling in High-mix Manufacturing Environments, by Sean Daigle, B.S. Mechanical Engineering. Submitted to the Department of Mechanical Engineering on January 13, 2017, in

  7. Estimation of daily maximum and minimum air temperatures in urban landscapes using MODIS time series satellite data

    Science.gov (United States)

    Yoo, Cheolhee; Im, Jungho; Park, Seonyoung; Quackenbush, Lindi J.

    2018-03-01

    Urban air temperature is considered a significant variable for a variety of urban issues, and analyzing the spatial patterns of air temperature is important for urban planning and management. However, insufficient weather stations limit accurate spatial representation of temperature within a heterogeneous city. This study used a random forest machine learning approach to estimate daily maximum and minimum air temperatures (Tmax and Tmin) for two megacities with different climate characteristics: Los Angeles, USA, and Seoul, South Korea. This study used eight time series of land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS), together with seven auxiliary variables: elevation, solar radiation, normalized difference vegetation index, latitude, longitude, aspect, and the percentage of impervious area. We found different relationships between the eight time-series LSTs and Tmax/Tmin for the two cities, and designed eight schemes with different input LST variables. The schemes were evaluated using the coefficient of determination (R2) and Root Mean Square Error (RMSE) from 10-fold cross-validation. The best schemes produced R2 of 0.850 and 0.777 and RMSE of 1.7 °C and 1.2 °C for Tmax and Tmin in Los Angeles, and R2 of 0.728 and 0.767 and RMSE of 1.1 °C and 1.2 °C for Tmax and Tmin in Seoul, respectively. LSTs obtained the day before were crucial for estimating daily urban air temperature. Estimated air temperature patterns showed that Tmax was highly dependent on the geographic factors (e.g., sea breeze, mountains) of the two cities, while Tmin showed marginally distinct temperature differences between built-up and vegetated areas in the two cities.
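    A hedged sketch of this modeling setup, with synthetic data standing in for the MODIS LST and auxiliary predictors; hyperparameters and the toy relationship are assumptions, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 8 time-series LST values + 7 auxiliary variables,
# for 1000 "station" samples; "Tmax" depends on a couple of them.
X = rng.normal(size=(1000, 15))
y = 20.0 + 0.8 * X[:, 0] + 0.3 * X[:, 8] + rng.normal(0, 1.0, 1000)

model = RandomForestRegressor(n_estimators=300, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
rmse = -cross_val_score(model, X, y, cv=cv,
                        scoring="neg_root_mean_squared_error")
print(f"10-fold CV: R2 = {r2.mean():.3f}, RMSE = {rmse.mean():.2f} °C")
```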

  8. Integrated vendor-buyer inventory models with inflation and time value of money in controllable lead time

    Directory of Open Access Journals (Sweden)

    Prashant Jindal

    2016-01-01

    Full Text Available In the current critical global economic scenario, inflation plays a vital role in deciding the optimal pricing of goods in any business entity. This article presents two single-vendor single-buyer integrated supply chain inventory models with inflation and the time value of money. Shortage is allowed during the lead time and is partially backlogged. Lead time is controllable and can be reduced at a crashing cost. In the first model, we consider that demand during the lead time follows a normal distribution, and in the second model, it is considered distribution-free. For both cases, our objective is to minimize the integrated system cost by simultaneously optimizing the order quantity, safety factor, lead time and number of lots. The discounted cash flow and classical optimization techniques are used to derive the optimal solution for both cases. Numerical examples, including a sensitivity analysis of system parameters, are provided to validate the results of the supply chain models.
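    The discounted cash flow ingredient is straightforward to sketch: per-cycle costs are discounted back to present value at a continuous rate. A minimal illustration with assumed numbers (not from the article):

```python
import math

# Present value of a repeating per-cycle supply chain cost under
# continuous discounting. All numbers are illustrative assumptions.
def npv_of_cycles(cost_per_cycle, cycle_len_yr, horizon_yr, r):
    n = int(horizon_yr / cycle_len_yr)            # number of cycles
    return sum(cost_per_cycle * math.exp(-r * k * cycle_len_yr)
               for k in range(n))

print(f"NPV of costs: {npv_of_cycles(10_000.0, 0.25, 5.0, r=0.08):,.0f}")
```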

  9. A note on the catch-up time method for estimating lead or sojourn time in prostate cancer screening

    NARCIS (Netherlands)

    G. Draisma (Gerrit); J.M. van Rosmalen (Joost)

    2013-01-01

    Models of cancer screening assume that cancers are detectable by screening before being diagnosed clinically through symptoms. The duration of this preclinical phase is called sojourn time, and it determines how much diagnosis might be advanced in time by the screening test (lead time).

  10. Studying DDT Susceptibility at Discriminating Time Intervals Focusing on Maximum Limit of Exposure Time Survived by DDT Resistant Phlebotomus argentipes (Diptera: Psychodidae): an Investigative Report.

    Science.gov (United States)

    Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay

    2017-07-24

    Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for the resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of the laboratory-reared DDT-resistant strain of P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of tolerance limits with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.

  11. Time course of recovery following resistance training leading or not to failure.

    Science.gov (United States)

    Morán-Navarro, Ricardo; Pérez, Carlos E; Mora-Rodríguez, Ricardo; de la Cruz-Sánchez, Ernesto; González-Badillo, Juan José; Sánchez-Medina, Luis; Pallarés, Jesús G

    2017-12-01

    To describe the acute and delayed time course of recovery following resistance training (RT) protocols differing in the number of repetitions (R) performed in each set (S) out of the maximum possible number (P). Ten resistance-trained men undertook three RT protocols [S × R(P)]: (1) 3 × 5(10), (2) 6 × 5(10), and (3) 3 × 10(10) in the bench press (BP) and full squat (SQ) exercises. Selected mechanical and biochemical variables were assessed at seven time points (from −12 h to +72 h post-exercise). Countermovement jump height (CMJ) and movement velocity against the load that elicited a 1 m·s⁻¹ mean propulsive velocity (V1) and 75% 1RM in the BP and SQ were used as mechanical indicators of neuromuscular performance. Training to muscle failure in each set [3 × 10(10)], even when compared to completing the same total exercise volume [6 × 5(10)], resulted in a significantly higher acute decline of CMJ and velocity against the V1 and 75% 1RM loads in both BP and SQ. In contrast, recovery from the 3 × 5(10) and 6 × 5(10) protocols was significantly faster between 24 and 48 h post-exercise compared to 3 × 10(10). Markers of acute (ammonia, growth hormone) and delayed (creatine kinase) fatigue showed a markedly different course of recovery between protocols, suggesting that training to failure slows down recovery up to 24-48 h post-exercise. RT leading to failure considerably increases the time needed for the recovery of neuromuscular function and metabolic and hormonal homeostasis. Avoiding failure would allow athletes to be in a better neuromuscular condition to undertake a new training session or competition in a shorter period of time.

  12. Critical spare parts ordering decisions using conditional reliability and stochastic lead time

    International Nuclear Information System (INIS)

    Godoy, David R.; Pascual, Rodrigo; Knights, Peter

    2013-01-01

    Asset-intensive companies face great pressure to reduce operation costs and increase utilization. This scenario often leads to over-stress on critical equipment and its associated spare parts, affecting availability, reliability, and system performance. As these resources have a considerable impact on financial and operational structures, there is demand for decision-making methods for the management of spare parts processes. We propose an ordering decision-aid technique which uses a measurement of spare performance based on the stress-strength interference theory, which we have called Condition-Based Service Level (CBSL). We focus on Condition Managed Critical Spares (CMS), namely, spares which are expensive, highly reliable, have long lead times, and are not available in store. As a mitigation measure, CMS are under condition monitoring. The aim of the paper is to guide the timing decision of either ordering a CMS or continuing the operation. The paper presents a graphic technique which considers a decision rule based on both the condition-based reliability function and a stochastic/fixed lead time. For the stochastic lead time case, results show that the technique is effective in determining the time when the system operation is reliable and can withstand the lead time variability while satisfying a desired service level. Additionally, for the constant lead time case, the technique helps to define insurance spares. In conclusion, the presented ordering decision rule is useful to asset managers for enhancing the operational continuity affected by spare parts.
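    A sketch of the core computation behind such an ordering rule, under assumed distributions (Weibull conditional reliability, lognormal lead time; none of the parameters come from the paper): order once the probability that the monitored spare outlives the replenishment lead time drops below the target service level.

```python
import numpy as np
from scipy import stats

# Keep operating while P(remaining life > lead time) >= service level;
# otherwise, order now. Distributions and parameters are assumptions.
def p_survives_lead_time(age_h,
                         life=stats.weibull_min(2.0, scale=8000.0),
                         lead=stats.lognorm(0.4, scale=1500.0)):
    """P(remaining life > lead time | survived to age_h), by integration."""
    t = np.linspace(0.0, 20000.0, 4000)
    # Conditional reliability R(age+t)/R(age), weighted by lead-time density.
    cond_rel = life.sf(age_h + t) / life.sf(age_h)
    return np.trapz(cond_rel * lead.pdf(t), t)

service_level = 0.95
for age in range(0, 8001, 1000):
    p = p_survives_lead_time(age)
    flag = "ORDER NOW" if p < service_level else "continue"
    print(f"age {age:5d} h: P(survive lead time) = {p:.3f} -> {flag}")
```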

  13. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is agreement on using simple formulae (expected bladder capacity and other age-based linear formulae) as the bladder capacity benchmark. But the real normal bladder capacity in children is unknown. The aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs, and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. This was an epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years, attending Primary Care centres with no urologic abnormality, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (maximum of 24 h, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive steps method was used in a multivariate regression model. Twelve articles met the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better adjusted to exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors, differ from each other, and should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as its main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.

  14. Radiogenic Lead Isotopes and Time Stratigraphy in the Hudson River, New York

    International Nuclear Information System (INIS)

    Chillrud, Steven N.; Bopp, Richard F.; Ross, James M.; Chaky, Damon A.; Hemming, Sidney; Shuster, Edward L.; Simpson, H. James; Estabrooks, Frank

    2004-01-01

    Radionuclide, radiogenic lead isotope and trace metal analyses on fine-grained sediment cores collected along 160 km of the upper and tidal Hudson River were used to examine temporal trends of contaminant loadings and to develop radiogenic lead isotopes both as a stratigraphic tool and as tracers for resolving decadal particle transport fluxes. Very large inputs of Cd, Sb, Pb, and Cr are evident in the sediment record, potentially from a single manufacturing facility. The total range in radiogenic lead isotope ratios observed in well-dated cores collected about 24 km downstream of the plant is large (e.g., the maximum difference in 206Pb/207Pb is 10%), characterized by four major shifts occurring in the 1950s, 1960s, 1970s and 1980s. The upper Hudson signals in Cd and radiogenic lead isotopes were still evident in sediments collected 160 km downstream in the tidal Hudson. The large magnitude and abrupt shifts in radiogenic lead isotope ratios as a function of depth provide sensitive temporal constraints that complement information derived from radionuclide analyses to significantly improve the precision of dating assignments. Application of a simple dilution model to data from paired cores suggests much larger sediment inputs in one section of the river than previously reported, suggesting particle influxes to the Hudson have been underestimated.

  15. Timing of glacier advances and climate in the High Tatra Mountains (Western Carpathians) during the Last Glacial Maximum

    Science.gov (United States)

    Makos, Michał; Dzierżek, Jan; Nitychoruk, Jerzy; Zreda, Marek

    2014-07-01

    During the Last Glacial Maximum (LGM), long valley glaciers developed on the northern and southern sides of the High Tatra Mountains, Poland and Slovakia. Chlorine-36 exposure dating of moraine boulders suggests two major phases of moraine stabilization, at 26-21 ka (LGM I - maximum) and at 18 ka (LGM II). The dates suggest a significantly earlier maximum advance on the southern side of the range. Reconstructing the geometry of four glaciers in the Sucha Woda, Pańszczyca, Mlynicka and Velicka valleys allowed their equilibrium-line altitudes (ELAs) to be determined at 1460, 1460, 1650 and 1700 m asl, respectively. Based on a positive degree-day model, the mass balance and climatic parameter anomalies (temperature and precipitation) have been constrained for the LGM I advance. Modeling results indicate slightly different conditions between northern and southern slopes. The N-S ELA gradient is confirmed by a slightly higher temperature (at least 1 °C) or lower precipitation (15%) on the south-facing glaciers during LGM I. The precipitation distribution over the High Tatra Mountains indicates a potentially different LGM atmospheric circulation than at the present day, with reduced northwesterly inflow and increased southerly and westerly inflows of moist air masses.

  16. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  17. The influence of time on lead toxicity and bioaccumulation determined by the OECD earthworm toxicity test

    International Nuclear Information System (INIS)

    Davies, N.A.Nicola A.; Hodson, M.E.Mark E.; Black, S.Stuart

    2003-01-01

    Timing of the addition of lead and worms to soil affects the response of the worms to lead. - Internationally agreed standard protocols for assessing chemical toxicity of contaminants in soil to worms assume that the test soil does not need to equilibrate with the chemical to be tested prior to the addition of the test organisms and that the chemical will exert any toxic effect upon the test organism within 28 days. Three experiments were carried out to investigate these assumptions. The first experiment was a standard toxicity test where lead nitrate was added to a soil in solution to give a range of concentrations. The mortality of the worms and the concentration of lead in the survivors were determined. The LC50s for 14 and 28 days were 5311 and 5395 μg Pb g⁻¹ soil, respectively. The second experiment was a timed lead accumulation study with worms cultivated in soil containing either 3000 or 5000 μg Pb g⁻¹ soil. The concentration of lead in the worms was determined at various sampling times. Uptake at both concentrations was linear with time. Worms in the 5000 μg g⁻¹ soil accumulated lead at a faster rate (3.16 μg Pb g⁻¹ tissue day⁻¹) than those in the 3000 μg g⁻¹ soil (2.21 μg Pb g⁻¹ tissue day⁻¹). The third experiment was a timed experiment with worms cultivated in soil containing 7000 μg Pb g⁻¹ soil. Soil and lead nitrate solution were mixed and stored at 20 °C. Worms were added at various times over a 35-day period. The time to death increased from 23 h, when worms were added directly after the lead was added to the soil, to 67 h when worms were added after the soil had equilibrated with the lead for 35 days. In artificially Pb-amended soils the worms accumulate Pb over the duration of their exposure to the Pb. Thus time-limited toxicity tests may be terminated before the worm body load has reached a toxic level. This could result in under-estimates of the toxicity of Pb to worms. As the equilibration

  18. Effects of the Maximum Luminance in a Medical-grade Liquid-crystal Display on the Recognition Time of a Test Pattern: Observer Performance Using Landolt Rings.

    Science.gov (United States)

    Doi, Yasuhiro; Matsuyama, Michinobu; Ikeda, Ryuji; Hashida, Masahiro

    2016-07-01

    This study was conducted to measure the recognition time of the test pattern and to investigate the effects of the maximum luminance of a medical-grade liquid-crystal display (LCD) on the recognition time. Landolt rings were used as the signals of the test pattern, with four random orientations, one on each of the eight gray-scale steps. Ten observers input the orientation of the gap in the Landolt rings using cursor keys on the keyboard. The recognition times were automatically measured from the display of the test pattern on the medical-grade LCD to the input of the orientation of the gap in the Landolt rings. The maximum luminance in this study was set to one of four values (100, 170, 250, and 400 cd/m²), for which the corresponding recognition times were measured. As a result, the average recognition times for each observer with maximum luminances of 100, 170, 250, and 400 cd/m² were found to be 3.96 to 7.12 s, 3.72 to 6.35 s, 3.53 to 5.97 s, and 3.37 to 5.98 s, respectively. The results indicate that the observer's recognition time decreases as the luminance of the medical-grade LCD increases. Therefore, it is evident that the maximum luminance of the medical-grade LCD affects the test pattern recognition time.

  19. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time-walk of the leading edge technique, corrections should be made to improve timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event-by-event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors, both using leading edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick off timing information. In our proposed method, one detector was used as a reference detector to correct the time-walk of the other detector. Time-walk in the reference detector was minimized by using events within a small energy window (508.5-513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolutions without time-walk correction were 389.0 ± 12.0 ps (425-650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolution with time-walk correction improved to 367.3 ± 0.5 ps (425-650 keV) and 413.7 ± 0.9 ps (250-750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425-650 keV) and 476.0 ± 13.0 ps (250-750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425-650 keV) and 413.4 ± 0.9 ps (250-750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading edge based time-walk correction method works well. Timing resolution obtained
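    A simplified single-detector illustration of the idea (the paper's method uses a second, identical detector in coincidence as the reference; the walk model and all numbers below are assumptions): model the leading-edge threshold-crossing delay as a smooth function of event energy, fit it on calibration events, and subtract it per event.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy calibration data: event energies and measured leading-edge time offsets.
n = 5000
energy = rng.uniform(250.0, 750.0, n)               # keV
true_walk = 2.0e4 / np.sqrt(energy)                 # ps, assumed walk model
t_meas = true_walk + rng.normal(0.0, 50.0, n)       # ps, with timing noise

# Fit the walk as a low-order polynomial in 1/sqrt(E), then subtract it
# event by event.
x = 1.0 / np.sqrt(energy)
coef = np.polyfit(x, t_meas, deg=2)
corrected = t_meas - np.polyval(coef, x)

print(f"timing spread before: {t_meas.std():.0f} ps, "
      f"after: {corrected.std():.0f} ps")
```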

  20. Decision of Lead-Time Compression and Stable Operation of Supply Chain

    Directory of Open Access Journals (Sweden)

    Songtao Zhang

    2017-01-01

    Full Text Available A cost optimization strategy and a robust control strategy were studied to realize the low-cost robust operation of the supply chain with lead times. Firstly, for the multiple production lead times which existed in the supply chain, a corresponding inventory state model and a supply chain cost model were constructed based on the Takagi-Sugeno fuzzy control system. Then, by considering the actual inventory level, the lead-time compression cost, and the stock-out cost, a cost optimization strategy was proposed. Furthermore, a fuzzy robust control strategy was proposed to realize the flexible switching among the models. Finally, the simulation results show that the total cost of the supply chain could be reduced effectively by the cost optimization strategy, and the stable operation of the supply chain could be realized by the proposed fuzzy robust control strategy.

  1. Half Double Methodology – Leading projects to impact in half the time with double the impact

    DEFF Research Database (Denmark)

    Ehlers, Michael; Svejvig, Per

    Despite developments in agile methodologies over the last 20 years, the potential for optimization in projects is still significant. Current research shows that there are methodologies that can be used to reduce lead time and increase value creation of projects by 30% or more. Join this presentation to learn about the three core elements of the half-double methodology so that you can lead your projects to double the impact in half the time. Objectives: summarize the half-double methodology and core elements of impact, flow, and leadership; explain how PMI’s project management tools can...

  2. Overestimated lead times in cancer screening has led to substantial underestimation of overdiagnosis

    DEFF Research Database (Denmark)

    Zahl, P-H; Juhl Jørgensen, Karsten; Gøtzsche, P C

    2013-01-01

    Published lead time estimates in breast cancer screening vary from 1 to 7 years and the percentages of overdiagnosis vary from 0 to 75%. The differences are usually explained as random variations. We study how much can be explained by using different definitions and methods.

  3. Multilevel Production Systems with Dependent Demand with Uncertainty of Lead Times

    Directory of Open Access Journals (Sweden)

    Haibatolah Sadeghi

    2016-01-01

    Full Text Available This study considers a multilevel assembly system with several components at each sublevel. It is assumed that the actual lead time for all components is probabilistic, and a periodic order quantity (POQ) policy is used for ordering. If at a certain level a job is not received at the expected time, a delay is incurred in the delivery of production at this level, and this may result in backorders of the finished product. It is assumed in this case that a fixed percentage of the shortage is backlogged and the other sales are lost. In a real situation, some but not all customers will wait for backlogged components during a period of shortage, such as for fashionable commodities or high-tech products with a short product life cycle. The objective of this study is to find the planned lead time and periodicity for all components in order to minimize the expected fixed ordering, holding, and partial backlogging costs for the finished product, as sketched below. In this study, it is assumed that a percentage of components at each level are scrap. A general mathematical model is suggested, and the method developed can be used for optimizing the planned lead times and periodicity for such an MRP system under lead time uncertainties.
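    One building block of such models can be illustrated directly: with linear earliness/lateness costs, the cost-minimizing planned lead time for a single component is a quantile of its actual lead time distribution, a newsvendor-type result. A sketch with assumed costs and distribution (the paper's model is richer, covering multiple levels and POQ ordering):

```python
from scipy import stats

# Choosing a planned lead time L for one component: if the actual lead time
# T is stochastic, early arrival costs h per period held and late arrival
# costs b per period backlogged, then the expected cost
# E[h*(L-T)+ + b*(T-L)+] is minimized at the b/(b+h) quantile of T.
# All parameters are illustrative assumptions.
h, b = 1.0, 9.0
T = stats.gamma(a=4.0, scale=2.0)          # actual lead time, mean 8 periods

L_star = T.ppf(b / (b + h))                # critical-fractile planned lead time
print(f"planned lead time: {L_star:.1f} periods (vs mean {T.mean():.1f})")
```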

  4. Reconstructing the life-time lead exposure in children using dentine in deciduous teeth

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, Thomas J., E-mail: shepherdtj@aol.com [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Dirks, Wendy [Centre for Oral Health Research, School of Dental Sciences, Newcastle University, Newcastle upon Tyne NE2 4BW (United Kingdom); Manmee, Charuwan; Hodgson, Susan [Institute of Health and Society, Newcastle University, Newcastle upon Tyne NE2 4AX (United Kingdom); Banks, David A. [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Averley, Paul [Centre for Oral Health Research, School of Dental Sciences, Newcastle University, Newcastle upon Tyne NE2 4BW (United Kingdom); Queensway Dental Practice, 170 Queensway, Billingham, Teesside TS23 2NT (United Kingdom); Pless-Mulloli, Tanja [Institute of Health and Society, Newcastle University, Newcastle upon Tyne NE2 4AX (United Kingdom); Newcastle Institute for Research on Sustainability, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2012-05-15

    Data are presented to demonstrate that the circumpulpal dentine of deciduous teeth can be used to reconstruct a detailed record of childhood exposure to lead. By combining high spatial resolution laser ablation ICP-MS with dental histology, information was acquired on the concentration of lead in dentine from in utero to several years after birth, using a true time template of dentine growth. Time corrected lead analyses for pairs of deciduous molars confirmed that between-tooth variation for the same child was negligible and that meaningful exposure histories can be obtained from a single, multi-point ablation transect on longitudinal sections of individual teeth. For a laser beam of 100 μm diameter, the lead signal for each ablation point represented a time span of 42 days. Simultaneous analyses for Sr, Zn and Mg suggest that the incorporation of Pb into dentine (carbonated apatite) is most likely controlled by nanocrystal growth mechanisms. The study also highlights the importance of discriminating between primary and secondary dentine and the dangers of translating lead analyses into blood lead estimates without determining the age or duration of dentine sampled. Further work is in progress to validate deciduous teeth as blood lead biomarkers. - Highlights: ► Reconstruction of childhood exposure history to Pb using deciduous tooth dentine. ► Pb analyses acquired for dentine growth increments of 42 days. ► Highly correlated Pb concentration profiles for pairs of deciduous molars. ► Data for Sr, Zn and Mg provide a model for the incorporation of Pb into dentine.

  5. Reconstructing the life-time lead exposure in children using dentine in deciduous teeth

    International Nuclear Information System (INIS)

    Shepherd, Thomas J.; Dirks, Wendy; Manmee, Charuwan; Hodgson, Susan; Banks, David A.; Averley, Paul; Pless-Mulloli, Tanja

    2012-01-01

    Data are presented to demonstrate that the circumpulpal dentine of deciduous teeth can be used to reconstruct a detailed record of childhood exposure to lead. By combining high spatial resolution laser ablation ICP-MS with dental histology, information was acquired on the concentration of lead in dentine from in utero to several years after birth, using a true time template of dentine growth. Time corrected lead analyses for pairs of deciduous molars confirmed that between-tooth variation for the same child was negligible and that meaningful exposure histories can be obtained from a single, multi-point ablation transect on longitudinal sections of individual teeth. For a laser beam of 100 μm diameter, the lead signal for each ablation point represented a time span of 42 days. Simultaneous analyses for Sr, Zn and Mg suggest that the incorporation of Pb into dentine (carbonated apatite) is most likely controlled by nanocrystal growth mechanisms. The study also highlights the importance of discriminating between primary and secondary dentine and the dangers of translating lead analyses into blood lead estimates without determining the age or duration of dentine sampled. Further work is in progress to validate deciduous teeth as blood lead biomarkers. - Highlights: ► Reconstruction of childhood exposure history to Pb using deciduous tooth dentine. ► Pb analyses acquired for dentine growth increments of 42 days. ► Highly correlated Pb concentration profiles for pairs of deciduous molars. ► Data for Sr, Zn and Mg provide a model for the incorporation of Pb into dentine.

  6. Timing of maximum glacial extent and deglaciation from HualcaHualca volcano (southern Peru), obtained with cosmogenic 36Cl.

    Science.gov (United States)

    Alcalá, Jesus; Palacios, David; Vazquez, Lorenzo; Juan Zamorano, Jose

    2015-04-01

    Andean glacial deposits are key records of climate fluctuations in the southern hemisphere. During the last decades, in situ cosmogenic nuclides have provided new and significant dates to determine past glacier behavior in this region. But there are still many important discrepancies, such as the impact of the Last Glacial Maximum or the influence of Late Glacial climatic events on glacial mass balances. Furthermore, glacial chronologies from many sites are still missing, such as HualcaHualca (15° 43' S; 71° 52' W; 6,025 masl), a high volcano of the Peruvian Andes located 70 km northwest of Arequipa. The goal of this study is to establish the age of the Maximum Glacier Extent (MGE) and deglaciation at HualcaHualca volcano. To achieve this objective, we focused on four valleys (Huayuray, Pujro Huayjo, Mollebaya and Mucurca) characterized by a well-preserved sequence of moraines and roches moutonnées. The method is based on geomorphological analysis supported by cosmogenic 36Cl surface exposure dating. 36Cl ages have been estimated with the CHLOE calculator and were compared with other central Andean glacial chronologies as well as paleoclimatological proxies. In the Huayuray valley, exposure ages indicate that the MGE occurred ~ 18 - 16 ka. Later, the ice mass gradually retreated, but this process was interrupted by at least two readvances; the last one has been dated at ~ 12 ka. On the other hand, the 36Cl result indicates an MGE age of ~ 13 ka in the Mollebaya valley. Also, two samples obtained in the Pujro-Huayjo and Mucurca valleys associated with the MGE have an exposure age of 10-9 ka, but these are likely moraine boulders affected by exhumation or erosion processes. Deglaciation at HualcaHualca volcano began abruptly ~ 11.5 ka ago according to a 36Cl age from polished and striated bedrock in the Pujro Huayjo valley, presumably as a result of reduced precipitation as well as a global increase of temperatures. The glacier evolution at HualcaHualca volcano presents a high correlation with

  7. Determination of new time-temperature-transformation diagrams for lead-calcium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, F.; Lambertin, M. [Arts et Metiers Paristech, LaBoMaP, ENSAM, Rue porte de Paris, 71250 Cluny (France); Delfaut-Durut, L. [CEA, centre de Valduc [SEMP, LECM], 21120 Is-sur-Tille (France); Maitre, A. [SPCTS, UFR Sciences et techniques, 87060 Limoges (France); Vilasi, M. [LCSM, Universite Nancy I, 54506 Vandoeuvre les Nancy (France)

    2008-12-01

    Pb-Ca is an age-hardening alloy that allows for an increase in hardness compared to pure lead. The hardening is obtained after different successive ageing transformations. In addition, this hardening is followed by an overageing which induces softening. The ageing and overageing transformation mechanisms are now well identified in lead-calcium alloys. In this paper, we propose to represent the domain of stability of each transformation via time-temperature-transformation diagrams for calcium concentrations from 600 to 1280 ppm and in a range of temperatures from −20 to 180 °C. These diagrams are constructed with the data obtained by in situ ageing with metallographic observations, hardness and electrical resistance measurements. The specific features of lead-calcium alloys, such as fast ageing at ambient temperature and overageing over time, required the design of dedicated devices to identify the characteristics of these alloys. (author)

  8. The effects of lead time and visual aids in TTO valuation: a study of the EQ-VT framework

    NARCIS (Netherlands)

    N. Luo (Nan); M. Li (Minghui); E.A. Stolk (Elly); N. Devlin (Nancy)

    2013-01-01

    Background: The effect of lead time in time trade-off (TTO) valuation is not well understood. The purpose of this study was to investigate the effects on health-state valuation of the length of lead time and the way the lead-time TTO task is displayed visually.

  9. Safety stock or safety lead time : coping with unreliability in demand and supply

    NARCIS (Netherlands)

    van Kampen, T.J.; van Donk, D.P.; van der Zee, D.J.

    2010-01-01

    Safety stock and safety lead time are common measures used to cope with uncertainties in demand and supply. Typically, these uncertainties are studied in isolated instances, ignoring settings with uncertainties both in demand and in supply. The current literature largely neglects case study based

  10. Production planning of a perishable product with lead time and non-stationary demand

    NARCIS (Netherlands)

    Pauls-Worm, K.G.J.; Haijema, R.; Hendrix, E.M.T.; Rossi, R.; Vorst, van der J.G.A.J.

    2012-01-01

    We study a production planning problem for a perishable product with a fixed lifetime, under a service-level constraint. The product has a non-stationary stochastic demand. Food supply chains of fresh products like cheese and several crop products, are characterised by long lead times due to

  11. Inventory control in multi-echelon divergent systems with random lead times

    NARCIS (Netherlands)

    Heijden, van der M.C.; Diks, E.B.; Kok, de A.G.

    1996-01-01

    This paper deals with integral inventory control in multi-echelon divergent systems with stochastic lead times. The policy considered is an echelon stock, periodic review, order-up-to (R, S) policy. A computational method is derived to obtain the order-up-to level and the allocation fractions

  12. Use of Six Sigma Methodology to Reduce Appointment Lead-Time in Obstetrics Outpatient Department.

    Science.gov (United States)

    Ortiz Barrios, Miguel A; Felizzola Jiménez, Heriberto

    2016-10-01

    This paper focuses on the issue of long appointment lead-times in the obstetrics outpatient department of a maternal-child hospital in Colombia. Because of extended appointment lead-times, women with high-risk pregnancies could develop severe complications in their health status and put their babies at risk. This problem was detected through a project selection process explained in this article, and Six Sigma methodology was used to solve it. First, the process was defined through a SIPOC diagram to identify its input and output variables. Second, Six Sigma performance indicators were calculated to establish the process baseline. Then, a fishbone diagram was used to determine the possible causes of the problem. These causes were validated with the aid of correlation analysis and other statistical tools. Later, improvement strategies were designed to reduce appointment lead-time in this department. Project results showed that the average appointment lead-time was reduced from 6.89 days to 4.08 days and the standard deviation dropped from 1.57 days to 1.24 days. In this way, the hospital will serve pregnant women faster, which represents a risk reduction in perinatal and maternal mortality.

  13. The Multi-Location Transshipment Problem with Positive Replenishment Lead Times

    NARCIS (Netherlands)

    Y. Gong (Yeming); E. Yucesan

    2006-01-01

    Transshipments, monitored movements of material at the same echelon of a supply chain, represent an effective pooling mechanism. With a single exception, research on transshipments overlooks replenishment lead times. The only approach for two-location inventory systems with

  14. Fuzzy Control Model and Simulation for Nonlinear Supply Chain System with Lead Times

    Directory of Open Access Journals (Sweden)

    Songtao Zhang

    2017-01-01

    Full Text Available A new fuzzy robust control strategy for the nonlinear supply chain system in the presence of lead times is proposed. Based on the Takagi-Sugeno fuzzy control system, the fuzzy control model of the nonlinear supply chain system with lead times is constructed. Additionally, we design a fuzzy robust H∞ control strategy taking the definition of maximal overlapped-rules group into consideration to restrain the impacts such as those caused by lead times, switching actions among submodels, and customers’ stochastic demands. This control strategy can not only guarantee that the nonlinear supply chain system is robustly asymptotically stable but also realize soft switching among subsystems of the nonlinear supply chain, reducing the fluctuation of the system variables by introducing the membership function of the fuzzy system. The comparisons between the proposed fuzzy robust H∞ control strategy and the robust H∞ control strategy are finally illustrated through numerical simulations on a two-stage nonlinear supply chain with lead times.
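    The Takagi-Sugeno construction can be illustrated compactly: local linear inventory models are blended by state-dependent membership weights to approximate the nonlinear dynamics. A toy sketch, with matrices and membership functions invented purely for illustration (not the paper's model or its H∞ synthesis):

```python
import numpy as np

# Minimal Takagi-Sugeno sketch: two local linear models (e.g., short vs
# long lead-time regimes) blended by membership functions of the current
# inventory error. All matrices and memberships are assumptions.
A1 = np.array([[0.9, 0.1], [0.0, 0.8]])    # local dynamics, regime 1
A2 = np.array([[0.7, 0.3], [0.0, 0.6]])    # local dynamics, regime 2
B = np.array([[1.0], [0.5]])               # effect of the order decision

def memberships(e):
    """Smooth weights in [0, 1] that sum to 1 (sigmoid on inventory error)."""
    w1 = 1.0 / (1.0 + np.exp(-e))
    return np.array([w1, 1.0 - w1])

x = np.array([5.0, -2.0])                  # [inventory error, pipeline error]
u = np.array([1.0])                        # order decision (held constant)
for _ in range(5):
    w = memberships(x[0])
    A = w[0] * A1 + w[1] * A2              # fuzzy-blended system matrix
    x = A @ x + B @ u
    print(np.round(x, 2))
```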

  15. Integration of capacity, pricing, and lead-time decisions in a decentralized supply chain

    NARCIS (Netherlands)

    Zhu, Stuart X.

    We consider a decentralized supply chain consisting of a supplier and a retailer facing price- and lead-time-sensitive demand. The decision process is modelled by a Stackelberg game where the supplier, as a leader, determines the capacity and the wholesale price, and the retailer, as a follower,

  16. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    Science.gov (United States)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage against univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
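    A miniature version of the BMA fit (a constant-variance normal mixture whose weights and spread are estimated by EM) can be sketched as follows; the Box-Cox transformation and per-member bias correction described above are assumed to have been applied already, and all data here are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# BMA predictive density: p(y | f) = sum_k w_k * N(y; f_k, sigma^2).
# Fit weights w_k and common sigma by EM on synthetic training pairs.
K, n = 3, 400
truth = rng.gamma(4.0, 2.0, n)                              # "observed runoff"
F = truth[None, :] + rng.normal(0, [[1.0], [2.0], [3.0]], (K, n))  # members

w = np.full(K, 1.0 / K)
sigma = 2.0
for _ in range(200):                        # EM iterations
    dens = np.array([w[k] * stats.norm.pdf(truth, F[k], sigma)
                     for k in range(K)])
    z = dens / dens.sum(axis=0)             # E-step: membership probabilities
    w = z.mean(axis=1)                      # M-step: mixture weights
    sigma = np.sqrt((z * (truth - F) ** 2).sum() / n)
print("weights:", np.round(w, 2), " sigma:", round(float(sigma), 2))
# The least noisy member should receive the largest weight.
```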

  17. Inventory control in multi-echelon divergent systems with random lead times

    NARCIS (Netherlands)

    van der Heijden, Matthijs C.; Diks, Erik; de Kok, Ton

    1999-01-01

    This paper deals with integral inventory control in multi-echelon divergent systems with stochastic lead times. The policy considered is an echelon stock, periodic review, order-up-to (R, S) policy. A computational method is derived to obtain the order-up-to level and the allocation fractions

  18. An Integrated Multiechelon Logistics Model with Uncertain Delivery Lead Time and Quality Unreliability

    Directory of Open Access Journals (Sweden)

    Ming-Feng Yang

    2016-01-01

    Full Text Available Nowadays, in order to achieve advantages in supply chain management, how to keep inventory at an adequate level and how to enhance customer service level are two critical practices for decision makers. Generally, uncertain lead time and defective products have much to do with inventory and service level. Therefore, this study mainly aims at developing a multiechelon integrated just-in-time inventory model with uncertain lead time and imperfect quality to enhance the benefits of the logistics model. In addition, the Ant Colony Algorithm (ACA) is established to determine the optimal solutions. Moreover, based on our proposed model and analysis, the ACA is more efficient than Particle Swarm Optimization (PSO) and Lingo in the SMEIJI model. An example is provided in this study to illustrate how production run and defective rate affect system costs. Finally, the results of our research could provide some managerial insights which support decision makers in real-world operations.

  19. An inventory model of purchase quantity for fully-loaded vehicles with maximum trips in consecutive transport time

    Directory of Open Access Journals (Sweden)

    Chen Pоуu

    2013-01-01

    Full Text Available Products made overseas but sold in Taiwan are very common. Regarding the cross-border or interregional production and marketing of goods, inventory decision-makers often have to think about how to determine the amount of purchases per cycle, the number of transport vehicles, the working hours of each transport vehicle, and the delivery by ground or air transport to sales offices in order to minimize the total cost of the inventory in unit time. This model assumes that the amount of purchases for each order cycle should allow all rented vehicles to be fully loaded and the transport times to reach the upper limit within the time period. The main research findings of this study included the search for the optimal solution of the integer planning of the model and the results of sensitivity analysis.

  20. Modeling and analysis for determining optimal suppliers under stochastic lead times

    DEFF Research Database (Denmark)

    Abginehchi, Soheil; Farahani, Reza Zanjirani

    2010-01-01

    systems. The item acquisition lead times of suppliers are random variables. Backorder is allowed and shortage cost is charged based on not only per unit in shortage but also per time unit. Continuous review (s,Q) policy has been assumed. When the inventory level depletes to a reorder level, the total...... order is split among n suppliers. Since the suppliers have different characteristics, the quantity ordered to different suppliers may be different. The problem is to determine the reorder level and quantity ordered to each supplier so that the expected total cost per time unit, including ordering cost...

  1. K time & maximum amplitude of thromboelastogram predict post-central venous cannulation bleeding in patients with cirrhosis: A pilot study

    Directory of Open Access Journals (Sweden)

    Chandra K Pandey

    2017-01-01

    Interpretation & conclusions: Our results show that the cut-off value for INR ≥2.6 and K time ≥3.05 min predict bleeding and MA ≥48.8 mm predicts non-bleeding in patients with cirrhosis undergoing central venous pressure catheter cannulation.

  2. Estimates of over-diagnosis of breast cancer due to population-based mammography screening in South Australia after adjustment for lead time effects.

    Science.gov (United States)

    Beckmann, Kerri; Duffy, Stephen W; Lynch, John; Hiller, Janet; Farshid, Gelareh; Roder, David

    2015-09-01

    To estimate over-diagnosis due to population-based mammography screening using a lead time adjustment approach, with lead time measures based on symptomatic cancers only. Women aged 40-84 in 1989-2009 in South Australia eligible for mammography screening. Numbers of observed and expected breast cancer cases were compared, after adjustment for lead time. Lead time effects were modelled using age-specific estimates of lead time (derived from interval cancer rates and predicted background incidence, using maximum likelihood methods) and screening sensitivity, projected background breast cancer incidence rates (in the absence of screening), and proportions screened, by age and calendar year. Lead time estimates were 12, 26, 43 and 53 months, for women aged 40-49, 50-59, 60-69 and 70-79 respectively. Background incidence rates were estimated to have increased by 0.9% and 1.2% per year for invasive and all breast cancer. Over-diagnosis among women aged 40-84 was estimated at 7.9% (0.1-12.0%) for invasive cases and 12.0% (5.7-15.4%) when including ductal carcinoma in-situ (DCIS). We estimated 8% over-diagnosis for invasive breast cancer and 12% inclusive of DCIS cancers due to mammography screening among women aged 40-84. These estimates may overstate the extent of over-diagnosis if the increasing prevalence of breast cancer risk factors has led to higher background incidence than projected. © The Author(s) 2015.

  3. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short
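    The conditional sum-of-squares idea is easy to sketch for the simplest case, an ARFIMA(0, d, 0) model: fractionally difference the series at a candidate d and minimize the residual sum of squares. A self-contained illustration on synthetic data (homoskedastic shocks; none of the paper's heteroskedasticity corrections are attempted here):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

def fracdiff_weights(d, n):
    """Coefficients of (1 - L)^d, computed by the standard recursion."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    """Apply (1 - L)^d to the series x (truncated at the sample start)."""
    w = fracdiff_weights(d, len(x))
    return np.array([w[: t + 1] @ x[t::-1] for t in range(len(x))])

# Simulate ARFIMA(0, d, 0) by fractionally *integrating* white noise.
d_true = 0.35
eps = rng.normal(size=1000)
x = fracdiff(eps, -d_true)                 # (1 - L)^{-d} applied to eps

# Conditional sum-of-squares: residuals are closest to white noise at d_true.
def css(d):
    return float(np.sum(fracdiff(x, d) ** 2))

d_hat = minimize_scalar(css, bounds=(0.0, 0.49), method="bounded").x
print(f"true d = {d_true}, CSS estimate = {d_hat:.3f}")
```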

  4. Stochastic integrated vendor–buyer model with unstable lead time and setup cost

    Directory of Open Access Journals (Sweden)

    Chandra K. Jaggi

    2011-01-01

    Full Text Available This paper presents a new vendor-buyer system where there are different objectives for both sides. The proposed method differs from other previously published works since it considers different objectives for both sides. In this paper, the vendor's emphasis is on the crashing of the setup cost, which not only helps him compete in the market but also provides better services to his customers; and the buyer's aim is to reduce the lead time, which not only enables the buyer to fulfill the customers' demand on time but also allows him to earn a good reputation in the market, or vice versa. In the light of the above stated facts, an integrated vendor-buyer stochastic inventory model is also developed. The proposed model considers two cases for demand during lead time: Case (i): complete demand information; Case (ii): partial demand information. The proposed model jointly optimizes the buyer's order quantity and lead time along with the vendor's setup cost and the number of shipments. The results are demonstrated with the help of numerical examples.

  5. Nuclear reactors' construction costs: The role of lead-time, standardization and technological progress

    International Nuclear Information System (INIS)

    Berthelemy, Michel; Escobar Rangel, Lina

    2013-01-01

    This paper provides the first comparative analysis of nuclear reactor construction costs in France and the United States. Studying the cost of nuclear power has often been a challenge, owing to the lack of reliable data sources and heterogeneity between countries, as well as the long time horizon which requires controlling for input prices and structural changes. We build a simultaneous system of equations for overnight costs and construction time (lead-time) to control for endogeneity, using expected demand variation as an instrument. We argue that benefits from nuclear reactor program standardization can arise through short term coordination gains, when the diversity of nuclear reactors' technologies under construction is low, or through long term benefits from learning spillovers from past reactor construction experience, if those spillovers are limited to similar reactors. We find that overnight construction costs benefit directly from learning spillovers but that these spillovers are only significant for nuclear models built by the same Architect-Engineer (A-E). In addition, we show that the standardization of nuclear reactors under construction has an indirect and positive effect on construction costs through a reduction in lead-time, the latter being one of the main drivers of construction costs. Conversely, we also explore the possibility of learning by searching and find that, contrary to other energy technologies, innovation leads to construction costs increases. (authors)

  6. Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    2008-01-01

    We extend well-known formulae for the optimal base stock of the inventory system with continuous review and constant lead time to the case with periodic review and stochastic, sequential lead times. Our extension uses the notion of the 'extended lead time'. The derived performance measures...

  7. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    Science.gov (United States)

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
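    The maximum-entropy argument behind the exponential forms can be checked numerically: among distributions on [0, ∞) with the same mean, the exponential attains the largest differential entropy. A quick sketch comparing a few same-mean candidates (illustrative only, not the paper's data):

```python
import numpy as np
from scipy import stats

# Among distributions on [0, inf) with a fixed mean, the exponential
# maximizes differential entropy, consistent with exponential particle
# velocities and travel times. Compare entropies at the same mean.
mean = 2.0
candidates = {
    "exponential": stats.expon(scale=mean),
    "gamma(k=2)": stats.gamma(2.0, scale=mean / 2.0),
    "weibull(k=1.5)": stats.weibull_min(
        1.5, scale=mean / stats.weibull_min(1.5).mean()),
}
for name, dist in candidates.items():
    print(f"{name:14s} mean={dist.mean():.2f} "
          f"entropy={dist.entropy():.3f} nats")
# The exponential should print the largest entropy.
```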

  8. Verification of short lead time forecast models: applied to Kp and Dst forecasting

    Science.gov (United States)

    Wintoft, Peter; Wik, Magnus

    2016-04-01

    In the ongoing EU/H2020 project PROGRESS, models that predict Kp, Dst, and AE from L1 solar wind data will be used as inputs to radiation belt models. The possible lead times from L1 measurements are shorter (tens of minutes to hours) than the typical duration of the physical phenomena that should be forecast. Under these circumstances several metrics fail to single out trivial cases, such as persistence. In this work we explore metrics and approaches for short lead time forecasts. We apply these to current Kp and Dst forecast models. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637302.
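    One standard way to avoid rewarding trivial forecasts at short lead times is to score against a persistence baseline, e.g., with an MSE-based skill score. A toy sketch on a synthetic, Dst-like series (this is a generic illustration, not the project's actual metric suite):

```python
import numpy as np

# For short lead times, persistence ("forecast = last observed value") is a
# strong baseline, so raw error metrics can flatter a trivial model. A skill
# score relative to persistence exposes this. Data are synthetic.
def mse(a, b):
    return float(np.mean((a - b) ** 2))

obs = np.cumsum(np.random.default_rng(5).normal(size=500))  # random-walk "Dst"
lead = 3                                    # forecast lead, in time steps
persistence = obs[:-lead]                   # value known at issue time
target = obs[lead:]                         # what actually happened
model = persistence + 0.1                   # a near-persistence "model"

ss = 1.0 - mse(model, target) / mse(persistence, target)
print(f"skill score vs persistence: {ss:.3f} (<= 0 means no added value)")
```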

  9. Supply Chain Model with Stochastic Lead Time, Trade-Credit Financing, and Transportation Discounts

    Directory of Open Access Journals (Sweden)

    Sung Jun Kim

    2017-01-01

    Full Text Available This model extends a two-echelon supply chain model by considering the trade-credit policy and transportation discounts to create a coordination mechanism among transportation discounts, trade-credit financing, number of shipments, quality improvement of products, and reduced setup cost, in such a way that the total cost of the whole system can be reduced, where the supplier offers a trade-credit period to the buyer. For the buyer, the backorder rate is considered variable. There are two investments, one to reduce setup cost and one to improve the quality of products. The model assumes a lead-time-dependent backorder rate, where the lead time is stochastic in nature. Using the trade-credit policy, the model shows how the credit period should be determined to achieve a win-win outcome. An iterative algorithm is designed to obtain the global optimum results. A numerical example and sensitivity analysis are given to illustrate the model.

  10. Nuclear reactors' construction costs: The role of lead-time, standardization and technological progress

    International Nuclear Information System (INIS)

    Berthélemy, Michel; Escobar Rangel, Lina

    2015-01-01

    This paper provides an econometric analysis of nuclear reactor construction costs in France and the United States based on overnight cost data. We build a simultaneous system of equations for overnight costs and construction time (lead-time) to control for endogeneity, using the change in expected electricity demand as an instrument. We argue that the construction of nuclear reactors can benefit from standardization gains through two channels. First, short term coordination benefits can arise when the diversity of nuclear reactor designs under construction is low. Second, long term benefits can occur due to learning spillovers from past constructions of similar reactors. We find that construction costs benefit directly from learning spillovers but that these spillovers are only significant for nuclear models built by the same Architect–Engineer. In addition, we show that the standardization of nuclear reactors under construction has an indirect and positive effect on construction costs through a reduction in lead-time, the latter being one of the main drivers of construction costs. Conversely, we also explore the possibility of learning by searching and find that, contrary to other energy technologies, innovation leads to construction cost increases. Highlights: • This paper analyses the determinants of nuclear reactor construction costs and lead-time. • We study the short term (coordination gains) and long term (learning by doing) benefits of standardization in France and the US. • Results show that standardization of nuclear programs is a key factor for reducing construction costs. • We also suggest that technological progress has contributed to construction cost escalation.
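
    The instrumental-variable idea can be sketched with a hand-rolled two-stage least squares, shown below; the variable names and the simulated data are illustrative, not the authors' dataset or estimator.

    ```python
    # Hedged sketch: 2SLS of (log) overnight cost on lead-time, with expected
    # demand change as the instrument. Data are synthetic and illustrative.
    import numpy as np

    def two_stage_least_squares(y, endog, instrument, exog):
        """2SLS of y on an endogenous regressor, instrumented, plus controls."""
        n = len(y)
        controls = np.column_stack([np.ones(n), exog])
        # Stage 1: project the endogenous regressor on instrument + controls.
        Z = np.column_stack([instrument, controls])
        endog_hat = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]
        # Stage 2: regress the outcome on the fitted values + controls.
        X = np.column_stack([endog_hat, controls])
        return np.linalg.lstsq(X, y, rcond=None)[0]   # [lead-time coef, const, ...]

    rng = np.random.default_rng(1)
    n = 200
    demand_change = rng.normal(size=n)                 # instrument
    u = rng.normal(size=n)                             # unobserved confounder
    lead_time = 6 + demand_change + u + rng.normal(size=n)
    log_cost = 1.0 + 0.3 * lead_time + u + rng.normal(scale=0.1, size=n)
    other = rng.normal(size=n)                         # an extra control
    print(two_stage_least_squares(log_cost, lead_time, demand_change, other))
    ```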

  11. Scheduling rules to achieve lead-time targets in outpatient appointment systems

    OpenAIRE

    Sivakumar, Appa Iyer; Nguyen, Thu Ba Thi; Graves, Stephen C

    2015-01-01

    This paper considers how to schedule appointments for outpatients, for a clinic that is subject to appointment lead-time targets for both new and returning patients. We develop heuristic rules, which are the exact and relaxed appointment scheduling rules, to schedule each new patient appointment (only) in light of uncertainty about future arrivals. The scheduling rules entail two decisions. First, the rules need to determine whether or not a patient's request can be accepted; then, if the req...

  12. Reducing of Manufacturing Lead Time by Implementation of Lean Manufacturing Principles

    Directory of Open Access Journals (Sweden)

    Hussein Salem Ketan

    2015-08-01

    Full Text Available Many organizations today are interested in implementing lean manufacturing principles that should enable them to eliminate waste and reduce manufacturing lead time. This paper concentrates on increasing the competitiveness of the company in global markets and improving productivity by reducing the manufacturing lead time. This is done using the main tool of lean manufacturing, value stream mapping (VSM), to identify all the activities of the manufacturing process (value-added and non-value-added activities) and to eliminate waste (non-value-added activities) by converting the manufacturing system from push to pull, applying pull-system strategies such as kanban and the first-in-first-out (FIFO) lane. ARENA software is used to simulate the current and future states. The work was carried out at the State Company for Electrical Industries in Baghdad. The results of the application showed that implementing lean principles helped reduce the manufacturing lead time by 33%.
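
    The arithmetic behind such a VSM-driven reduction can be sketched as follows; the activity times are illustrative, not the case-study data.

    ```python
    # Minimal sketch: lead time = value-added (VA) + non-value-added (NVA) time;
    # lean tools attack the NVA share.
    va_minutes = {"machining": 30, "assembly": 20, "testing": 10}        # value-added
    nva_minutes = {"queue": 45, "transport": 15, "wait for batch": 30}   # waste

    lead_time = sum(va_minutes.values()) + sum(nva_minutes.values())
    pce = sum(va_minutes.values()) / lead_time        # process cycle efficiency
    print(f"current lead time: {lead_time} min, PCE: {pce:.0%}")

    # Suppose pull scheduling (kanban + FIFO lanes) halves each queue/wait:
    nva_future = {k: v * 0.5 for k, v in nva_minutes.items()}
    lead_future = sum(va_minutes.values()) + sum(nva_future.values())
    print(f"future lead time: {lead_future:.0f} min "
          f"({1 - lead_future / lead_time:.0%} reduction)")
    ```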

  13. Strategic Inventory Positioning in BOM with Multiple Parents Using ASR Lead Time

    Directory of Open Access Journals (Sweden)

    Jingjing Jiang

    2016-01-01

    Full Text Available In order to meet the lead time that customers require, work-in-process inventory (WIPI) is necessary at almost every station in most make-to-order manufacturing. Depending on the station network configuration and the lead time at each station, some of the WIPI does not contribute to reducing the manufacturing lead time of the final product at all. Therefore, it is important to identify the optimal set of stations to hold WIPI such that the total inventory holding cost is minimized while the required due date for the final product is met. The authors have previously presented a model to determine the optimal position and quantity of WIPI for a given simple bill of material (S-BOM), in which any part in the BOM has only one immediate parent node. In this paper, we extend the previous study to the general BOM (G-BOM), in which parts in the BOM can have more than one immediate parent, and present a new solution procedure using a genetic algorithm.

  14. Enhancing Nursing Staffing Forecasting With Safety Stock Over Lead Time Modeling.

    Science.gov (United States)

    McNair, Douglas S

    2015-01-01

    In balancing competing priorities, it is essential that nursing staffing provide enough nurses to safely and effectively care for the patients. Mathematical models to predict optimal "safety stocks" have been routine in supply chain management for many years but have up to now not been applied in nursing workforce management. There are various aspects that exhibit similarities between the 2 disciplines, such as an evolving demand forecast according to acuity and the fact that provisioning "stock" to meet demand in a future period has nonzero variable lead time. Under assumptions about the forecasts (eg, the demand process is well fit as an autoregressive process) and about the labor supply process (≥1 shifts' lead time), we show that safety stock over lead time for such systems is effectively equivalent to the corresponding well-studied problem for systems with stationary demand bounds and base stock policies. Hence, we can apply existing models from supply chain analytics to find the optimal safety levels of nurse staffing. We use a case study with real data to demonstrate that there are significant benefits from the inclusion of the forecast process when determining the optimal safety stocks.
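
    A minimal sketch of this idea, assuming an AR(1) shift-level demand process and illustrative parameters (not the paper's case-study data), sets the safety stock from a simulated service-level quantile of demand over the staffing lead time:

    ```python
    # Minimal sketch: safety stock over lead time with an AR(1) demand forecast.
    import numpy as np

    rng = np.random.default_rng(7)
    mu, phi, sigma = 20.0, 0.6, 3.0   # mean demand, AR(1) coefficient, shock s.d.
    lead_time = 3                     # shifts between staffing decision and coverage
    service_level = 0.95

    def lead_time_demand(d_now, n_sims=100_000):
        """Simulate total nurse demand over the lead time from the current state."""
        d = np.full(n_sims, d_now)
        total = np.zeros(n_sims)
        for _ in range(lead_time):
            d = mu + phi * (d - mu) + rng.normal(scale=sigma, size=n_sims)
            total += d
        return total

    ltd = lead_time_demand(d_now=24.0)    # demand currently above its mean
    safety_stock = np.quantile(ltd, service_level) - ltd.mean()
    print(f"forecast lead-time demand: {ltd.mean():.1f} nurse-shifts, "
          f"safety stock: {safety_stock:.1f}")
    ```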

  15. Study on residual discharge time of lead-acid battery based on fitting method

    Science.gov (United States)

    Liu, Bing; Yu, Wangwang; Jin, Yueqiang; Wang, Shuying

    2017-05-01

    This paper uses a fitting method to analyze the data of problem C of the 2016 mathematical modeling contest. A residual discharge time model for a lead-acid battery under constant-current discharge at 20 A, 30 A, ..., 100 A is obtained, and a discharge time model for discharge under an arbitrary constant current is presented. The mean relative error of the model is calculated to be about 3%, which shows that the model has high accuracy. This model can provide a basis for optimizing the adaptation of the power system to electric motor vehicles.
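
    The abstract does not give the fitted functional form; a common choice for constant-current discharge is a Peukert-style law t = a / I^b, which the hedged sketch below fits to illustrative data with scipy:

    ```python
    # Illustrative sketch (not the paper's data or model): fit discharge time
    # versus constant current, then predict at an arbitrary current.
    import numpy as np
    from scipy.optimize import curve_fit

    currents = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)  # A
    hours = np.array([9.8, 6.1, 4.3, 3.3, 2.6, 2.2, 1.8, 1.6, 1.4])          # h

    def peukert(i, a, b):
        return a / i ** b            # discharge time as a function of current

    (a, b), _ = curve_fit(peukert, currents, hours, p0=(200.0, 1.2))
    mre = np.mean(np.abs(peukert(currents, a, b) - hours) / hours)
    print(f"a={a:.1f}, b={b:.3f}, mean relative error={mre:.1%}")
    print(f"t(55 A) = {peukert(55.0, a, b):.2f} h")   # arbitrary constant current
    ```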

  16. The M/M/1 queue with inventory, lost sale and general lead times

    DEFF Research Database (Denmark)

    Saffari, Mohammad; Asmussen, Søren; Haji, Rasoul

    We consider an M/M/1 queueing system with inventory under the (r,Q) policy and with lost sales, in which demands occur according to a Poisson process and service times are exponentially distributed. All arriving customers during stockout are lost. We derive the stationary distributions of the joint queue length (number of customers in the system) and on-hand inventory when lead times are random variables and can take various distributions. The derived stationary distributions are used to formulate long-run average performance measures and cost functions in some numerical examples.
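
    A simulation sketch of this system is given below, simplified to exponential lead times so that the model becomes a continuous-time Markov chain (the paper handles general lead times analytically); rates and policy parameters are illustrative.

    ```python
    # Minimal Gillespie-style simulation of an M/M/1 queue with (r, Q) inventory
    # and lost sales during stockout; lead times assumed exponential here.
    import random
    from collections import Counter

    lam, mu, nu = 1.0, 1.5, 0.5   # arrival, service, delivery (1/mean lead time) rates
    r, Q = 2, 5                   # reorder point and order quantity
    random.seed(0)

    n, inv, on_order = 0, Q, False    # queue length, on-hand inventory, order pending
    visits, t, horizon = Counter(), 0.0, 50_000.0
    while t < horizon:
        rates = {
            "arrival": lam if inv > 0 else 0.0,        # stockout arrivals are lost
            "service": mu if (n > 0 and inv > 0) else 0.0,
            "delivery": nu if on_order else 0.0,
        }
        total = sum(rates.values())
        dt = random.expovariate(total)
        visits[(n, inv)] += dt                         # time-weighted state occupancy
        t += dt
        event = random.choices(list(rates), weights=list(rates.values()))[0]
        if event == "arrival":
            n += 1
        elif event == "service":
            n, inv = n - 1, inv - 1                    # a service consumes one item
        else:
            inv, on_order = inv + Q, False
        if inv <= r and not on_order:
            on_order = True                            # place a replenishment order

    total_time = sum(visits.values())
    for state, w in sorted(visits.items())[:5]:        # a few joint probabilities
        print(state, round(w / total_time, 4))
    ```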

  17. The relationship between the parameters (Heart rate, Ejection fraction and BMI) and the maximum enhancement time of ascending aorta

    International Nuclear Information System (INIS)

    Jang, Young Ill; June, Woon Kwan; Dong, Kyeong Rae

    2007-01-01

    In this study, the bolus tracking method was used to investigate the parameters affecting the time at which contrast media reaches 100 HU (T100) and to study the relationship between these parameters and T100, because the time for contrast media to reach the aorta through the antecubital vein after injection differs from person to person. Using 64-MDCT cardiac CT, data were obtained from 100 patients (male: 50, female: 50, age range: 21-81, average age: 57.5) during July to September 2007 by injecting contrast media at 4 ml/s through the antecubital vein, excluding patients who had difficulty holding their breath or who had arrhythmia. Using a Siemens Somatom Sensation Cardiac 64, patients' height and weight were measured to obtain their mean heart rate and BMI, and ejection fraction was measured using the Argus program on a Wizard workstation. The variation of each parameter with T100 was analyzed with multiple comparisons, and the correlations of heart rate, ejection fraction and BMI with T100 were analyzed as well. The higher the patients' heart rate and ejection fraction, the faster T100; the lower their heart rate and ejection fraction, the slower T100; T100 was not affected by BMI. In the correlation between T100 and the parameters, heart rate (p<0.01) and ejection fraction (p<0.05) were significant, but BMI was not (p>0.05). Grouping T100 as fast (17 s and less), medium (18-21 s) and slow (22 s and over), heart rate differed significantly between the fast and slow groups, and ejection fraction between fast and slow as well as medium and slow (p<0.05), while BMI was not statistically significant. Of the parameters (heart rate, ejection fraction and BMI) which would affect T100, heart

  18. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  19. THE MAXIMUM AMOUNTS OF RAINFALL FALLEN IN SHORT PERIODS OF TIME IN THE HILLY AREA OF CLUJ COUNTY - GENESIS, DISTRIBUTION AND PROBABILITY OF OCCURRENCE

    Directory of Open Access Journals (Sweden)

    BLAGA IRINA

    2014-03-01

    Full Text Available The maximum amounts of rainfall are usually characterized by high intensity, and their effects on the substrate are revealed at slope level by the deepening of existing forms of torrential erosion, the formation of new ones, and landslide processes. For the period 1971-2000, the highest rainfall amounts fallen in 24, 48 and 72 hours were extracted and analyzed for the weather stations in the hilly area of Cluj County (Cluj-Napoca, Dej, Huedin and Turda), based on which the variation and the spatial and temporal distribution of precipitation were analyzed. The annual probability of exceedance of maximum rainfall amounts fallen in short time intervals (24, 48 and 72 hours), based on thresholds and class values, was determined using climatological practices and the facilities of the Hyfran program.
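
    A hedged sketch of this kind of analysis (not the Hyfran workflow) fits a Gumbel distribution to illustrative annual maxima and computes annual exceedance probabilities for chosen thresholds:

    ```python
    # Illustrative sketch: annual exceedance probability of 24 h rainfall maxima.
    import numpy as np
    from scipy import stats

    annual_max_24h = np.array([38, 45, 52, 41, 60, 35, 48, 71, 44, 55,
                               39, 47, 66, 50, 43, 58, 36, 49, 62, 46])  # mm

    loc, scale = stats.gumbel_r.fit(annual_max_24h)
    for threshold in (50, 70, 100):                    # warning thresholds, mm/24 h
        p = stats.gumbel_r.sf(threshold, loc, scale)
        print(f"P(annual max > {threshold} mm) = {p:.3f}  (~{1 / p:.0f}-yr return)")
    ```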

  20. The impact of product configurators on lead times in engineering-oriented companies

    DEFF Research Database (Denmark)

    Haug, Anders; Hvam, Lars; Mortensen, Niels Henrik

    2011-01-01

    This paper presents a study of how the use of product configurators affects business processes of engineering-oriented companies. A literature study shows that only a minor part of product configuration research deals with the effects of product configuration, and that the ones that do are mostly vague when reporting the effects of configurator projects. Only six cases were identified, which provide estimates of the actual size of lead time reduction achieved from product configurators. To broaden this knowledge, this paper presents the results of a study of 14 companies concerning the impact

  1. Lead-acid batteries life time prolongation in renewable energy source plants

    Directory of Open Access Journals (Sweden)

    Костянтин Ігорович Ткаченко

    2015-11-01

    Full Text Available Charge controllers with microprocessor control are recognized as near-optimal devices for controlling the collection and storage of energy in batteries in power systems with renewable energy sources such as solar photovoltaic panels, wind generators and others. The task of the controller is charging process control, that is, charging and discharging the batteries while providing maximum charging speed and keeping the parameters that characterize the state of the battery within certain limits, preventing overcharging, overheating and deep discharge of the batteries. The possibility of archiving data that records the time dependence of battery parameters is also important. Thus, the concept of a charge controller based on the Texas Instruments MSP430G2553 microcontroller was introduced in this study. The program stored in the microcontroller ROM provides: a charge regime (with a particular algorithm); a control and training cycle followed by charging; and a continuous charge-discharge regime to restore the battery or to study the influence of charge regime algorithms on repair effectiveness. The device can perform its functions without being connected to a personal computer, but this connection makes it possible to observe in real time the characteristics of a number of discharge and charge regime parameters, as well as to read the stored data from microcontroller flash memory and save these data to the PC hard disk for further analysis. A four-stage charging algorithm with a reverse charging regime was proposed by the author, and the correctness of the algorithm was demonstrated.
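
    The abstract names a four-stage algorithm with a reverse charging regime but does not specify it; the sketch below is therefore a hypothetical state machine with placeholder 12 V thresholds, meant only to illustrate the kind of logic such firmware implements.

    ```python
    # Hypothetical sketch only: stages and thresholds are placeholders, not the
    # author's firmware.
    from enum import Enum, auto

    class Stage(Enum):
        BULK = auto()        # constant current up to the absorption voltage
        ABSORPTION = auto()  # hold voltage, taper current
        REVERSE = auto()     # brief discharge pulses (assumed anti-sulfation step)
        FLOAT = auto()       # maintenance voltage

    def next_stage(stage, v_batt, i_charge):
        V_ABS, I_TAIL, V_FLOAT = 14.4, 0.5, 13.5   # hypothetical 12 V thresholds
        if stage is Stage.BULK and v_batt >= V_ABS:
            return Stage.ABSORPTION
        if stage is Stage.ABSORPTION and i_charge <= I_TAIL:
            return Stage.REVERSE
        if stage is Stage.REVERSE:                  # a short, fixed pulse phase
            return Stage.FLOAT
        if stage is Stage.FLOAT and v_batt < V_FLOAT - 0.8:
            return Stage.BULK                       # battery sagged: recharge
        return stage

    stage = Stage.BULK
    for v, i in [(13.0, 5.0), (14.4, 4.0), (14.4, 0.4), (14.3, 0.0), (12.5, 0.0)]:
        stage = next_stage(stage, v, i)
        print(v, i, "->", stage.name)
    ```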

  2. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    Directory of Open Access Journals (Sweden)

    Morten B. S. Svendsen

    2016-10-01

    Full Text Available Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
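
    The estimate rests on two quantities: the minimum tail-beat period, taken as twice the muscle twitch contraction time (one twitch per side), and the stride length travelled per beat. A worked sketch with illustrative numbers (not the measured data):

    ```python
    # Illustrative calculation of the twitch-time/stride-length speed estimate.
    def max_speed(twitch_time_s, body_length_m, stride_fraction=0.7):
        """Estimated max speed (m/s); stride_fraction is the assumed stride
        length in body lengths, a typical value for steadily swimming fish."""
        max_tailbeat_hz = 1.0 / (2.0 * twitch_time_s)   # one twitch per side
        return stride_fraction * body_length_m * max_tailbeat_hz

    print(f"{max_speed(twitch_time_s=0.060, body_length_m=2.0):.1f} m/s")
    ```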

  3. Time-varying extreme value dependence with application to leading European stock markets

    KAUST Repository

    Castro-Camilo, Daniela; de Carvalho, Miguel; Wadsworth, Jennifer

    2018-03-09

    Extremal dependence between international stock markets is of particular interest in today’s global financial landscape. However, previous studies have shown this dependence is not necessarily stationary over time. We concern ourselves with modeling extreme value dependence when that dependence is changing over time, or other suitable covariate. Working within a framework of asymptotic dependence, we introduce a regression model for the angular density of a bivariate extreme value distribution that allows us to assess how extremal dependence evolves over a covariate. We apply the proposed model to assess the dynamics governing extremal dependence of some leading European stock markets over the last three decades, and find evidence of an increase in extremal dependence over recent years.

  5. Time Domain View of Liquid-like Screening and Large Polaron Formation in Lead Halide Perovskites

    Science.gov (United States)

    Joshi, Prakriti Pradhan; Miyata, Kiyoshi; Trinh, M. Tuan; Zhu, Xiaoyang

    The structural softness and dynamic disorder of lead halide perovskites contribute to their remarkable optoelectronic properties through efficient charge screening and large polaron formation. Here we provide a direct time-domain view of the liquid-like structural dynamics and polaron formation in single crystal CH3NH3PbBr3 and CsPbBr3 using femtosecond optical Kerr effect spectroscopy in conjunction with transient reflectance spectroscopy. We investigate structural dynamics as a function of pump energy, which enables us to examine the dynamics in the absence and presence of charge carriers. In the absence of charge carriers, structural dynamics are dominated by over-damped picosecond motions of the inorganic PbBr3- sub-lattice, and these motions are strongly coupled to band-gap electronic transitions. Carrier injection from across-gap optical excitation triggers additional 0.26 ps dynamics in CH3NH3PbBr3 that can be attributed to the formation of large polarons. In comparison, large polaron formation is slower in CsPbBr3, with a time constant of 0.6 ps. We discuss how such dynamic screening protects charge carriers in lead halide perovskites. US Department of Energy, Office of Science - Basic Energy Sciences.

  6. Optimization and Customer Utilities under Dynamic Lead Time Quotation in an M/M Type Base Stock System

    Directory of Open Access Journals (Sweden)

    Koichi Nakade

    2017-01-01

    Full Text Available In a manufacturing and inventory system, information on production and order lead times helps customers decide whether or not to accept finished products, considering their own impatience with waiting time. Savaşaneril et al. (2010) discuss the optimal dynamic lead time quotation policy, and its properties, in a one-stage production and inventory system with a base stock policy for maximizing the system's profit. In this system, each arriving customer decides whether to enter the system based on the lead time quoted by the system. On the other hand, the customer's utility may be small under the optimal quoted lead time policy, because the actual lead time may be longer than the quoted lead time. We use a utility function with respect to the benefit of receiving products and the waiting time, and propose several kinds of heuristic lead time quotation policies. These are compared with the optimal policies with respect to both profits and customers' utilities. Numerical examples show that some heuristic policies achieve better expected customer utility than the optimal quoted lead time policy that maximizes the system's profit.
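
    For intuition, in the special case of a plain M/M/1 make-to-order queue the sojourn time is exponential with rate μ − λ, so the probability that the actual lead time exceeds a quote d has a closed form; the sketch below uses this simplification of the base stock model analyzed in the paper.

    ```python
    # Minimal sketch: reliability of a quoted lead time in a plain M/M/1 queue.
    import math

    def prob_late(lam, mu, d):
        """P(actual lead time > quoted lead time d) in a stable M/M/1 queue."""
        assert lam < mu, "queue must be stable"
        return math.exp(-(mu - lam) * d)

    lam, mu = 0.8, 1.0
    for d in (1.0, 5.0, 10.0, 15.0):
        print(f"quote d={d:>4}: P(late) = {prob_late(lam, mu, d):.3f}")
    ```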

  7. ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY

    International Nuclear Information System (INIS)

    Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.; Shaver, Mark W.

    2010-01-01

    Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today's confirmatory assay methods. Pacific Northwest National Laboratory's (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.

  8. Simultaneous measurement of the maximum oscillation amplitude and the transient decay time constant of the QCM reveals stiffness changes of the adlayer.

    Science.gov (United States)

    Marxer, C Galli; Coen, M Collaud; Bissig, H; Greber, U F; Schlapbach, L

    2003-10-01

    Interpretation of adsorption kinetics measured with a quartz crystal microbalance (QCM) can be difficult for adlayers undergoing modification of their mechanical properties. We have studied the behavior of the oscillation amplitude, A(0), and the decay time constant, tau, of quartz during adsorption of proteins and cells, by use of a home-made QCM. We are able to measure simultaneously the frequency, f, the dissipation factor, D, the maximum amplitude, A(0), and the transient decay time constant, tau, every 300 ms in liquid, gaseous, or vacuum environments. This analysis enables adsorption and modification of liquid/mass properties to be distinguished. Moreover the surface coverage and the stiffness of the adlayer can be estimated. These improvements promise to increase the appeal of QCM methodology for any applications measuring intimate contact of a dynamic material with a solid surface.
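
    The extraction of A(0) and tau can be sketched as a fit of an exponentially damped sinusoid to the ring-down recorded after the drive is switched off; the sketch below (not the authors' acquisition code) uses synthetic data in place of a measurement.

    ```python
    # Minimal sketch: recover A0 and tau from a simulated quartz ring-down.
    import numpy as np
    from scipy.optimize import curve_fit

    def ringdown(t, a0, tau, f, phi):
        return a0 * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

    t = np.linspace(0.0, 2e-4, 4000)                   # 200 us record
    rng = np.random.default_rng(3)
    signal = ringdown(t, 1.0, 5e-5, 5e6, 0.3) + rng.normal(scale=0.02, size=t.size)

    p0 = (signal.max(), 1e-4, 5e6, 0.0)                # rough initial guesses
    (a0, tau, f, phi), _ = curve_fit(ringdown, t, signal, p0=p0)
    print(f"A0={a0:.3f}, tau={tau * 1e6:.1f} us, f={f / 1e6:.4f} MHz")
    ```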

  9. Modularity, Lead time and Return Policy for Supply Chain in Mass Customization System

    Directory of Open Access Journals (Sweden)

    Jizi Li

    2016-12-01

    Full Text Available Mass Customization (MC) is a flexible manufacturing system with features of Mass Production (MP) and Customization Production (CP). However, there has been little research on competition & cooperation between the upstream MP firm (module manufacturer) and the downstream CP firm (assembler) under an MC supply chain scenario. From a supply chain perspective, this paper first develops base models considering the influences of return policy, modularity level, production lead time and pricing factors. Furthermore, according to the different decision-making situations, three kinds of MC supply chain models in competitive or cooperative environments (i.e., simultaneous-move game, sequential-move game and cooperative game) are built; the optimal solutions of each model are then analyzed and compared, and a coordination mechanism is designed for cooperation in the MC supply chain via profit-sharing with Nash bargaining power. Finally, through numerical analysis, we find that the highest profit arises in the cooperative setting, followed by the simultaneous-move and sequential-move settings. The reason is that the lowest product price and the largest market demand occur in the cooperative game compared with the others; the upstream module manufacturer takes advantage of MP to increase the modularity level and decrease manufacturing cost for the whole supply chain, while the downstream assembler's task is to shorten the lead time according to customers' needs. Meanwhile, a wholesale price in the cooperative game higher than in the simultaneous-move and sequential-move games can ensure each firm's benefits, effectively prevent the effect of double marginalization, and achieve Pareto optimality.

  10. A simulation study of Linsley's approach to infer elongation rate and fluctuations of the EAS maximum depth from muon arrival time distributions

    International Nuclear Information System (INIS)

    Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.

    1999-01-01

    The average depth of the maximum X_m of the EAS (Extensive Air Shower) development depends on the energy E_0 and the mass of the primary particle, and its dependence on the energy is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of the maximum per decade of E_0, i.e. D_e = dX_m/dlog10(E_0). Invoking the superposition model approximation, i.e. assuming that a heavy primary (A) has the same shower elongation rate as a proton, but scaled with energies E_0/A, one can write X_m = X_init + D_e·log10(E_0/A). In 1977 an indirect approach to studying D_e was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of the shower maximum. The distributions of the EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflect the path-length distribution of the muon travel from the locus of production (near the axis) to the observation locus. The basic a priori assumption is that we can associate the mean value or median T of the time distribution with the height of the EAS maximum X_m, and that we can express T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival time quantities, some knowledge is required about F, i.e. F = -(∂T/∂X_m)_X / (∂T/∂X)_{X_m}, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log10(E_0)|_X = -F·D_e·(1/X_v)·∂T/∂secθ|_{E_0}. In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂secθ|_{E_0}, with F_σ being the corresponding scaling factor for the fluctuations. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle

  11. Note: Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    We show that well-known textbook formulae for determining the optimal base stock of the inventory system with continuous review and constant lead time can easily be extended to the case with periodic review and stochastic, sequential lead times. The provided performance measures and conditions...
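
    For reference, the textbook formula the note generalizes sets the base stock at a critical-fractile quantile of demand over lead time plus review period; a minimal sketch assuming normal demand and illustrative costs:

    ```python
    # Minimal sketch: optimal base stock via the critical fractile.
    from scipy.stats import norm

    mu_d, sigma_d = 100.0, 20.0   # per-period demand mean and s.d. (i.i.d. normal)
    L, R = 2.0, 1.0               # lead time and review period, in periods
    b, h = 9.0, 1.0               # backorder and holding cost per unit per period

    beta = b / (b + h)                          # critical fractile
    S = mu_d * (L + R) + norm.ppf(beta) * sigma_d * (L + R) ** 0.5
    print(f"service target beta={beta:.2f}, optimal base stock S={S:.0f}")
    ```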

  12. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses.

    Science.gov (United States)

    Molloy, Katharine; Griffiths, Timothy D; Chait, Maria; Lavie, Nilli

    2015-12-09

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory

  13. Great Physicists - The Life and Times of Leading Physicists from Galileo to Hawking

    International Nuclear Information System (INIS)

    Cropper, William H

    2002-01-01

    The author, a former American chemistry professor, has organized his book into nine parts with 29 chapters, covering, in a fairly historical sequence and systematic conceptual progression, all fundamentals of today's physics: i.e., mechanics, thermodynamics, electromagnetism, statistical mechanics, relativity, quantum mechanics, nuclear physics, particle physics, astronomy-astrophysics-cosmology. Obviously, the 20th century (when about 90% of professional physicists of all time worked) assumes with five topics the dominant role in this enterprise. For each topic, a small number (ranging from one to eight) of leading personalities is selected and the biographies of these 29 physicists, including two women (Marie Curie and Lise Meitner), are presented in some detail together with their achievements in the particular topic. Important relevant contributions of other scholars to each topic are also discussed. In addition, Cropper provides each of the topics with a short 'historical synopsis' justifying his selection of key persons. One may argue that concentrating on leading physicists constitutes an old-fashioned approach to displaying the history and contents of fundamental topics in physics. However, the mixture of biographies and explanation of leading contributions given here will certainly serve for a larger public, not just professional physicists and scientists, as a guide through the exciting development of physical ideas and discoveries. In general, the presentation of the material is quite satisfactory (with only few slips, e.g., in the Meitner story, where the author follows too closely a new biography) and gives the essence of the great advances in physics since the 15th century. One notices perhaps the limitation of the author in cases where no biography in English is available - this would also explain the omission of some of the main contributors to atomic and particle physics, such as Arnold Sommerfeld and Hideki Yukawa, or that French or Russian

  15. Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series

    Science.gov (United States)

    Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.

    2017-12-01

    Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatial-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating the different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate the modes explaining the internal variability. The method we present is aimed at overcoming both these problems. The method is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes external forcing signals into account. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used for extracting the principal modes of SST variability on inter-annual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity and volcanic activity. The structure of the revealed teleconnection patterns, as well as their forecast under different CO2 emission scenarios, are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016

  16. Coupled Heuristic Prediction of Long Lead-Time Accumulated Total Inflow of a Reservoir during Typhoons Using Deterministic Recurrent and Fuzzy Inference-Based Neural Network

    Directory of Open Access Journals (Sweden)

    Chien-Lin Huang

    2015-11-01

    Full Text Available This study applies a Real-Time Recurrent Learning Neural Network (RTRLNN) and an Adaptive Network-based Fuzzy Inference System (ANFIS) with novel heuristic techniques to develop an advanced prediction model of the accumulated total inflow of a reservoir, in order to handle the highly varied uncertainty of long lead-time real-time forecasts during typhoon attacks. To improve the temporal-spatial forecast precision, the following original specialized heuristic inputs were coupled: observed-predicted inflow increase/decrease (OPIID) rate, total precipitation, and duration from the current time to the time of maximum precipitation and direct runoff ending (DRE). This study also investigated the temporal-spatial forecast error features to assess the feasibility of the developed models, and analyzed the output sensitivity of both single and combined heuristic inputs to determine whether the heuristic model is susceptible to the impact of future forecast uncertainty/errors. Validation results showed that the long lead-time predictive accuracy and stability of the RTRLNN-based accumulated total inflow model are better than those of the ANFIS-based model because of the real-time recurrent deterministic routing mechanism of the RTRLNN. Simulations show that the RTRLNN-based model with coupled heuristic inputs (RTRLNN-CHI; average error percentage (AEP)/average forecast lead-time (AFLT): 6.3%/49 h) can achieve better prediction than the models with non-heuristic inputs (AEP of RTRLNN-NHI and ANFIS-NHI: 15.2%/31.8%) because of the full consideration of real-time hydrological initial/boundary conditions. Besides, the RTRLNN-CHI model can extend the forecast lead-time beyond 49 h with less than 10% AEP, overcoming the previous forecast limit of a 6-h AFLT with 20%-40% AEP.

  17. Ensemble hydrological forecast efficiency evolution over various issue dates and lead-time: case study for the Cheboksary reservoir (Volga River)

    Science.gov (United States)

    Gelfan, Alexander; Moreido, Vsevolod

    2017-04-01

    Ensemble hydrological forecasting allows for describing the uncertainty caused by the variability of meteorological conditions in the river basin over the forecast lead-time. At the same time, in snowmelt-dependent river basins another significant source of uncertainty relates to the variability of the initial conditions of the basin (snow water equivalent, soil moisture content, etc.) prior to forecast issue. Accurate long-term hydrological forecasts are most crucial for large water management systems, such as the Cheboksary reservoir (catchment area 374 000 sq. km) located on the Middle Volga river in Russia. Accurate forecasts of water inflow volume, maximum discharge and other flow characteristics are of great value for this basin, especially before the beginning of the spring freshet season, which lasts here from April to June. The semi-distributed hydrological model ECOMAG was used to develop a long-term ensemble forecast of daily water inflow into the Cheboksary reservoir. To describe the variability of meteorological conditions and construct an ensemble of possible weather scenarios for the lead-time of the forecast, two approaches were applied: the first utilizes 50 weather scenarios observed in previous years (similar to the ensemble streamflow prediction (ESP) procedure), and the second uses 1000 synthetic scenarios simulated by a stochastic weather generator. We investigated the evolution of forecast uncertainty reduction, expressed as forecast efficiency, over consecutive forecast issue dates and lead-times. We analyzed the Nash-Sutcliffe efficiency of inflow hindcasts for the period 1982 to 2016, issued from 1 March at 15-day intervals for lead-times of 1 to 6 months. This resulted in a forecast efficiency matrix of issue dates versus lead-times that allows the predictability of the basin to be identified. The matrix was constructed separately for the observed and synthetic weather ensembles.
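
    The Nash-Sutcliffe efficiency used to score each cell of that matrix is a one-liner; a minimal sketch with illustrative hindcast values:

    ```python
    # Minimal sketch: Nash-Sutcliffe efficiency of a hindcast series.
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """1 = perfect; 0 = no better than the climatological mean; < 0 = worse."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    obs = np.array([120.0, 95.0, 140.0, 110.0, 88.0])        # illustrative inflows
    ens_mean = np.array([115.0, 100.0, 131.0, 118.0, 92.0])  # ensemble means
    print(f"NSE = {nash_sutcliffe(obs, ens_mean):.2f}")
    ```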

  18. Time trends in burdens of cadmium, lead, and mercury in the population of northern Sweden

    International Nuclear Information System (INIS)

    Wennberg, Maria; Lundh, Thomas; Bergdahl, Ingvar A.; Hallmans, Goeran; Jansson, Jan-Hakan; Stegmayr, Birgitta; Custodio, Hipolito M.; Skerfving, Staffan

    2006-01-01

    The time trends of exposure to heavy metals are not adequately known. This is a worldwide problem with regard to the basis for preventive actions and evaluation of their effects. This study addresses time trends for the three toxic elements cadmium (Cd), mercury (Hg), and lead (Pb). Concentrations in erythrocytes (Ery) were determined in a subsample of the population-based MONICA surveys from 1990, 1994, and 1999 in a total of 600 men and women aged 25-74 years. The study took place in the two northernmost counties in Sweden. To assess the effect of changes in the environment, adjustments were made for life-style factors that are determinants of exposure. Annual decreases of 5-6% were seen for Ery-Pb levels (adjusted for age and changes in alcohol intake) and Ery-Hg levels (adjusted for age and changes in fish intake). Ery-Cd levels (adjusted for age) showed a similar significant decrease in smoking men. It is concluded that for Pb and maybe also Hg the actions against pollution during recent decades have caused a rapid decrease of exposure; for Hg the decreased use of dental amalgam may also have had an influence. For Cd, the decline in Ery-Cd was seen only in smokers, indicating that Cd exposure from tobacco has decreased, while other environmental sources of Cd have not changed significantly. To further improve the health status in Sweden, it is important to decrease the pollution of Cd, and actions against smoking in the community are important

  19. Quantifying lead-time bias in risk factor studies of cancer through simulation.

    Science.gov (United States)

    Jansen, Rick J; Alexander, Bruce H; Anderson, Kristin E; Church, Timothy R

    2013-11-01

    Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Surveillance Epidemiology and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence, and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust observed OR in the actual study. For this particular study design, as average preclinical duration increased, the bias in the total-physical activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null. The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Assessment of realistic nowcasting lead-times based on predictability analysis of Mediterranean Heavy Precipitation Events

    Science.gov (United States)

    Bech, Joan; Berenguer, Marc

    2014-05-01

    Operational quantitative precipitation forecasts (QPF) are provided routinely by weather services or hydrological authorities, particularly those responsible for densely populated regions of small catchments, such as those typically found in Mediterranean areas prone to flash-floods. Specific rainfall values are used as thresholds for issuing warning levels considering different time frameworks (mid-range, short-range, 24h, 1h, etc.), for example 100 mm in 24h or 60 mm in 1h. There is a clear need to determine how feasible a specific rainfall value is for a given lead-time, in particular for very short range forecasts or nowcasts typically obtained from weather radar observations (Pierce et al 2012). In this study we assess which specific nowcast lead-times can be provided for a number of heavy precipitation events (HPE) that affected Catalonia (NE Spain). The nowcasting system we employed generates QPFs through the extrapolation of rainfall fields observed with weather radar, following a Lagrangian approach developed and tested successfully in previous studies (Berenguer et al. 2005, 2011). Then QPFs up to 3h are compared with two quality controlled observational data sets: weather radar quantitative precipitation estimates (QPE) and raingauge data. Several high-impact weather HPE were selected, including the 7 September 2005 Llobregat Delta river tornado outbreak (Bech et al. 2007) and the 2 November 2008 supercell tornadic thunderstorms (Bech et al. 2011), both producing, among other effects, local flash floods. In these two events there were torrential rainfall rates (30' amounts exceeding 38.2 and 12.3 mm respectively) and 24h accumulation values above 100 mm. A number of verification scores are used to characterize the evolution of precipitation forecast quality with time, which typically presents a decreasing trend but shows a strong dependence on the selected rainfall threshold and integration period. For example considering correlation factors, 30

  1. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of the lead time through the amount of shortages, and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.

  2. Optimal Ordering Policy and Coordination Mechanism of a Supply Chain with Controllable Lead-Time-Dependent Demand Forecast

    Directory of Open Access Journals (Sweden)

    Hua-Ming Song

    2011-01-01

    Full Text Available This paper investigates the ordering decisions and coordination mechanism for a distributed short-life-cycle supply chain. The objective is to maximize the whole supply chain's expected profit and meanwhile make the supply chain participants achieve a Pareto improvement. We treat lead time as a controllable variable; thus the demand forecast depends on lead time: the shorter the lead time, the better the forecast. Moreover, optimal decision-making models for lead time and order quantity are formulated and compared in the decentralized and centralized cases. Besides, a three-parameter contract is proposed to coordinate the supply chain and alleviate double marginalization in the decentralized scenario. In addition, based on the analysis of the models, we develop an algorithmic procedure to find the optimal ordering decisions. Finally, a numerical example is presented to illustrate the results.

  3. Real-Time Salmonella Detection Using Lead Zirconate Titanate-Titanium Microcantilevers

    National Research Council Canada - National Science Library

    McGovern, John-Paul; Shih, Wan Y; Shih, Wei-Heng; Sergi, Mauro; Chaiken, Irwin

    2005-01-01

    .... We have developed and investigated the use of a lead zirconate titanate-titanium (PZT-Ti) microcantilever for in situ detection of the common food- and water-borne pathogen, Salmonella typhimurium...

  4. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate

    Directory of Open Access Journals (Sweden)

    Chiba Shigeru

    2007-09-01

    Full Text Available Abstract Background Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposures to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of the motion sickness they suffered by a subjective score and by the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. Results The results showed adaptation to visually induced motion sickness under repetitive presentation of the same image, in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax with time. Conclusion The physiological index ρmax is a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
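
    The index ρmax can be sketched as the extremum of the lagged cross-correlation between the two standardized signals; the implementation below is an illustrative stand-in for the authors' analysis, with synthetic data.

    ```python
    # Minimal sketch: largest-magnitude lagged cross-correlation of two signals.
    import numpy as np

    def rho_max(hr, ptt, max_lag=10):
        """Extremum (by magnitude) of the lagged Pearson cross-correlation."""
        hr = (hr - hr.mean()) / hr.std()
        ptt = (ptt - ptt.mean()) / ptt.std()
        rhos = []
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = hr[lag:], ptt[:ptt.size - lag]
            else:
                a, b = hr[:lag], ptt[-lag:]
            rhos.append(np.mean(a * b))            # Pearson r of aligned segments
        return max(rhos, key=abs)

    rng = np.random.default_rng(5)
    hr = rng.normal(size=300)
    ptt = -0.7 * np.roll(hr, 2) + rng.normal(scale=0.5, size=300)  # lagged coupling
    print(f"rho_max = {rho_max(hr, ptt):.2f}")
    ```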

  6. 24 CFR 203.18c - One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage...

    Science.gov (United States)

    2010-04-01

    Section 203.18c of Title 24 of the Code of Federal Regulations (Single Family Mortgage loan insurance programs under the National Housing Act and other authorities) provides that the one-time or up-front mortgage insurance premium is excluded from the limitations on maximum mortgage amounts.

  7. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report

    Energy Technology Data Exchange (ETDEWEB)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.

    2012-09-28

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011, based on singular value decomposition (SVD), to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass to within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to describe the 235U masses of the RPI measurements with an average uncertainty of 2.3%. Analyses were conducted that provided valuable insight with regard to design requirements (e
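
    The SVD-based calibration idea can be sketched as a truncated pseudo-inverse mapping from measured time spectra to fissile masses, trained on a handful of calibration assemblies; the shapes, noise levels and masses below are illustrative, not the NGSI-64 data.

    ```python
    # Hedged sketch: truncated-SVD calibration from LSDS time spectra to masses.
    import numpy as np

    rng = np.random.default_rng(11)
    n_cal, n_bins = 6, 400                       # 6 calibration assemblies, 400 bins
    response = rng.random((n_bins, 2))           # hidden response to (Pu-239, U-235)
    masses_cal = rng.uniform(2.0, 6.0, (n_cal, 2))           # kg
    spectra_cal = masses_cal @ response.T
    spectra_cal *= rng.normal(1.0, 0.01, spectra_cal.shape)  # ~1% measurement noise

    # Truncated-SVD pseudo-inverse of the calibration spectra:
    U, s, Vt = np.linalg.svd(spectra_cal, full_matrices=False)
    k = 2                                        # keep the two dominant components
    pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
    W = pinv @ masses_cal                        # maps a spectrum to mass estimates

    test_mass = np.array([4.2, 3.1])
    test_spectrum = response @ test_mass
    print("estimated masses:", test_spectrum @ W)
    ```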

  8. Evaluating the Effect of Lead Time on Quality Service Delivery in the Banking Industry in Kumasi Metropolis of Ghana

    Directory of Open Access Journals (Sweden)

    Stephen Okyere

    2015-07-01

    Full Text Available Customers are increasingly attracted to quality service delivery and become impatient and dissatisfied when they are delayed or have to wait a long time before being served. Hence, quality service delivery is of utmost importance to every service organisation, especially the financial industry. Most financial institutions focus attention on product innovation at the expense of lead time management, which is a major factor in ensuring service quality and customer satisfaction. Consequently, this research evaluates the effect of lead time on quality service delivery in the banking industry in the Kumasi Metropolis of Ghana. The study relied on primary data collected through questionnaires, observation and interviews administered to staff and customers of some selected branches of a commercial bank in the study area. The data were analysed qualitatively. The researchers found that, despite the immense importance of lead time to quality service delivery, little attention is given to the concept. It was revealed that customers were dissatisfied with the commercial bank's services as a result of unnecessary delays and queuing on the bank premises. The long lead time was found to be attributable to plant/system failure, skill gaps on the part of employees, and ATM underutilization and frequent breakdowns, among others. This has consequently resulted in long lead times, waiting, queuing and unnecessary delay in the banking hall. It is recommended that tellers be provided with electronic card readers for verification of customers' data so that processing is faster.

  9. Lead Time to Appointment and No-Show Rates for New and Follow-up Patients in an Ambulatory Clinic.

    Science.gov (United States)

    Drewek, Rupali; Mirea, Lucia; Adelson, P David

    High rates of no-shows in outpatient clinics are problematic for revenue and for quality of patient care. Longer lead time to appointment has variably been implicated as a risk factor for no-shows, but the evidence within pediatric clinics is inconclusive. The goal of this study was to estimate no-show rates and test for association between appointment lead time and no-show rates for new and follow-up patients. Analyses included 534 new and 1920 follow-up patients from pulmonology and gastroenterology clinics at a freestanding children's hospital. The overall rate of no-shows was lower for visits scheduled within 0 to 30 days compared with 30 days or more (23% compared with 47%, P < .0001). Patient type significantly modified the association with appointment lead time; among appointments scheduled within 30 days, the rate of no-shows was higher for new patients (30%) than for follow-up patients (21%) (P = .004). For appointments scheduled with 30 or more days' lead time, no-show rates were statistically similar for new patients (46%) and follow-up patients (48%). Time to appointment is a risk factor associated with no-shows, and further study is needed to identify and implement effective approaches to reduce appointment lead time, especially for new patients in pediatric subspecialties.
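
    As a rough illustration of the kind of comparison reported above (not the study's actual data or analysis), the two no-show proportions can be compared with a chi-square test on a 2 x 2 table; the counts below are invented to roughly match the quoted rates.

```python
# Invented counts (~23% vs ~47% no-shows); chi-square test of
# independence between lead-time group and no-show outcome.
from scipy.stats import chi2_contingency

#        [no-show, showed]
table = [[115, 385],   # scheduled within 0-30 days
         [235, 265]]   # scheduled 30+ days

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```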

  10. Prospective Predictors of Suicidality: Defeat and Entrapment Lead to Changes in Suicidal Ideation over Time

    Science.gov (United States)

    Taylor, Peter James; Gooding, Patricia A.; Wood, Alex M.; Johnson, Judith; Tarrier, Nicholas

    2011-01-01

    Theoretical perspectives into suicidality have suggested that heightened perceptions of defeat and entrapment lead to suicidality. However, all previous empirical work has been cross-sectional. We provide the first longitudinal test of the theoretical predictions, in a sample of 79 students who reported suicidality. Participants completed…

  11. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
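
    A minimal sketch of the idea, under the assumption that statsmodels' exact Gaussian likelihood is an acceptable stand-in for the raw-data maximum likelihood approach described: simulate one long stationary ARMA(1,1) series (T large, N = 1) and recover its parameters.

```python
# Simulate a single long ARMA(1,1) series and fit it by exact Gaussian
# maximum likelihood; statsmodels is used here purely for illustration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
T = 500
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):        # y_t = 0.6 y_{t-1} + e_t + 0.3 e_{t-1}
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

fit = ARIMA(y, order=(1, 0, 1)).fit()
print(fit.params)            # [const, ar.L1, ma.L1, sigma2]
```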

  12. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  13. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  14. Engaging narratives evoke similar neural activity and lead to similar time perception.

    Science.gov (United States)

    Cohen, Samantha S; Henin, Simon; Parra, Lucas C

    2017-07-04

    It is said that we lose track of time - that "time flies" - when we are engrossed in a story. How does engagement with the story cause this distorted perception of time, and what are its neural correlates? People commit both time and attentional resources to an engaging stimulus. For narrative videos, attentional engagement can be represented as the level of similarity between the electroencephalographic responses of different viewers. Here we show that this measure of neural engagement predicted the duration of time that viewers were willing to commit to narrative videos. Contrary to popular wisdom, engagement did not distort the average perception of time duration. Rather, more similar brain responses resulted in a more uniform perception of time across viewers. These findings suggest that by capturing the attention of an audience, narrative videos bring both neural processing and the subjective perception of time into synchrony.

  15. Contribution of National near Real Time MODIS Forest Maximum Percentage NDVI Change Products to the U.S. ForWarn System

    Science.gov (United States)

    Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.

    2012-01-01

    This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident to the USDA Forest Service (USFS) ForWarn System. ForWarn is an Early Warning System (EWS) tool for detection and tracking of regionally evident forest change, which includes the U.S. Forest Change Assessment Viewer (FCAV) (a publicly available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change). NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed using three different historical baselines: 1) previous year; 2) previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum value NDVI that are composited across a 24 day interval and refreshed every 8 days so that resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands at 231.66 meters, which equates to approximately 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise, interpolate data voids, and re-aggregate the historical NDVI into 24 day composites, and then custom MATLAB scripts are used to temporally process the eMODIS NDVIs so that they are in sync with the historical NDVI products. Prior to posting, an in-house snow mask classification product
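
    A hedged sketch of the basic product logic (percent change of the current maximum-value NDVI composite relative to a historical baseline composite); the tiny arrays stand in for real raster tiles and all values are illustrative.

```python
# Percent change of the current 24-day maximum-value NDVI composite
# relative to a historical baseline; negative cells flag apparent
# declines in canopy greenness. Values are illustrative.
import numpy as np

baseline_max_ndvi = np.array([[0.80, 0.75],
                              [0.70, 0.65]])   # e.g. maximum over all prior years
current_max_ndvi = np.array([[0.78, 0.50],
                             [0.69, 0.30]])    # latest 24-day composite

pct_change = 100.0 * (current_max_ndvi - baseline_max_ndvi) / baseline_max_ndvi
print(np.round(pct_change, 1))   # strongly negative pixels are disturbance candidates
```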

  16. Time required to achieve maximum concentration of amikacin in synovial fluid of the distal interphalangeal joint after intravenous regional limb perfusion in horses.

    Science.gov (United States)

    Kilcoyne, Isabelle; Nieto, Jorge E; Knych, Heather K; Dechant, Julie E

    2018-03-01

    OBJECTIVE To determine the maximum concentration (Cmax) of amikacin and time to Cmax (Tmax) in the distal interphalangeal (DIP) joint in horses after IV regional limb perfusion (IVRLP) by use of the cephalic vein. ANIMALS 9 adult horses. PROCEDURES Horses were sedated and restrained in a standing position and then subjected to IVRLP (2 g of amikacin sulfate diluted to 60 mL with saline [0.9% NaCl] solution) by use of the cephalic vein. A pneumatic tourniquet was placed 10 cm proximal to the accessory carpal bone. Perfusate was instilled with a peristaltic pump over a 3-minute period. Synovial fluid was collected from the DIP joint 5, 10, 15, 20, 25, and 30 minutes after IVRLP; the tourniquet was removed after the 20-minute sample was collected. Blood samples were collected from the jugular vein 5, 10, 15, 19, 21, 25, and 30 minutes after IVRLP. Amikacin was quantified with a fluorescence polarization immunoassay. Median Cmax of amikacin and Tmax in the DIP joint were determined. RESULTS 2 horses were excluded because an insufficient volume of synovial fluid was collected. Median Cmax for the DIP joint was 600 μg/mL (range, 37 to 2,420 μg/mL). Median Tmax for the DIP joint was 15 minutes. CONCLUSIONS AND CLINICAL RELEVANCE Tmax of amikacin was 15 minutes after IVRLP in horses and Cmax did not increase > 15 minutes after IVRLP despite maintenance of the tourniquet. Application of a tourniquet for 15 minutes should be sufficient for completion of IVRLP when attempting to achieve an adequate concentration of amikacin in the synovial fluid of the DIP joint.

  17. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
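
    The Mean Energy Model mentioned above is easy to demonstrate concretely: among all distributions over fixed energy levels with a prescribed mean energy, the entropy maximizer is the Gibbs form p_i proportional to exp(-beta * E_i). The sketch below, with illustrative energies and target mean, solves for beta numerically.

```python
# Maximum entropy under a mean-energy constraint: the solution is the
# Gibbs distribution; beta is found by matching the constraint.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # energy levels (illustrative)
target_mean = 1.2                    # moment constraint on <E>

def mean_energy(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p @ E

beta = brentq(lambda b: mean_energy(b) - target_mean, -50.0, 50.0)
p = np.exp(-beta * E)
p /= p.sum()
print("beta =", round(beta, 4), " p =", np.round(p, 4))
print("entropy =", round(float(-(p * np.log(p)).sum()), 4))
```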

  18. Increased Total Anesthetic Time Leads to Higher Rates of Surgical Site Infections in Spinal Fusions.

    Science.gov (United States)

    Puffer, Ross C; Murphy, Meghan; Maloney, Patrick; Kor, Daryl; Nassr, Ahmad; Freedman, Brett; Fogelson, Jeremy; Bydon, Mohamad

    2017-06-01

    A retrospective review of a consecutive series of spinal fusions comparing patient and procedural characteristics of patients who developed surgical site infections (SSIs) after spinal fusion. It is known that increased surgical time (incision to closure) is associated with a higher rate of postoperative SSIs. We sought to determine whether increased total anesthetic time (intubation to extubation) is a factor in the development of SSIs as well. In spine surgery for deformity and degenerative disease, SSI has been associated with operative time, revealing a nearly 10-fold increase in SSI rates in prolonged surgery. Surgical time is associated with infections in other surgical disciplines as well. No studies have reported whether total anesthetic time (intubation to extubation) has an association with SSIs. Surgical records were searched in a retrospective fashion to identify all spine fusion procedures performed between January 2010 and July 2012. All SSIs during that timeframe were recorded and compared with the list of cases performed between 2010 and 2012 in a case-control design. There were 20 (1.7%) SSIs in this fusion cohort. On univariate analyses of operative factors, SSI was significantly associated with total anesthetic time (infection 7.6 ± 0.5 hrs vs. no infection 6.0 ± 0.1 hrs) and with operative time (infection 5.5 ± 0.4 hrs vs. no infection 4.4 ± 0.06 hrs); BMI and ASA status were also associated with infections, whereas level of pathology and emergent surgery were not significant. On multivariate logistic analysis, BMI and total anesthetic time remained independent predictors of SSI whereas ASA status and operative time did not. Increasing BMI and total anesthetic time were independent predictors of SSIs in this cohort of over 1000 consecutive spinal fusions. Level of evidence: 3.

  19. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    Science.gov (United States)

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  20. Time-lapse imaging of neural development: zebrafish lead the way into the fourth dimension.

    Science.gov (United States)

    Rieger, Sandra; Wang, Fang; Sagasti, Alvaro

    2011-07-01

    Time-lapse imaging is often the only way to appreciate fully the many dynamic cell movements critical to neural development. Zebrafish possess many advantages that make them the best vertebrate model organism for live imaging of dynamic development events. This review will discuss technical considerations of time-lapse imaging experiments in zebrafish, describe selected examples of imaging studies in zebrafish that revealed new features or principles of neural development, and consider the promise and challenges of future time-lapse studies of neural development in zebrafish embryos and adults. Copyright © 2011 Wiley-Liss, Inc.

  1. Matching times of leading and following suggest cooperation through direct reciprocity during V-formation flight in ibis.

    Science.gov (United States)

    Voelkl, Bernhard; Portugal, Steven J; Unsöld, Markus; Usherwood, James R; Wilson, Alan M; Fritz, Johannes

    2015-02-17

    One conspicuous feature of several larger bird species is their annual migration in V-shaped or echelon formation. When birds are flying in these formations, energy savings can be achieved by using the aerodynamic up-wash produced by the preceding bird. As the leading bird in a formation cannot profit from this up-wash, a social dilemma arises around the question of who is going to fly in front. To investigate how this dilemma is solved, we studied the flight behavior of a flock of juvenile Northern bald ibis (Geronticus eremita) during a human-guided autumn migration. We show that the amount of time a bird leads a formation is strongly correlated with the time it can itself profit from flying in the wake of another bird. On the dyadic level, birds match the time they spend in the wake of each other by frequent pairwise switches of the leading position. Taken together, these results suggest that bald ibis cooperate by directly taking turns in leading a formation. On the proximate level, we propose that it is mainly the high number of iterations and the immediacy of reciprocation opportunities that favor direct reciprocation. Finally, we found evidence that the animals' propensity to reciprocate in leading has a substantial influence on the size and cohesion of the flight formations.

  2. EU ambition to build the world's leading bioeconomy-Uncertain times demand innovative and sustainable solutions.

    Science.gov (United States)

    Bell, John; Paula, Lino; Dodd, Thomas; Németh, Szilvia; Nanou, Christina; Mega, Voula; Campos, Paula

    2018-01-25

    This article outlines the current context and the development of the European Bioeconomy Strategy. It analyses the current situation, challenges and needs for EU action and concludes with the next steps that the European Commission will undertake to review and update the Bioeconomy Strategy. Bioeconomy offers great opportunities for realising a competitive, circular and sustainable economy with a sound industrial base that is less dependent on fossil carbon. A sustainable bioeconomy also contributes to climate change mitigation, with oceans, forests and soils being major carbon sinks and fostering negative CO2 emissions. The EU has invested significantly in research and innovation in this field and the European Commission is committed to leading on European bioeconomy strategy. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Incidental colonic focal FDG uptake on PET/CT: can the maximum standardized uptake value (SUVmax) guide us in the timing of colonoscopy?

    NARCIS (Netherlands)

    van Hoeij, F. B.; Keijsers, R. G. M.; Loffeld, B. C. A. J.; Dun, G.; Stadhouders, P. H. G. M.; Weusten, B. L. A. M.

    2015-01-01

    In patients undergoing F-18-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from

  4. The dynamics of suspended particulate matter (SPM) and chlorophyll- a from intratidal to annual time scales in a coastal turbidity maximum

    NARCIS (Netherlands)

    van der Hout, C.M.; Witbaard, R.; Bergman, M.J.N.; Duineveld, G.C.A.; Rozemeijer, M.J.C.; Gerkema, T.

    2017-01-01

    The analysis of 1.8 years of data gives an understanding of the response to varying forcing of suspended particulate matter (SPM) and chlorophyll-a (CHL-a) in a coastal turbidity maximum zone (TMZ). Both temporal and vertical concentration variations in the near-bed layer (0–2 m) in the shallow (11

  5. The dynamics of suspended particulate matter (SPM) and chlorophyll-a from intratidal to annual time scales in a coastal turbidity maximum

    NARCIS (Netherlands)

    Hout, van der C.M.; Witbaard, R.; Bergman, M.J.N.; Duineveld, G.C.A.; Rozemeijer, M.J.C.; Gerkema, T.

    2017-01-01

    The analysis of 1.8 years of data gives an understanding of the response to varying forcing of suspended particulate matter (SPM) and chlorophyll-a (CHL-a) in a coastal turbidity maximum zone (TMZ). Both temporal and vertical concentration variations in the near-bed layer (0-2 m) in the shallow

  6. Extending flood forecasting lead time in a large watershed by coupling WRF QPF with a distributed hydrological model

    Science.gov (United States)

    Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen

    2017-03-01

    Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling such products with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain properties; the model parameters were previously optimized with rain gauge observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased relative to the rain gauge precipitation, and a post-processing method is proposed for the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves; this suggests that the model parameters should be optimized with the QPF rather than the rain gauge precipitation. As lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning because of their long lead time and reasonable results.
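
    The abstract does not spell out the post-processing method, so the sketch below shows only one common, simple possibility: a multiplicative bias correction fitted on past event totals and applied to new WRF QPF values. All numbers are illustrative and this is not necessarily the paper's scheme.

```python
# One simple post-processing option (illustrative): a multiplicative
# bias factor fitted on past events and applied to new forecasts.
import numpy as np

qpf_past = np.array([12.0, 30.0, 8.0, 22.0])     # WRF QPF event totals (mm)
gauge_past = np.array([18.0, 41.0, 13.0, 30.0])  # gauge-observed totals (mm)

bias_factor = gauge_past.sum() / qpf_past.sum()  # >1 means WRF underestimates rainfall

qpf_new = np.array([15.0, 26.0])
print("corrected QPF (mm):", np.round(bias_factor * qpf_new, 1))
```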

  7. How does development lead time affect performance over the ramp-up lifecycle? : evidence from the consumer electronics industry

    NARCIS (Netherlands)

    Pufall, A.A.; Fransoo, J.C.; Jong, de A.; Kok, de A.G.

    2012-01-01

    In the fast-paced world of consumer electronics, short development lead times and efficient product ramp-ups are invaluable. The sooner and faster a firm can ramp-up production of a new product, the faster it can start to earn revenues, profit from early market opportunities, establish technology

  8. Observing expertise-related actions leads to perfect time flow estimations.

    Directory of Open Access Journals (Sweden)

    Yin-Hua Chen

    Full Text Available The estimated time of exposure of a picture portraying an action increases as a function of the amount of movement implied in the action represented. This effect suggests that the perceiver creates an internal embodiment of the observed action, as if internally simulating the entire movement sequence. Little is known, however, about the timing accuracy of these internal action simulations, specifically whether they are affected by the level of familiarity and experience that the observer has with the action. In this study we asked professional pianists to reproduce different durations of exposure (shorter or longer than one second) of visual displays both specific (a hand in piano-playing action) and non-specific to their domain of expertise (a hand in finger-thumb opposition, and scrambled pixels), and compared their performance with that of non-pianists. Pianists outperformed non-pianists independently of the time of exposure of the stimuli; remarkably, the group difference was particularly magnified by the pianists' enhanced accuracy and stability only when observing the hand in the act of playing the piano. These results provide the first evidence that, through musical training, pianists create a selective and self-determined dynamic internal representation of an observed movement that allows them to estimate its temporal duration precisely.

  9. Optimal provisioning strategies for slow moving spare parts with small lead times

    NARCIS (Netherlands)

    Teunter, R.H.; Klein Haneveld, W.K.

    When an expensive piece of equipment is bought, spare parts can often be bought at a reduced price. A decision must be made about the initial provisioning of spare parts. Furthermore, if at a certain time the stock drops to zero, because a number of failures have occurred, a decision must be made

  10. Action and Perception Are Temporally Coupled by a Common Mechanism That Leads to a Timing Misperception

    Science.gov (United States)

    Astefanoaei, Corina; Daye, Pierre M.; FitzGibbon, Edmond J.; Creanga, Dorina-Emilia; Rufa, Alessandra; Optican, Lance M.

    2015-01-01

    We move our eyes to explore the world, but visual areas determining where to look next (action) are different from those determining what we are seeing (perception). Whether, or how, action and perception are temporally coordinated is not known. The preparation time course of an action (e.g., a saccade) has been widely studied with the gap/overlap paradigm with temporal asynchronies (TA) between peripheral target onset and fixation point offset (gap, synchronous, or overlap). However, whether the subjects perceive the gap or overlap, and when they perceive it, has not been studied. We adapted the gap/overlap paradigm to study the temporal coupling of action and perception. Human subjects made saccades to targets with different TAs with respect to fixation point offset and reported whether they perceived the stimuli as separated by a gap or overlapped in time. Both saccadic and perceptual report reaction times changed in the same way as a function of TA. The TA dependencies of the time change for action and perception were very similar, suggesting a common neural substrate. Unexpectedly, in the perceptual task, subjects misperceived lights overlapping by less than ∼100 ms as separated in time (overlap seen as gap). We present an attention-perception model with a map of prominence in the superior colliculus that modulates the stimulus signal's effectiveness in the action and perception pathways. This common source of modulation determines how competition between stimuli is resolved, causes the TA dependence of action and perception to be the same, and causes the misperception. PMID:25632126

  11. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  12. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  13. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  14. Effects of aging time on the mechanical properties of Sn–9Zn–1.5Ag–xBi lead-free solder alloys

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chih-Yao [Department of Materials Science and Engineering, National Cheng Kung University, 1 Ta-Hsueh Road, Tainan 70101, Taiwan (China); Hon, Min-Hsiung [Department of Materials Science and Engineering, National Cheng Kung University, 1 Ta-Hsueh Road, Tainan 70101, Taiwan (China); Department of Mechanical Engineering, National Kaohsiung University of Applied Sciences, 415 Chien-Kung Road, Kaohsiung 80782, Taiwan (China); Wang, Moo-Chin, E-mail: mcwang@kmu.edu.tw [Department of Fragrance and Cosmetic Science, Kaohsiung Medical University, 100, Shih-Chuan 1st Road, Kaohsiung 80728, Taiwan (China); Chen, Ying-Ru; Chang, Kuo-Ming; Li, Wang-Long [Institute of Nanotechnology and Microsystems Engineering, National Cheng Kung University, No. 1, University Road, Tainan 70101, Taiwan (China)

    2014-01-05

    Highlights: • The microstructure of these solder alloys is composed of Sn-rich phase and Ag3Sn. • The grain size of Sn–9Zn–1.5Ag–xBi solder alloys increases with aging time. • The maximum yield strength is 112.7 ± 2.2 MPa for the Sn–9Zn–1.5Ag–3Bi solder alloy. • TEM observation shows that Bi appears as fine, oblong-shaped particles. -- Abstract: The effects of aging time on the mechanical properties of the Sn–9Zn–1.5Ag–xBi lead-free solder alloys are investigated using scanning electron microscopy (SEM), X-ray diffraction (XRD), transmission electron microscopy (TEM), selected area electron diffraction (SAED), energy dispersive spectrometry (EDS) and a universal testing machine. The experimental results show that the microstructure of the Sn–9Zn–1.5Ag–xBi solder alloys is composed of Sn-rich phase and AgZn3. No other intermetallic compounds (IMCs) containing Bi were observed in the solder matrix for Sn–9Zn–1.5Ag solder alloys with various Bi contents, before or after aging at 150 °C for different durations. The lattice parameter increases significantly with increasing aging time or Bi addition. The size of the Sn-rich grains increased gradually as aging time increased, but decreased as Bi content increased. The maximum yield strength is 112.7 ± 2.2 MPa, for the Sn–9Zn–1.5Ag–3Bi solder alloy before aging.

  15. Effects of aging time on the mechanical properties of Sn–9Zn–1.5Ag–xBi lead-free solder alloys

    International Nuclear Information System (INIS)

    Liu, Chih-Yao; Hon, Min-Hsiung; Wang, Moo-Chin; Chen, Ying-Ru; Chang, Kuo-Ming; Li, Wang-Long

    2014-01-01

    Highlights: • The microstructure of these solder alloys is composed of Sn-rich phase and Ag3Sn. • The grain size of Sn–9Zn–1.5Ag–xBi solder alloys increases with aging time. • The maximum yield strength is 112.7 ± 2.2 MPa for the Sn–9Zn–1.5Ag–3Bi solder alloy. • TEM observation shows that Bi appears as fine, oblong-shaped particles. -- Abstract: The effects of aging time on the mechanical properties of the Sn–9Zn–1.5Ag–xBi lead-free solder alloys are investigated using scanning electron microscopy (SEM), X-ray diffraction (XRD), transmission electron microscopy (TEM), selected area electron diffraction (SAED), energy dispersive spectrometry (EDS) and a universal testing machine. The experimental results show that the microstructure of the Sn–9Zn–1.5Ag–xBi solder alloys is composed of Sn-rich phase and AgZn3. No other intermetallic compounds (IMCs) containing Bi were observed in the solder matrix for Sn–9Zn–1.5Ag solder alloys with various Bi contents, before or after aging at 150 °C for different durations. The lattice parameter increases significantly with increasing aging time or Bi addition. The size of the Sn-rich grains increased gradually as aging time increased, but decreased as Bi content increased. The maximum yield strength is 112.7 ± 2.2 MPa, for the Sn–9Zn–1.5Ag–3Bi solder alloy before aging.

  16. Effective production planning for purchased part under long lead time and uncertain demand: MRP Vs demand-driven MRP

    Science.gov (United States)

    Shofa, M. J.; Moeis, A. O.; Restiana, N.

    2018-04-01

    MRP is a production planning system suited to deterministic environments. Unfortunately, most elements of production systems, such as customer demand, are stochastic, so MRP is often inappropriate in practice. Demand-Driven MRP (DDMRP) is a new approach to production planning that deals with demand uncertainty. The objective of this paper is to compare MRP and DDMRP for a purchased part under long lead time and uncertain demand in terms of average inventory levels. The evaluation is conducted through a discrete event simulation with long lead time and uncertain demand scenarios, and the performance of DDMRP is then evaluated by comparing its inventory level with that of MRP. As a result, DDMRP is shown to be a more effective production planning approach than MRP in terms of average inventory levels.
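
    A toy sketch of the DDMRP-style replenishment logic being compared: reorder up to the top of the buffer whenever the net flow position (on hand plus on order minus demand) falls to a threshold. Buffer sizes, lead time and the demand process are illustrative assumptions, not the paper's simulation.

```python
# Toy DDMRP-style buffer replenishment under a long supplier lead time
# and uncertain daily demand; reports average on-hand inventory.
import random

random.seed(42)
lead_time, horizon = 10, 200          # days
top_of_buffer, reorder_level = 120, 60

on_hand, pipeline = 80, []            # pipeline holds (arrival_day, qty)
inventory_trace = []

for day in range(horizon):
    on_hand += sum(q for d, q in pipeline if d == day)   # receive arrivals
    pipeline = [(d, q) for d, q in pipeline if d > day]
    demand = random.randint(0, 10)                       # uncertain daily demand
    on_hand = max(on_hand - demand, 0)                   # unmet demand is lost here
    net_flow = on_hand + sum(q for _, q in pipeline)
    if net_flow <= reorder_level:                        # order up to top of buffer
        pipeline.append((day + lead_time, top_of_buffer - net_flow))
    inventory_trace.append(on_hand)

print("average on-hand inventory:", sum(inventory_trace) / horizon)
```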

  17. Waiting time distribution for the first conception leading to a live birth

    International Nuclear Information System (INIS)

    Shrestha, G.; Biswas, S.

    1985-01-01

    An attempt has been made in this paper to obtain a probability model describing the distribution of the waiting time from marriage to first conception, based on data from marriage to first live birth. The novelty of the present approach lies in assuming the marital exposure to be finite, whereas most earlier investigators assumed it to be infinite for mathematical simplicity. The applicability of the model is illustrated with data pertaining to first conceptions and the monthly probability of conception for women married at different ages. (author)
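
    A standard simple special case of such models (not necessarily the paper's formulation) treats fecundability as a constant monthly probability p, so that with a finite exposure of n months the waiting time follows a truncated geometric distribution:

```python
# Truncated geometric waiting time: constant monthly probability of
# conception p over a finite exposure of n months. p and n are
# illustrative, not estimates from the paper.
import numpy as np

p, n = 0.2, 24
k = np.arange(1, n + 1)
pmf = p * (1 - p) ** (k - 1)                 # P(first conception in month k)

print("P(conception within n months):", round(pmf.sum(), 4))
print("mean wait given conception   :",
      round(float((k * pmf).sum() / pmf.sum()), 2), "months")
```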

  18. Sleep restriction may lead to disruption in physiological attention and reaction time

    Directory of Open Access Journals (Sweden)

    Arbind Kumar Choudhary

    2016-07-01

    Full Text Available Sleepiness is the condition in which a person, for some reason, fails to go into a sleep state and has difficulty remaining awake even while carrying out activities. Sleep restriction occurs when an individual fails to get enough sleep due to high work demands. The mechanism linking sleep restriction to underlying deficits in brain physiology is not well understood. The objective of the present study was to investigate mental attention (P300) and reaction time [visual (VRT) and auditory (ART)] among night watchmen on the first (1st), fourth (4th) and seventh (7th) day of a restricted sleep period. After applying exclusion and inclusion criteria, the study was performed among 50 watchmen (age = 18-35 years; n = 50) who provided written informed consent and were divided into two groups: Group I (normal sleep; n = 28), working in the daytime with normal sleep at night (≥8 h), and Group II (restricted sleep; n = 22), working at night with less sleep at night (≤3 h). Statistical significance between the groups was determined by the independent Student's t-test, with the significance level fixed at p≤0.05. We observed that on the 1st day of restricted sleep there was no significant variation between normal and restricted sleep watchmen in Karolinska Sleepiness Scale (KSS) score, VRT or ART, or in the latency and amplitude of the P300. However, on the subsequent 4th and 7th days of restricted sleep, there was a significant increase in KSS score and prolongation of VRT and ART, as well as alteration in the latency and amplitude of the P300 wave, in restricted sleep watchmen compared to normal sleep watchmen. The present findings indicate that loss of sleep has a major impact on mental attention and reaction time among watchmen employed on night shifts. Professional regulations and work schedules should integrate sleep schedules before and during the work period as an essential dimension for a healthy life.

  19. Sleep restriction may lead to disruption in physiological attention and reaction time.

    Science.gov (United States)

    Choudhary, Arbind Kumar; Kishanrao, Sadawarte Sahebrao; Dadarao Dhanvijay, Anup Kumar; Alam, Tanwir

    2016-01-01

    Sleepiness is the condition in which a person, for some reason, fails to go into a sleep state and has difficulty remaining awake even while carrying out activities. Sleep restriction occurs when an individual fails to get enough sleep due to high work demands. The mechanism linking sleep restriction to underlying deficits in brain physiology is not well understood. The objective of the present study was to investigate mental attention (P300) and reaction time [visual (VRT) and auditory (ART)] among night watchmen on the first (1st), fourth (4th) and seventh (7th) day of a restricted sleep period. After applying exclusion and inclusion criteria, the study was performed among 50 watchmen (age = 18-35 years; n = 50) who provided written informed consent and were divided into two groups: Group I (normal sleep; n = 28), working in the daytime with normal sleep at night (≥8 h), and Group II (restricted sleep; n = 22), working at night with less sleep at night (≤3 h). Statistical significance between the groups was determined by the independent Student's t-test, with the significance level fixed at p≤0.05. We observed that on the 1st day of restricted sleep there was no significant variation between normal and restricted sleep watchmen in Karolinska Sleepiness Scale (KSS) score, VRT or ART, or in the latency and amplitude of the P300. However, on the subsequent 4th and 7th days of restricted sleep, there was a significant increase in KSS score and prolongation of VRT and ART, as well as alteration in the latency and amplitude of the P300 wave, in restricted sleep watchmen compared to normal sleep watchmen. The present findings indicate that loss of sleep has a major impact on mental attention and reaction time among watchmen employed on night shifts. Professional regulations and work schedules should integrate sleep schedules before and during the work period as an essential dimension for a healthy life.

  20. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of the area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA), and to evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared

  1. Tempo máximo de fonação de crianças pré-escolares Maximum phonation time in pre-school children

    Directory of Open Access Journals (Sweden)

    Carla Aparecida Cielo

    2008-08-01

    Full Text Available Research on maximum phonation time (MPT) in children has yielded different results, showing that this measure may reflect the neuromuscular and aerodynamic control of voice production and may be used as an indicator for other forms of assessment, both qualitative and objective. AIM: To verify the MPT measures of 23 pre-school children aged between four years and six years and eight months. MATERIALS AND METHODS: The sampling process comprised a questionnaire sent to parents, followed by auditory screening and a perceptual-auditory voice assessment based on the RASAT scale. Data collection consisted of the MPTs. STUDY DESIGN: Prospective cross-sectional. RESULTS: The mean MPT /a/, /s/ and /z/ were 7.42 s, 6.35 s and 7.19 s; MPT /a/ at six years was significantly higher than at four years; as age increased, all MPTs also increased; and the s/z ratio was close to one at all ages. CONCLUSIONS: The MPT values were higher than those reported in national studies and lower than those reported in international studies. In addition, the age ranges analysed in the present study fall within a period of nervous and muscular maturation, with immaturity most evident at four years of age.

  2. Lead pellet retention time and associated toxicity in northern bobwhite quail (Colinus virginianus).

    Science.gov (United States)

    Kerr, Richard; Holladay, Steven; Jarrett, Timothy; Selcer, Barbara; Meldrum, Blair; Williams, Susan; Tannenbaum, Lawrence; Holladay, Jeremy; Williams, Jamie; Gogal, Robert

    2010-12-01

    Birds are exposed to Pb by oral ingestion of spent Pb shot as grit. A paucity of data exists for retention and clearance of these particles in the bird gastrointestinal tract. In the current study, northern bobwhite quail (Colinus virginianus) were orally gavaged with 1, 5, or 10 Pb shot pellets, of 2-mm diameter, and radiographically followed over time. Blood Pb levels and other measures of toxicity were collected, to correlate with pellet retention. Quail dosed with either 5 or 10 pellets exhibited morbidity between weeks 1 and 2 and were removed from further study. Most of the Pb pellets were absorbed or excreted within 14 d of gavage, independent of dose. Pellet size in the ventriculus decreased over time in radiographs, suggesting dissolution caused by the acidic pH. Birds dosed with one pellet showed mean blood Pb levels that exceeded 1,300 µg/dl at week 1, further supporting dissolution in the gastrointestinal tract. Limited signs of toxicity were seen in the one-pellet birds; however, plasma δ-aminolevulinic acid dehydratase (d-ALAD) activity was persistently depressed, suggesting possible impaired hematological function. © 2010 SETAC.

  3. Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-06-15

    A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on permutation entropy estimated values when these ties are symbolized, as it is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts. - Highlights: • Impact of repeated values in a signal when estimating permutation entropy is studied. • Numerical and experimental tests are included for characterizing this limitation. • Non-negligible temporal correlations can be spuriously concluded by repeated values. • Data digitized with low amplitude resolutions could be especially affected. • Analysis with shuffled realizations can help to overcome this limitation.
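
    A small sketch of the estimator under discussion: ordinal-pattern permutation entropy in which equal values are symbolized by order of appearance (here via a stable sort). Rounding the same noise signal to one decimal mimics a low amplitude resolution, and the resulting drop in normalized entropy illustrates the spurious structure the paper warns about. Signals and parameters are illustrative.

```python
# Normalized permutation entropy with ties ranked by order of appearance.
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, d=3):
    # Map each length-d window to its ordinal pattern; a stable sort
    # resolves equal values in favor of the earlier sample.
    patterns = [tuple(np.argsort(x[i:i + d], kind="stable"))
                for i in range(len(x) - d + 1)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(d)))

rng = np.random.default_rng(3)
fine = rng.random(5000)          # i.i.d. noise, effectively no ties
coarse = np.round(fine, 1)       # low amplitude resolution -> many ties

print("normalized PE, fine  :", round(permutation_entropy(fine), 3))
print("normalized PE, coarse:", round(permutation_entropy(coarse), 3))
```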

  4. Why shorter half-times of repair lead to greater damage in pulsed brachytherapy

    International Nuclear Information System (INIS)

    Fowler, J.F.

    1993-01-01

    Pulsed brachytherapy consists of replacing continuous irradiation at low dose-rate with a series of medium dose-rate fractions in the same overall time and to the same total dose. For example, pulses of 1 Gy given every 2 hr or 2 Gy given every 4 hr would deliver the same 70 Gy in 140 hr as continuous irradiation at 0.5 Gy/hr. If higher dose-rates are used, even with gaps between the pulses, the biological effects are always greater. Provided that dose rates in the pulse do not exceed 3 Gy/hr, and provided that pulses are given as often as every 2 hr, the inevitable increases of biological effect are no larger than a few percent (of biologically effective dose or extrapolated response dose). However, these increases are more likely to exceed 10% (and thus become clinically significant) if the half-time of repair of sublethal damage is short (less than 1 hr) rather than long. This somewhat unexpected finding is explained in detail here. The rise and fall of Biologically Effective Dose (and hence of Relative Effectiveness, for a constant dose in each pulse) is calculated during and after single pulses, assuming a range of values of T1/2, the half-time of sublethal damage repair. The area under each curve is proportional to Biologically Effective Dose and therefore to log cell kill. Pulses at 3 Gy/hr do yield greater biological effect (dose x integrated Relative Effectiveness) than lower dose-rate pulses or continuous irradiation at 0.5 Gy/hr. The contrast is greater for the short T1/2 of 0.5 hr than for the longer T1/2 of 1.5 hr. More biological damage will be done (compared with traditional low dose rate brachytherapy) in tissues with short T1/2 (0.1-1 hr) than in tissues with longer T1/2 values. 8 refs., 3 figs
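
    The mechanism can be illustrated numerically with the Lea-Catcheside dose-protraction factor G, which scales the sublethal-damage (beta-type) contribution to effect. The sketch below computes G for the abstract's example schedules (70 Gy in 140 h, continuous at 0.5 Gy/h versus 1 Gy pulses at 3 Gy/h every 2 h) for two repair half-times; with these settings the pulsed-to-continuous ratio comes out well above 1 for T1/2 = 0.5 h and close to 1 for T1/2 = 1.5 h, in line with the argument. The implementation is an illustration, not the paper's calculation.

```python
# Lea-Catcheside protraction factor G for pulsed vs continuous delivery,
# assuming mono-exponential repair with rate mu = ln2 / T_half.
import numpy as np

def protraction_factor(dose_rate, dt_h, t_half_h):
    """G = (2/D^2) * sum_i d_i * sum_{j<i} d_j * exp(-mu (t_i - t_j))."""
    mu = np.log(2.0) / t_half_h
    decay = np.exp(-mu * dt_h)
    d = dose_rate * dt_h                 # dose delivered in each time step
    pending, num = 0.0, 0.0
    for di in d:
        num += di * pending              # interaction with unrepaired damage
        pending = (pending + di) * decay
    return 2.0 * num / d.sum() ** 2

dt = 1.0 / 300.0                                   # hours per step
t = np.arange(0.0, 140.0, dt)
continuous = np.full(t.size, 0.5)                  # 0.5 Gy/h for 140 h = 70 Gy
pulsed = np.where(t % 2.0 < 1.0 / 3.0, 3.0, 0.0)   # 1 Gy at 3 Gy/h every 2 h

for t_half in (0.5, 1.5):
    ratio = (protraction_factor(pulsed, dt, t_half)
             / protraction_factor(continuous, dt, t_half))
    print(f"T1/2 = {t_half} h: G_pulsed / G_continuous = {ratio:.2f}")
```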

  5. Copper, nickel and lead in lichen and tree bark transplants over different periods of time

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Mafalda S. [CIIMAR, Rua dos Bragas, 289, 4050-123 Porto (Portugal)], E-mail: abaptista@fc.up.pt; Vasconcelos, M. Teresa S.D. [CIIMAR, Rua dos Bragas, 289, 4050-123 Porto (Portugal); Chemistry Department, Faculty of Sciences, University of Porto, Rua do Campo Alegre, 687, 4169-071 Porto (Portugal)], E-mail: mtvascon@fc.up.pt; Cabral, Joao Paulo [CIIMAR, Rua dos Bragas, 289, 4050-123 Porto (Portugal); Botany Department, Faculty of Sciences, University of Porto, Rua do Campo Alegre, 1191, 4150-181 Porto (Portugal)], E-mail: jpcabral@fc.up.pt; Freitas, M. Carmo [ITN - Technological and Nuclear Institute, Reactor E.N. 10, 2686-953 Sacavem (Portugal)], E-mail: cfreitas@itn.mcies.pt; Pacheco, Adriano M.G. [CVRM-IST - Technical University of Lisbon, Avenida Rovisco Pais, 1, 1049-001 Lisbon (Portugal)], E-mail: apacheco@ist.utl.pt

    2008-01-15

    This work aimed at comparing the dynamics of atmospheric metal accumulation by the lichen Flavoparmelia caperata and bark of Platanus hybrida over different periods of time. Transplants were exposed in three Portuguese coastal cities. Samples were retrieved (1) every 2 months (discontinuous exposure), or (2) after 2-, 4-, 6-, 8- and 10-month periods (continuous exposure), and analysed for Cu, Ni and Pb. Airborne accumulation of metals was essentially independent of climatic factors. For both biomonitors [Pb] > [Ni] > [Cu] but Pb was the only element for which a consistent pattern of accumulation was observed, with the bark outperforming the lichen. The longest exposure periods hardly ever corresponded to the highest accumulation. This might have been partly because the biomonitors bound and released metals throughout the exposure, each with its own dynamics of accumulation, but both according to the environmental metal availability. - Lichen and tree bark have distinct dynamics of airborne metal accumulation.

  6. Copper, nickel and lead in lichen and tree bark transplants over different periods of time

    International Nuclear Information System (INIS)

    Baptista, Mafalda S.; Vasconcelos, M. Teresa S.D.; Cabral, Joao Paulo; Freitas, M. Carmo; Pacheco, Adriano M.G.

    2008-01-01

    This work aimed at comparing the dynamics of atmospheric metal accumulation by the lichen Flavoparmelia caperata and bark of Platanus hybrida over different periods of time. Transplants were exposed in three Portuguese coastal cities. Samples were retrieved (1) every 2 months (discontinuous exposure), or (2) after 2-, 4-, 6-, 8- and 10-month periods (continuous exposure), and analysed for Cu, Ni and Pb. Airborne accumulation of metals was essentially independent of climatic factors. For both biomonitors [Pb] > [Ni] > [Cu] but Pb was the only element for which a consistent pattern of accumulation was observed, with the bark outperforming the lichen. The longest exposure periods hardly ever corresponded to the highest accumulation. This might have been partly because the biomonitors bound and released metals throughout the exposure, each with its own dynamics of accumulation, but both according to the environmental metal availability. - Lichen and tree bark have distinct dynamics of airborne metal accumulation

  7. Effects of multidisciplinary teamwork on lead times and patient flow in the emergency department: a longitudinal interventional cohort study.

    Science.gov (United States)

    Muntlin Athlin, Asa; von Thiele Schwarz, Ulrica; Farrohknia, Nasim

    2013-11-01

    Long waiting times for emergency care are claimed to be caused by overcrowded emergency departments and non-effective working routines. Teamwork has been suggested as a promising solution to these issues. The aim of the present study was to investigate the effects of teamwork in a Swedish emergency department on lead times and patient flow. The study was set in an emergency department of a university hospital where teamwork, a multi-professional team responsible for the whole care process for a group of patients, was introduced. The study has a longitudinal non-randomized intervention design. Data were collected for five two-week periods over 1.5 years. The first part of the data collection used an ABAB design whereby standard procedure (A) was alternated weekly with teamwork (B). Then, three follow-ups were conducted. At the last follow-up, teamwork was permanently implemented. The outcome measures were: number of patients handled within teamwork time, time to physician, total visit time and number of patients handled within the 4-hour target. A total of 1,838 patient visits were studied. The effect on lead times was only evident at the last follow-up. Findings showed that the number of patients handled within teamwork time was almost equal between the different study periods. At the last follow-up, the median time to physician was significantly decreased by 11 minutes (p = 0.0005) compared to the control phase, and the total visit time was significantly shorter at the last follow-up than in the control phase. Teamwork seems to contribute to the quality improvement of emergency care in terms of small but significant decreases in lead times. However, although efficient work processes such as teamwork are necessary to ensure safe patient care, they are likely not sufficient for bringing about larger decreases in lead times or for meeting the 4-hour target in the emergency department.

  8. Open source and healthcare in Europe - time to put leading edge ideas into practice.

    Science.gov (United States)

    Murray, Peter J; Wright, Graham; Karopka, Thomas; Betts, Helen; Orel, Andrej

    2009-01-01

    Free/Libre and Open Source Software (FLOSS) is a process of software development, a method of licensing and a philosophy. Although FLOSS plays a significant role in several market areas, the impact in the health care arena is still limited. FLOSS is promoted as one of the most effective means for overcoming fragmentation in the health care sector and providing a basis for more efficient, timely and cost effective health care provision. The 2008 European Federation for Medical Informatics (EFMI) Special Topic Conference (STC) explored a range of current and future issues related to FLOSS in healthcare (FLOSS-HC). In particular, there was a focus on health records, ubiquitous computing, knowledge sharing, and current and future applications. Discussions resulted in a list of main barriers and challenges for use of FLOSS-HC. Based on the outputs of this event, the 2004 Open Steps events and subsequent workshops at OSEHC2009 and Med-e-Tel 2009, a four-step strategy has been proposed for FLOSS-HC: 1) a FLOSS-HC inventory; 2) a FLOSS-HC collaboration platform, use case database and knowledge base; 3) a worldwide FLOSS-HC network; and 4) FLOSS-HC dissemination activities. The workshop will further refine this strategy and elaborate avenues for FLOSS-HC from scientific, business and end-user perspectives. To gain acceptance by different stakeholders in the health care industry, different activities have to be conducted in collaboration. The workshop will focus on the scientific challenges in developing methodologies and criteria to support FLOSS-HC in becoming a viable alternative to commercial and proprietary software development and deployment.

  9. An integrated supply chain inventory model with imperfect-quality items, controllable lead time and distribution-free demand

    Directory of Open Access Journals (Sweden)

    Lin Hsien-Jen

    2013-01-01

    In this paper, we consider an integrated vendor-buyer inventory policy for a continuous review model with a random number of defective items in the buyer's arriving order lot and a screening process conducted at a fixed screening rate. We assume that shortages are allowed and partially backlogged on the buyer's side, and that the lead time demand distribution is unknown except for its first two moments. The objective is to apply the minimax distribution-free approach to determine the optimal order quantity, reorder point, lead time, and number of lots delivered in one production run simultaneously, so that the expected total system cost is minimized. Numerical experiments along with sensitivity analysis were performed to illustrate the effects of the parameters on the decisions and the total system cost.
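
    A quick illustration of the minimax distribution-free device such models build on: Scarf's bound states that, for any lead time demand distribution with mean mu and standard deviation sigma, E[(X - r)+] <= 0.5*(sqrt(sigma^2 + (r - mu)^2) - (r - mu)). The Python sketch below grid-searches the reorder point that minimises the resulting worst-case holding-plus-shortage cost; the cost parameters are invented, and the paper's full vendor-buyer model (order quantity, lead time crashing, delivery lots) is not reproduced.

        import numpy as np

        def worst_case_cost(r, mu, sigma, h, p):
            # Scarf's tight upper bound on the expected shortage E[(X - r)+]
            shortage = 0.5 * (np.sqrt(sigma**2 + (r - mu)**2) - (r - mu))
            return h * (r - mu) + p * shortage

        mu, sigma, h, p = 100.0, 20.0, 2.0, 30.0   # hypothetical parameters
        r_grid = np.linspace(mu, mu + 5 * sigma, 2001)
        costs = worst_case_cost(r_grid, mu, sigma, h, p)
        print(f"min-max reorder point = {r_grid[np.argmin(costs)]:.1f}")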

  10. A single-vendor and a single-buyer integrated inventory model with ordering cost reduction dependent on lead time

    Science.gov (United States)

    Vijayashree, M.; Uthayakumar, R.

    2017-09-01

    Lead time is one of the major limits that affect planning at every stage of the supply chain system. In this paper, we study a continuous review inventory model in which ordering cost reduction depends on lead time. The study addresses a two-echelon supply chain problem consisting of a single vendor and a single buyer. Its main contribution is that the integrated total cost of the vendor-buyer system is analyzed under two different types of ordering cost reduction acting dependent on lead time: linear and logarithmic. In both cases, we develop effective solution procedures that determine the optimal order quantity, ordering cost, lead time, and number of deliveries from the single vendor to the single buyer in one production run, so that the integrated total cost incurred is minimized. The mathematical model is solved analytically by minimizing the integrated total cost, and for each case an algorithmic procedure for finding the optimal solution is developed. Numerical examples, solved with Matlab, validate the model, and a sensitivity analysis with respect to the major parameters of the system is included. The results reveal that the proposed integrated inventory model is well suited to supply chain manufacturing systems; graphical representations and a computer flowchart illustrate each model.
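
    The abstract does not reproduce the two ordering-cost functions, so the sketch below assumes common textbook forms, linear and logarithmic in the amount of lead time compressed, and compares the cost-minimising lead time under a simple ordering-plus-holding cost; every parameter is invented.

        import numpy as np

        D, Q = 1000.0, 100.0           # annual demand, order quantity
        h, k, sigma = 5.0, 1.65, 8.0   # holding cost, safety factor, demand std
        A0, L0 = 200.0, 8.0            # ordering cost and lead time before reduction

        def total_cost(L, A):
            # ordering cost plus holding cost of cycle stock and safety stock
            return (D / Q) * A + h * (Q / 2 + k * sigma * np.sqrt(L))

        L = np.linspace(1.0, L0, 200)
        A_lin = A0 * (1 - 0.08 * (L0 - L))      # linear reduction (assumed form)
        A_log = A0 * np.exp(-0.12 * (L0 - L))   # logarithmic reduction (assumed form)
        for name, A in [("linear", A_lin), ("logarithmic", A_log)]:
            i = np.argmin(total_cost(L, A))
            print(f"{name}: best lead time = {L[i]:.2f}, cost = {total_cost(L, A)[i]:.1f}")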

  11. Interpretations of systematic errors in the NCEP Climate Forecast System at lead times of 2, 4, 8, ..., 256 days

    Directory of Open Access Journals (Sweden)

    Siwon Song

    2012-09-01

    The climatology of mean bias errors (relative to 1-day forecasts) was examined in a 20-year hindcast set from version 1 of the Climate Forecast System (CFS), for forecast lead times of 2, 4, 8, 16, ... 256 days, verifying in different seasons. Results mostly confirm the simple expectation that atmospheric model biases should be evident at short lead (2-4 days), while soil moisture errors develop over days-weeks and ocean errors emerge over months. A further simplification is also evident: surface temperature bias patterns have nearly fixed geographical structure, growing with different time scales over land and ocean. The geographical pattern has mostly warm and dry biases over land and cool bias over the oceans, with two main exceptions: (1) deficient stratocumulus clouds cause warm biases in eastern subtropical oceans, and (2) high latitude land is too cold in boreal winter. Further study of the east Pacific cold tongue-Intertropical Convergence Zone (ITCZ) complex shows a possible interaction between a rapidly-expressed atmospheric model bias (poleward shift of deep convection beginning at day 2) and slow ocean dynamics (erroneously cold upwelling along the equator) at leads > 1 month. Further study of the high latitude land cold bias shows that it is a thermal wind balance aspect of the deep polar vortex, not just a near-surface temperature error under the wintertime inversion, suggesting that its development time scale of weeks to months may involve long timescale processes in the atmosphere, not necessarily in the land model. Winter zonal wind errors are small in magnitude, but a refractive index map shows that this can cause modest errors in Rossby wave ducting. Finally, as a counterpoint to our initial expectations about error growth, a case of non-monotonic error growth is shown: velocity potential bias grows with lead on a time scale of weeks, then decays over months. It is hypothesized that compensations between land and ocean errors may explain this non-monotonic behaviour.

  12. Estimate of overdiagnosis of breast cancer due to mammography after adjustment for lead time. A service screening study in Italy

    Science.gov (United States)

    Paci, Eugenio; Miccinesi, Guido; Puliti, Donella; Baldazzi, Paola; De Lisi, Vincenzo; Falcini, Fabio; Cirilli, Claudia; Ferretti, Stefano; Mangone, Lucia; Finarelli, Alba Carola; Rosso, Stefano; Segnan, Nereo; Stracci, Fabrizio; Traina, Adele; Tumino, Rosario; Zorzi, Manuel

    2006-01-01

    Introduction: An excess of incidence rates is the expected consequence of service screening. The aim of this paper is to estimate the share attributable to overdiagnosis in the breast cancer screening programmes in Northern and Central Italy. Methods: All patients with breast cancer diagnosed at ages 50 to 74 who were resident in screening areas in the six years before and five years after the start of the screening programme were included. We calculated a corrected-for-lead-time number of observed cases for each calendar year: the number of observed incident cases was reduced by the number of screen-detected cases in that year and incremented by the estimated number of screen-detected cases that would have arisen clinically in that year. Results: In total we included 13,519 and 13,999 breast cancer cases diagnosed in the pre-screening and screening years, respectively. The excess ratio of observed to predicted in situ and invasive cases was 36.2%. After correction for lead time the excess ratio was 4.6% (95% confidence interval 2 to 7%), and for invasive cases only it was 3.2% (95% confidence interval 1 to 6%). Conclusion: The remaining excess of cancers after individual correction for lead time was lower than 5%. PMID:17147789
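
    The correction described in the Methods is simple arithmetic per calendar year: subtract the screen-detected cases, add back the screen-detected cases estimated to have surfaced clinically that year, and compare with the predicted pre-screening count. A minimal sketch with hypothetical counts, not the study's data:

        def corrected_excess_ratio(observed, screen_detected, clinical_emergence, predicted):
            # corrected-for-lead-time observed count for one calendar year
            corrected = observed - screen_detected + clinical_emergence
            return corrected / predicted - 1.0

        # invented yearly counts for illustration
        print(f"excess ratio = {corrected_excess_ratio(1400, 420, 330, 1250):.1%}")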

  13. Modeling and simulation of blast-induced, early-time intracranial wave physics leading to traumatic brain injury.

    Energy Technology Data Exchange (ETDEWEB)

    Ford, Corey C. (University of New Mexico, Albuquerque, NM); Taylor, Paul Allen

    2008-02-01

    The objective of this modeling and simulation study was to establish the role of stress wave interactions in the genesis of traumatic brain injury (TBI) from exposure to explosive blast. A high resolution (1 mm³ voxels), 5-material model of the human head was created by segmentation of color cryosections from the Visible Human Female dataset. Tissue material properties were assigned from literature values. The model was inserted into the shock physics wave code CTH and subjected to a simulated blast wave of 1.3 MPa (13 bars) peak pressure from anterior, posterior and lateral directions. Three-dimensional plots of maximum pressure, volumetric tension, and deviatoric (shear) stress demonstrated significant differences related to the incident blast geometry. In particular, the calculations revealed focal brain regions of elevated pressure and deviatoric (shear) stress within the first 2 milliseconds of blast exposure. Calculated maximum levels of 15 kPa deviatoric stress, 3.3 MPa pressure, and 0.8 MPa volumetric tension were observed before the onset of significant head accelerations. Over a 2 msec time course, the head model moved only 1 mm in response to the blast loading. Doubling the blast strength changed the resulting intracranial stress magnitudes but not their distribution. We conclude that stress localization, due to early time wave interactions, may contribute to the development of multifocal axonal injury underlying TBI. We propose that a contribution to traumatic brain injury from blast exposure, and most likely blunt impact, can occur on a time scale shorter than previous model predictions and before the onset of the linear or rotational accelerations traditionally associated with the development of TBI.

  14. Lead-time reduction utilizing lean tools applied to healthcare: the inpatient pharmacy at a local hospital.

    Science.gov (United States)

    Al-Araidah, Omar; Momani, Amer; Khasawneh, Mohammad; Momani, Mohammed

    2010-01-01

    The healthcare arena, much like the manufacturing industry, benefits from many aspects of the Toyota lean principles. Lean thinking contributes to reducing or eliminating non-value-added time, money, and energy in healthcare. In this paper, we apply selected principles of lean management aiming at reducing the wasted time associated with drug dispensing at an inpatient pharmacy at a local hospital. A thorough investigation of the drug dispensing process revealed unnecessary complexities that contribute to delays in delivering medications to patients. We utilize DMAIC (Define, Measure, Analyze, Improve, Control) and 5S (Sort, Set-in-order, Shine, Standardize, Sustain) principles to identify and reduce the wastes that lengthen the lead time in healthcare operations at the pharmacy under study. The results obtained from the study revealed potential savings of more than 45% in the drug dispensing cycle time.

  15. Timing of Conduction Abnormalities Leading to Permanent Pacemaker Insertion After Transcatheter Aortic Valve Implantation-A Single-Centre Review.

    Science.gov (United States)

    Ozier, Daniel; Zivkovic, Nevena; Elbaz-Greener, Gabby; Singh, Sheldon M; Wijeysundera, Harindra C

    2017-12-01

    Transcatheter aortic valve implantation (TAVI) is the preferred alternative to traditional surgical aortic valve replacement; however, it remains expensive. One potential driver of cost is the need for postprocedural monitoring for conduction abnormalities after TAVI. Given the paucity of literature on the optimal length of monitoring, we aimed to determine when clinically significant conduction abnormalities leading to permanent pacemaker (PPM) insertion after TAVI were first identified. We identified all patients in the Sunnybrook Health Sciences Centre TAVI registry (Toronto, Canada) who underwent TAVI between 2009 and 2016, excluding those with pre-existing PPMs or those who underwent emergency open heart surgery. Through dedicated chart review, the timing and type of conduction abnormalities leading to PPM were recorded. Patients were divided according to the timing of the conduction abnormality: during the procedure vs after the procedure. The overall PPM insertion rate was 15.6% (80 of 512 cases), with all but 1 patient receiving a PPM for class I indications. PPMs were inserted for complete heart block/high-grade atrioventricular block (91.3%), severe sinus node dysfunction (3.8%), and alternating bundle branch block (3.8%). Of these conduction abnormalities, 55.0% occurred during the procedure (intraprocedural; n = 44 patients). The mean time to the development of a conduction abnormality necessitating PPM was 1.2 days (interquartile range, 0-2 days), with 88.8% occurring within 72 hours of the procedure (n = 71 patients). The majority of conduction abnormalities leading to PPM insertion after TAVI occur in the very early periprocedural period, suggesting that early mobilization and discharge will be safe from a conduction standpoint. Copyright © 2017 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  16. Real-Time 12-Lead High-Frequency QRS Electrocardiography for Enhanced Detection of Myocardial Ischemia and Coronary Artery Disease

    Science.gov (United States)

    Schlegel, Todd T.; Kulecz, Walter B.; DePalma, Jude L.; Feiveson, Alan H.; Wilson, John S.; Rahman, M. Atiar; Bungo, Michael W.

    2004-01-01

    Several studies have shown that diminution of the high-frequency (HF; 150-250 Hz) components present within the central portion of the QRS complex of an electrocardiogram (ECG) is a more sensitive indicator for the presence of myocardial ischemia than are changes in the ST segments of the conventional low-frequency ECG. However, until now, no device has been capable of displaying, in real time on a beat-to-beat basis, changes in these HF QRS ECG components in a continuously monitored patient. Although several software programs have been designed to acquire the HF components over the entire QRS interval, such programs have involved laborious off-line calculations and postprocessing, limiting their clinical utility. We describe a personal computer-based ECG software program developed recently at the National Aeronautics and Space Administration (NASA) that acquires, analyzes, and displays HF QRS components in each of the 12 conventional ECG leads in real time. The system also updates these signals and their related derived parameters in real time on a beat-to-beat basis for any chosen monitoring period and simultaneously displays the diagnostic information from the conventional (low-frequency) 12-lead ECG. The real-time NASA HF QRS ECG software is being evaluated currently in multiple clinical settings in North America. We describe its potential usefulness in the diagnosis of myocardial ischemia and coronary artery disease.
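
    As a rough illustration of the signal processing involved, the sketch below band-passes a synthetic single-lead trace to the 150-250 Hz band and reports its RMS voltage. The NASA system works beat-to-beat on all 12 leads of signal-averaged data, which this toy example does not attempt.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def hf_qrs_rms(ecg, fs, band=(150.0, 250.0)):
            # 4th-order Butterworth band-pass, applied zero-phase
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
            return np.sqrt(np.mean(filtfilt(b, a, ecg) ** 2))

        fs = 1000                         # 1 kHz sampling
        t = np.arange(fs) / fs
        # toy QRS-like wavelet (Gaussian envelope, 180 Hz carrier) plus noise
        ecg = np.exp(-((t - 0.5) ** 2) / 2e-4) * np.sin(2 * np.pi * 180 * t) \
              + 0.05 * np.random.randn(fs)
        print(f"HF QRS RMS = {hf_qrs_rms(ecg, fs):.4f} (arbitrary units)")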

  17. A maximum entropy method to compute the 13NH3 pulmonary transit time from right to left ventricle in cardiac PET studies

    DEFF Research Database (Denmark)

    Steenstrup, Stig; Hove, Jens D; Kofoed, Klaus

    2002-01-01

    The distribution function of pulmonary transit times (fPTT) contains information on the transit time of blood through the lungs and the dispersion in transit times. Most of the previous studies have used specific functional forms with adjustable parameters to characterize the fPTT. It is the purpose of the present work to compute the fPTT with a maximum entropy method, without assuming a particular functional form. With this approach, we were able to accurately identify a two-peaked transfer function, which may theoretically be seen in patients with pulmonary disease confined to one lung. Transit time values for [13N]-ammonia were produced by applying the algorithm to PET studies from normal volunteers.

  18. Properties of three-body decay functions derived with time-like jet calculus beyond leading order

    International Nuclear Information System (INIS)

    Sugiura, Tetsuya

    2002-01-01

    Three-body decay functions in time-like parton branching are calculated using the jet calculus to the next-to-leading logarithmic (NLL) order in perturbative quantum chromodynamics (QCD). The phase space contributions from each of the ladder diagrams and interference diagrams are presented. We correct part of the results for the three-body decay functions calculated previously by two groups. Employing our new results, the properties of the three-body decay functions in the regions of soft partons are examined numerically. Furthermore, we examine the contribution of the three-body decay functions modified by the restriction resulting from the kinematical boundary of the phase space for two-body decay in the parton shower model. This restriction leads to some problems for the parton shower model. For this reason, we propose a new restriction introduced by the kinematical boundary of the phase space for two-body decay. (author)

  19. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  20. The dynamics of suspended particulate matter (SPM) and chlorophyll-a from intratidal to annual time scales in a coastal turbidity maximum

    Science.gov (United States)

    van der Hout, C. M.; Witbaard, R.; Bergman, M. J. N.; Duineveld, G. C. A.; Rozemeijer, M. J. C.; Gerkema, T.

    2017-09-01

    The analysis of 1.8 years of data gives an understanding of the response to varying forcing of suspended particulate matter (SPM) and chlorophyll-a (CHL-a) in a coastal turbidity maximum zone (TMZ). Both temporal and vertical concentration variations in the near-bed layer (0-2 m) in the shallow (11 m deep) coastal zone at 1 km off the Dutch coast are shown. Temporal variations in the concentration of both parameters are found on tidal and seasonal scales, and a marked response to episodic events (e.g. storms). The seasonal cycle in the near-bed CHL-a concentration is determined by the spring bloom. The role of the wave climate as the primary forcing in the SPM seasonal cycle is discussed. The tidal current provides a background signal, generated predominantly by local resuspension and settling and a minor role is for advection in the cross-shore and the alongshore direction. We tested the logarithmic Rouse profile to the vertical profiles of both the SPM and the CHL-a data, with respectively 84% and only 2% success. The resulting large percentage of low Rouse numbers for the SPM profiles suggest a mixed suspension is dominant in the TMZ, i.e. surface SPM concentrations are in the same order of magnitude as near-bed concentrations.
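
    The Rouse profile referred to above balances upward turbulent mixing against particle settling, C(z)/C(a) = [((h - z)/z) * (a/(h - a))]**P, so the Rouse number P can be fitted by linear regression in log space. A minimal sketch with synthetic data (all numbers invented); a small fitted P corresponds to the mixed suspension the authors report dominating the TMZ.

        import numpy as np

        def fit_rouse(z, c, h, a):
            # regress log relative concentration on the log Rouse coordinate
            x = np.log(((h - z) / z) * (a / (h - a)))
            y = np.log(c / np.interp(a, z, c))   # concentration relative to level a
            P, _ = np.polyfit(x, y, 1)
            return P

        h, a = 11.0, 0.5                      # water depth and reference height (m)
        z = np.array([0.5, 1.0, 1.5, 2.0])    # near-bed measurement heights (m)
        c = (((h - z) / z) * (a / (h - a))) ** 0.4 * 50.0   # synthetic profile, P = 0.4
        print(f"recovered Rouse number = {fit_rouse(z, c, h, a):.2f}")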

  1. Responsive and resilient supply chain network design under operational and disruption risks with delivery lead-time sensitive customers

    DEFF Research Database (Denmark)

    Fattahi, Mohammad; Govindan, Kannan; Keyvanshokooh, Esmaeil

    2017-01-01

    We address a multi-period supply chain (SC) network design where demands of customers depend on the facilities serving them, based on their delivery lead-times. Potential customer demands are stochastic, and facilities' capacity varies randomly because of possible disruptions. Accordingly, we develop a multi-stage stochastic program and model the effect of disruptions on facilities' capacity. The SC responsiveness risk is limited and, to obtain a resilient network, both mitigation and contingency strategies are exploited. Computational results on a real-life case study and randomly generated problem instances demonstrate the model's applicability, the performance of the risk-measurement policies, and the influence of mitigation and contingency strategies on the SC's resiliency.

  2. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and avoids the retrieval and analysis of data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record the accelerations to which commodities are subjected during transportation on trucks.

  3. Water quality criteria for lead

    Energy Technology Data Exchange (ETDEWEB)

    Nagpal, N.K.

    1987-01-01

    This report is one in a series that establishes water quality criteria for British Columbia. The report sets criteria for lead to protect a number of water uses, including drinking water, freshwater and marine aquatic life, wildlife, livestock, irrigation, and recreation. The criteria are set as either maximum concentrations of total lead that should not be exceeded at any time, or average concentrations that should not be exceeded over a 30-day period. Actual values are summarized.

  4. Multi-Train Energy Saving for Maximum Usage of Regenerative Energy by Dwell Time Optimization in Urban Rail Transit Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Lin

    2016-03-01

    With its large capacity, urban rail transit has very high total energy consumption; energy-saving operation is therefore quite meaningful. The effective use of regenerative braking energy is the mainstream method for improving energy-saving efficiency. This paper examines the optimization of train dwell time and builds a multiple-train operation model for energy conservation of the power supply system. By changing the dwell time, the braking energy can be absorbed and utilized by other traction trains as efficiently as possible. A genetic algorithm is proposed for the optimization, based on the current schedule. To validate the correctness and effectiveness of the optimization, a real case is then studied: actual data from the Beijing subway Yizhuang Line are employed to perform the simulation, and the results indicate that the dwell time optimization method is effective.
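
    A toy version of the approach: a genetic algorithm perturbs dwell times so that the braking window of each arriving train overlaps the traction window of departing ones. The fitness function below is a crude stand-in for the paper's power-supply simulation, and all schedule numbers are invented.

        import random

        N, POP, GEN = 6, 40, 80
        base_arrive = [60 * i for i in range(N)]      # nominal arrival times (s)

        def fitness(dwells):
            # reward each second where a braking window (10 s before an arrival)
            # overlaps a traction window (10 s after a departure)
            score = 0
            for i in range(N):
                brake = set(range(base_arrive[i] - 10, base_arrive[i]))
                for j in range(N):
                    if i != j:
                        dep = base_arrive[j] + dwells[j]
                        score += len(brake & set(range(dep, dep + 10)))
            return score

        def mutate(ind):
            child = ind[:]
            k = random.randrange(N)
            child[k] = min(60, max(30, child[k] + random.choice([-5, 5])))
            return child

        pop = [[random.randrange(30, 61, 5) for _ in range(N)] for _ in range(POP)]
        for _ in range(GEN):   # elitist selection plus mutation
            pop.sort(key=fitness, reverse=True)
            elite = pop[:POP // 2]
            pop = elite + [mutate(random.choice(elite)) for _ in range(POP // 2)]
        print("best dwell times (s):", max(pop, key=fitness))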

  5. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  6. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  7. An increased rectal maximum tolerable volume and long anal canal are associated with poor short-term response to biofeedback therapy for patients with anismus with decreased bowel frequency and normal colonic transit time.

    Science.gov (United States)

    Rhee, P L; Choi, M S; Kim, Y H; Son, H J; Kim, J J; Koh, K C; Paik, S W; Rhee, J C; Choi, K W

    2000-10-01

    Biofeedback is an effective therapy for a majority of patients with anismus. However, a significant proportion of patients still fail to respond to biofeedback, and little has been known about the factors that predict the response. We evaluated the factors associated with poor response to biofeedback. Biofeedback therapy was offered to 45 patients with anismus with decreased bowel frequency (less than three times per week) and normal colonic transit time. Any differences in demographics, symptoms, and parameters of anorectal physiologic tests were sought between responders (in whom bowel frequency increased to three times or more per week after biofeedback) and nonresponders (in whom bowel frequency remained less than three times per week). Thirty-one patients (68.9 percent) responded to biofeedback and 14 patients (31.1 percent) did not. Anal canal length was longer in nonresponders than in responders (4.53 +/- 0.5 vs. 4.08 +/- 0.56 cm; P = 0.02), and rectal maximum tolerable volume was larger in nonresponders than in responders (361 +/- 87 vs. 302 +/- 69 ml; P = 0.02). Anal canal length and rectal maximum tolerable volume showed significant differences between responders and nonresponders on multivariate analysis (P = 0.027 and P = 0.034, respectively). This study showed that a long anal canal and an increased rectal maximum tolerable volume are associated with poor short-term response to biofeedback for patients with anismus with decreased bowel frequency and normal colonic transit time.

  8. A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time

    Science.gov (United States)

    Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong

    2013-06-01

    In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. We assume that a failed service machine after repair will not be 'as good as new', and that the spare service machine for replacement is available only by an order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under such assumptions, we apply the matrix-analytic method to develop the steady-state probabilities of the system, from which we obtain some system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and a direct search method is implemented to determine the optimal value of N that minimises the average cost rate.
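
    Short of the matrix-analytic treatment, the long-run average cost rate of such a policy can be approximated by renewal-reward simulation. The sketch below assumes Lam's geometric process for operating and repair times (operating times shrink by a factor a, repair times grow by b) and an exponential stand-in for the phase-type procurement lead time; every rate and cost is invented.

        import random

        a, b = 1.1, 1.05                      # geometric process ratios
        mean_x1, mean_y1 = 100.0, 5.0         # first operating / repair time means
        c_repair, c_down, c_machine = 2.0, 10.0, 500.0

        def cycle(N):
            # one replacement cycle under the (N-1, N) policy
            t = cost = 0.0
            for k in range(1, N):             # repair at the first N-1 failures
                t += random.expovariate(1.0) * mean_x1 / a ** (k - 1)
                y = random.expovariate(1.0) * mean_y1 * b ** (k - 1)
                t += y
                cost += c_repair * y
            t += random.expovariate(1.0) * mean_x1 / a ** (N - 1)   # N-th failure
            lead = random.expovariate(1.0 / 20.0)   # procurement lead time
            t += lead
            cost += c_down * lead + c_machine       # downtime plus new machine
            return cost, t

        for N in (2, 4, 6, 8):
            s = [cycle(N) for _ in range(20000)]
            rate = sum(c for c, _ in s) / sum(t for _, t in s)
            print(f"N = {N}: average cost rate = {rate:.3f}")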

  9. Proposal of lead time reduction in the thermoelectric products line of a small company in the State of Sao Paulo

    Directory of Open Access Journals (Sweden)

    Fernanda Veríssimo Soulé

    2016-03-01

    Quick Response Manufacturing (QRM) is a way manufacturing companies may increase their flexibility, and manufacturing flexibility is a key to differentiation and enhanced competitiveness. There is little empirical research on how small and medium-sized enterprises (SMEs) may benefit from QRM, which may hamper the adoption of this approach by these important actors in our economy. This article presents the results of a project that applied QRM to reduce the lead time of a small company located in the state of Sao Paulo. It was proposed to (a) balance the throughputs of slow operations, reducing production batches by 50%; (b) implement cellular manufacturing and improvements in the management of work in process, using the POLCA system and visual management; and (c) implement integrated sales and operations planning (S&OP) and rules for prioritization of orders. It was identified that the proposal would generate a lead time reduction from 39 to 21.3 days and a decrease of at least 51% in raw material stock costs. During the research, the following conclusions could be drawn: (a) problems in management, investment capacity, and relationships with suppliers are frequent in family-owned SMEs; (b) the QRM approach can be adapted to work within this environment; and (c) the knowledge developed in academia can be an important tool to help family-owned SMEs surmount these obstacles.

  10. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and a calibration system.

  11. Probabilistic inference under time pressure leads to a cortical-to-subcortical shift in decision evidence integration.

    Science.gov (United States)

    Oh-Descher, Hanna; Beck, Jeffrey M; Ferrari, Silvia; Sommer, Marc A; Egner, Tobias

    2017-11-15

    Real-life decision-making often involves combining multiple probabilistic sources of information under finite time and cognitive resources. To mitigate these pressures, people "satisfice", foregoing a full evaluation of all available evidence to focus on a subset of cues that allow for fast and "good-enough" decisions. Although this form of decision-making likely mediates many of our everyday choices, very little is known about the way in which the neural encoding of cue information changes when we satisfice under time pressure. Here, we combined human functional magnetic resonance imaging (fMRI) with a probabilistic classification task to characterize neural substrates of multi-cue decision-making under low (1500 ms) and high (500 ms) time pressure. Using variational Bayesian inference, we analyzed participants' choices to track and quantify cue usage under each experimental condition, which was then applied to model the fMRI data. Under low time pressure, participants performed near-optimally, appropriately integrating all available cues to guide choices. Both cortical (prefrontal and parietal cortex) and subcortical (hippocampal and striatal) regions encoded individual cue weights, and activity linearly tracked trial-by-trial variations in the amount of evidence and decision uncertainty. Under increased time pressure, participants adaptively shifted to using a satisficing strategy by discounting the least informative cue in their decision process. This strategic change in decision-making was associated with an increased involvement of the dopaminergic midbrain, striatum, thalamus, and cerebellum in representing and integrating cue values. We conclude that satisficing the probabilistic inference process under time pressure leads to a cortical-to-subcortical shift in the neural drivers of decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
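
    A minimal sketch of the cue integration and satisficing contrast described above, assuming independent binary cues with known likelihood ratios; the study's actual cue weights come from variational Bayesian analysis of choices, which is not reproduced here.

        import numpy as np

        log_lr = np.log(np.array([4.0, 3.0, 2.0, 1.2]))   # assumed per-cue likelihood ratios

        def p_category_A(cues, use):
            # cues: +1/-1 evidence per cue; use: mask of cues entering the decision
            logit = np.sum(cues[use] * log_lr[use])
            return 1.0 / (1.0 + np.exp(-logit))

        cues = np.array([+1, -1, +1, -1])
        full = p_category_A(cues, np.ones(4, dtype=bool))
        satisficed = p_category_A(cues, log_lr > log_lr.min())   # drop weakest cue
        print(f"full integration P(A) = {full:.2f}; satisficing P(A) = {satisficed:.2f}")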

  12. Solar Atmosphere to Earth's Surface: Long Lead Time dB/dt Predictions with the Space Weather Modeling Framework

    Science.gov (United States)

    Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.

    2017-12-01

    The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known whether this nascent capability can translate into skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first method is an empirical, observationally based system to estimate the plasma characteristics. The magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a cylindrical flux rope geometry locally around Earth's trajectory. The remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward-based modelling. The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfven Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1 AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long lead time predictions of dB/dt are compared to model results that are driven by L1 solar wind observations. Both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes.

  13. Evaluation of Integrated Time-Temperature Effect in Pyrolysis Process of Historically Contaminated Soils with Cadmium (Cd and Lead (Pb

    Directory of Open Access Journals (Sweden)

    Bulmău C

    2013-04-01

    It is already known that heavy metal pollution causes significant concern for human and ecosystem health. At the European level, heavy metals represent 37.3% of the main contaminants affecting soils (EEA, 2007). This paper illustrates results obtained in the framework of laboratory experiments concerning the evaluation of the integrated time-temperature effect in a pyrolysis process applied to soil contaminated in two different ways: soil historically contaminated with heavy metals from one of the most polluted areas of Romania, and soil artificially contaminated with PCB-containing transformer oil. In particular, the authors focused on evaluating the efficiency of pyrolysis in removing lead (Pb) and cadmium (Cd) from the contaminated soil. The experimental study evaluated two important parameters of the studied remediation methodology: the thermal process temperature and the retention time of the contaminated soils in the reactor. The remediation treatments were performed in a rotary kiln reactor, taking into account three process temperatures (400°C, 600°C and 800°C) and two retention times (30 min and 60 min). The completed analyses focused on the pyrolysis solid and gas products; consequently, both the ash and the gas obtained after the pyrolysis process were subjected to chemical analyses.

  14. Role of sintering time, crystalline phases and symmetry in the piezoelectric properties of lead-free KNN-modified ceramics

    International Nuclear Information System (INIS)

    Rubio-Marcos, F.; Marchet, P.; Merle-Mejean, T.; Fernandez, J.F.

    2010-01-01

    Lead-free KNN-modified piezoceramics of the system (Li,Na,K)(Nb,Ta,Sb)O₃ were prepared by conventional solid-state sintering. The X-ray diffraction patterns revealed a perovskite phase, together with some minor secondary phase, which was assigned to K₃LiNb₆O₁₇, a tetragonal tungsten-bronze (TTB). A structural evolution toward a pure tetragonal structure with increasing sintering time was observed, associated with the decrease of the TTB phase. A correlation between higher tetragonality and higher piezoelectric response was clearly evidenced. Contrary to the case of LiTaO₃-modified KNN, very large abnormal grains with TTB structure were not detected. As a consequence, the simultaneous modification by tantalum and antimony seems to induce during sintering a different behaviour from that of LiTaO₃-modified KNN.

  15. Role of sintering time, crystalline phases and symmetry in the piezoelectric properties of lead-free KNN-modified ceramics

    Energy Technology Data Exchange (ETDEWEB)

    Rubio-Marcos, F., E-mail: frmarcos@icv.csic.es [Electroceramic Department, Instituto de Ceramica y Vidrio, CSIC, Kelsen 5, 28049 Madrid (Spain); Marchet, P.; Merle-Mejean, T. [SPCTS, UMR 6638 CNRS, Universite de Limoges, 123, Av. A. Thomas, 87060 Limoges (France); Fernandez, J.F. [Electroceramic Department, Instituto de Ceramica y Vidrio, CSIC, Kelsen 5, 28049 Madrid (Spain)

    2010-09-01

    Lead-free KNN-modified piezoceramics of the system (Li,Na,K)(Nb,Ta,Sb)O₃ were prepared by conventional solid-state sintering. The X-ray diffraction patterns revealed a perovskite phase, together with some minor secondary phase, which was assigned to K₃LiNb₆O₁₇, a tetragonal tungsten-bronze (TTB). A structural evolution toward a pure tetragonal structure with increasing sintering time was observed, associated with the decrease of the TTB phase. A correlation between higher tetragonality and higher piezoelectric response was clearly evidenced. Contrary to the case of LiTaO₃-modified KNN, very large abnormal grains with TTB structure were not detected. As a consequence, the simultaneous modification by tantalum and antimony seems to induce during sintering a different behaviour from that of LiTaO₃-modified KNN.

  16. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  17. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation.
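
    For reference, the non-robust baseline, the first canonical correlation, is the largest singular value of the product of orthonormal bases of the two centred data blocks; a short sketch:

        import numpy as np

        def first_canonical_correlation(X, Y):
            # orthonormal basis of each centred block via the thin SVD
            basis = lambda Z: np.linalg.svd(Z - Z.mean(0), full_matrices=False)[0]
            return np.linalg.svd(basis(X).T @ basis(Y), compute_uv=False)[0]

        rng = np.random.default_rng(0)
        z = rng.standard_normal((500, 1))      # shared latent signal
        X = np.hstack([z + 0.5 * rng.standard_normal((500, 1)),
                       rng.standard_normal((500, 2))])
        Y = np.hstack([z + 0.5 * rng.standard_normal((500, 1)),
                       rng.standard_normal((500, 1))])
        print(f"first canonical correlation = {first_canonical_correlation(X, Y):.2f}")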

  18. The Impact of Weather Forecasts of Various Lead Times on Snowmaking Decisions Made for the 2010 Vancouver Olympic Winter Games

    Science.gov (United States)

    Doyle, Chris

    2014-01-01

    The Vancouver 2010 Winter Olympics were held from 12 to 28 February 2010, and the Paralympic events followed 2 weeks later. During the Games, the weather posed a grave threat to the viability of one venue and created significant complications for the event schedule at others. Forecasts of weather with lead times ranging from minutes to days helped organizers minimize disruptions to sporting events and helped ensure all medal events were successfully completed. Of comparable importance, however, were the scenarios and forecasts of probable weather for the winter in advance of the Games. Forecasts of mild conditions at the time of the Games helped the Games' organizers mitigate what would have been very serious potential consequences for at least one venue. Snowmaking was one strategy employed well in advance of the Games to prepare for the expected conditions. This short study will focus on how operational decisions were made by the Games' organizers on the basis of both climatological and snowmaking forecasts during the pre-Games winter. An attempt will be made to quantify, economically, the value of some of the snowmaking forecasts made for the Games' operators. The results obtained indicate that although the economic value of the snowmaking forecast was difficult to determine, the Games' organizers valued the forecast information greatly. This suggests that further development of probabilistic forecasts for applications like pre-Games snowmaking would be worthwhile.

  19. Pulse of inflammatory proteins in the pregnant uterus of European polecats (Mustela putorius) leading to the time of implantation.

    Science.gov (United States)

    Lindeberg, Heli; Burchmore, Richard J S; Kennedy, Malcolm W

    2017-03-01

    Uterine secretory proteins protect the uterus and conceptuses against infection, facilitate implantation, control cellular damage resulting from implantation, and supply pre-implantation embryos with nutrients. Unlike in humans, the early conceptus of the European polecat (Mustela putorius; ferret) grows and develops free in the uterus until implanting at about 12 days after mating. We found that the proteins appearing in polecat uteri changed dramatically with time leading to implantation. Several of these proteins have also been found in pregnant uteri of other eutherian mammals. However, we found a combination of two increasingly abundant proteins that have not been recorded before in pre-placentation uteri. First, the broad-spectrum proteinase inhibitor α2-macroglobulin rose to dominate the protein profile by the time of implantation. Its functions may be to limit damage caused by the release of proteinases during implantation or infection, and to control other processes around sites of implantation. Second, lipocalin-1 (also known as tear lipocalin) also increased substantially in concentration. This protein has not previously been recorded as a uterine secretion in pregnancy in any species. If polecat lipocalin-1 has similar biological properties to that of humans, then it may have a combined function in antimicrobial protection and transporting or scavenging lipids. The changes in the uterine secretory protein repertoire of European polecats are therefore unusual, and may be representative of pre-placentation supportive uterine secretions in mustelids (otters, weasels, badgers, mink, wolverines) in general.

  20. A supply delivery scheduling model for the multi-supplier strategy with price and lead time variation under stochastic demand

    Directory of Open Access Journals (Sweden)

    Nur Aini Masruroh

    2015-06-01

    Multi-supplier sourcing is one of the strategies to minimize holding cost and average stock-out cost while stabilizing the supply of raw materials. The common problems firms face when applying the multi-supplier strategy are determining the right schedule and order quantity for each supplier. The complexity of the problem increases with the facts that each supplier may have different parameters, that demand is uncertain, and that the firm has its own constraints. Thus, this research addresses two main objectives: (1) to determine the optimum safety time (the minimum raw material inventory) to prevent stockout due to demand uncertainty, and (2) to determine the right schedule and order quantity for each supplier considering the different supplier parameters: price, lead time, and supply capacity. The problem is modeled as a Mixed Integer Linear Program with minimum total inventory cost as the objective. To test the model, the case of a multinational company that applies the multi-supplier strategy is used.
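
    A minimal sketch of the allocation core of such a model, written with the PuLP library and invented supplier data; the paper's full formulation also schedules deliveries across periods and sets the safety time, which this fragment omits.

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

        suppliers = {"S1": dict(price=10, cap=60, lead=2),
                     "S2": dict(price=9,  cap=50, lead=4),
                     "S3": dict(price=11, cap=80, lead=1)}
        demand, T = 100, 2        # units needed and weeks until they are needed

        q = {s: LpVariable(f"q_{s}", lowBound=0, upBound=d["cap"], cat="Integer")
             for s, d in suppliers.items()}

        prob = LpProblem("multi_supplier", LpMinimize)
        prob += lpSum(suppliers[s]["price"] * q[s] for s in suppliers)
        # only suppliers able to deliver within T weeks may cover the demand
        prob += lpSum(q[s] for s in suppliers if suppliers[s]["lead"] <= T) >= demand
        prob.solve()
        print({s: value(q[s]) for s in suppliers})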

  1. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power itself are each plotted as functions of the time of day.
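
    A numerical version of the calculus exercise, assuming a toy single-diode panel model with invented parameters: the power curve P(V) = V * I(V) is evaluated on a grid and its maximum located where dP/dV changes sign.

        import numpy as np

        I_L, I_0, n_Vt = 5.0, 1e-9, 1.0   # photocurrent, saturation current, n*Vt

        def current(V):
            # single-diode I-V characteristic (toy parameters)
            return I_L - I_0 * (np.exp(V / n_Vt) - 1.0)

        V = np.linspace(0.0, 22.0, 20001)
        P = V * current(V)
        i = np.argmax(P)
        print(f"V_mp = {V[i]:.2f} V, I_mp = {current(V[i]):.2f} A, P_max = {P[i]:.1f} W")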

  2. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply systems; the advantages at larger temperature variations and larger power ratings are much higher. Other advantages include optimal sizing and system monitoring and control.
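
    The hill-climbing idea can be sketched as a perturb-and-observe loop. Note that the regulator described above climbs on battery charging current; for simplicity this toy version climbs on panel power from an assumed single-diode curve, with arbitrary step sizes.

        import math

        def panel_power(v, i_l=5.0, i_0=1e-9, n_vt=1.0):
            # toy single-diode panel: P = V * I(V)
            return v * (i_l - i_0 * (math.exp(v / n_vt) - 1.0))

        v, step = 12.0, 0.2               # start well below the optimum
        p_prev = panel_power(v)
        for _ in range(100):
            v += step
            p = panel_power(v)
            if p < p_prev:                # overshot: reverse and shrink the step
                step = -0.7 * step
            p_prev = p
        print(f"converged near V = {v:.2f} V, P = {p_prev:.1f} W")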

  3. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to estimate the receiver function in the time domain.
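
    A standard Levinson-Durbin recursion of the kind referred to, solving the Toeplitz normal equations for the error-predicting filter; the reflection coefficients staying below 1 in magnitude is what keeps the extrapolation stable. A sketch against a synthetic AR(2) series:

        import numpy as np

        def levinson_durbin(r, order):
            # r: autocorrelation lags r[0..order]; returns the prediction-error
            # filter a, the final error power E, and the reflection coefficients
            a = np.zeros(order + 1)
            a[0] = 1.0
            E, ks = r[0], []
            for m in range(1, order + 1):
                k = -np.dot(a[:m], r[m:0:-1]) / E   # reflection coefficient
                a[1:m + 1] += k * a[m - 1::-1]      # order update (time-reversed)
                E *= 1.0 - k * k
                ks.append(k)
            return a, E, ks

        # synthetic AR(2): x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t
        rng = np.random.default_rng(1)
        x = np.zeros(5000)
        for t in range(2, x.size):
            x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.standard_normal()
        r = np.array([x[: x.size - l] @ x[l:] / x.size for l in range(3)])
        a, E, ks = levinson_durbin(r, 2)
        print("prediction filter:", np.round(a, 2), "stable:", all(abs(k) < 1 for k in ks))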

  4. Elevated Blood Lead Levels by Length of Time From Resettlement to Health Screening in Kentucky Refugee Children.

    Science.gov (United States)

    Kotey, Stanley; Carrico, Ruth; Wiemken, Timothy Lee; Furmanek, Stephen; Bosson, Rahel; Nyantakyi, Florence; VanHeiden, Sarah; Mattingly, William; Zierold, Kristina M

    2018-02-01

    To examine elevated blood lead levels (EBLLs) in refugee children by postrelocation duration with control for several covariates. We assessed EBLLs (≥ 5 µg/dL) between 2012 and 2016 of children younger than 15 years (n = 1950) by the duration from resettlement to health screening using logistic regression, with control for potential confounders (gender, region of birth, age of housing, and intestinal infestation) in a cross-sectional study. Prevalence of EBLLs was 11.2%. Length of time from resettlement to health screening was inversely associated with EBLLs (tertile 2 unadjusted odds ratio [OR] = 0.79; 95% confidence interval [CI] = 0.56, 1.12; tertile 3 OR = 0.62; 95% CI = 0.42, 0.90; tertile 2 adjusted odds ratio [AOR] = 0.62; 95% CI = 0.39, 0.97; tertile 3 AOR = 0.57; 95% CI = 0.34, 0.93). There was a significant interaction between intestinal infestation and age of housing, and EBLLs declined with time from resettlement in both unadjusted and adjusted models. Improved housing, early education, and effective safe-house inspections may be necessary to address EBLLs in refugees.

  5. An analysis of lean production implementation to reduce engine maintenance process lead time (a case study of PT. GMF AeroAsia)

    Directory of Open Access Journals (Sweden)

    Wahyu Adrianto

    2016-04-01

    Engine maintenance strives to continuously improve its service excellence with tools such as the gate system, which is expected to achieve a lead time of 60 days. In practice, the gate system has still not been able to meet this target: during engine maintenance or overhaul, waste is still encountered that prevents the target from being met. Lean Manufacturing is an approach that aims to minimize the waste occurring in a process flow. The conditions of the process are described in a Value Stream Map, and the activities are then classified into value-added and non-value-added. Using the seven-wastes concept, the wastes are weighted to determine the most dominant type. The data processing shows, through the Value Stream Map, that gate 1 and gate 3 are the points where most waste occurs. Weighting and ranking the seven wastes present in the process activities yielded a critical ordering of wastes; the highest weight belongs to the waste of waiting, at 0.38. The Root Cause Analysis shows that the root causes of the waiting waste are data maintenance issues, a lack of attention to people development, remaining bugs in the system, and miscommunication.

  6. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  7. Determination of maximum isolation times in case of internal flooding due to pipe break; Determinacion de los tiempos maximos de aislamiento en caso de inundacion interna por rotura de tuberia

    Energy Technology Data Exchange (ETDEWEB)

    Varas, M. I.; Orteu, E.; Laserna, J. A.

    2014-07-01

    This paper describes the process followed in preparing the flooding manual of Cofrentes NPP to identify the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects the safety-related (1E) equipment involved in safe reactor shutdown or in spent fuel pool cooling, and to determine the recommended isolation mode from the point of view of the location of the break, the location of the 1E equipment, and human factors. (Author)

  8. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
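
    For tiny inputs, the exact answer that such heuristics approximate can be computed via the modular product construction: a maximum clique of the modular product of two graphs corresponds to a maximum common induced subgraph. A networkx sketch (exponential, so only for toy graphs; node labels stand in for atom types):

        import networkx as nx
        from itertools import product

        def mcs_size(g1, g2):
            # modular product: compatible node pairs, edges where adjacency agrees
            nodes = [(u, v) for u, v in product(g1, g2)
                     if g1.nodes[u]["atom"] == g2.nodes[v]["atom"]]
            prod = nx.Graph()
            prod.add_nodes_from(nodes)
            for (u1, v1), (u2, v2) in product(nodes, nodes):
                if u1 != u2 and v1 != v2 and g1.has_edge(u1, u2) == g2.has_edge(v1, v2):
                    prod.add_edge((u1, v1), (u2, v2))
            return max((len(c) for c in nx.find_cliques(prod)), default=0)

        g1, g2 = nx.cycle_graph(3), nx.path_graph(4)    # toy "molecules"
        nx.set_node_attributes(g1, "C", "atom")
        nx.set_node_attributes(g2, "C", "atom")
        print("maximum common substructure size:", mcs_size(g1, g2))   # 2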

  9. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  10. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  11. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout; examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA (Ramsay & Silverman, 1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between neighbouring observations. Results. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially correlated data.
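
    In its plain (non-functional) multivariate form, MAF reduces to a generalised eigenproblem between the covariance of lag-one differences and the total covariance; minimising the difference variance maximises autocorrelation. A sketch with toy data:

        import numpy as np
        from scipy.linalg import eigh

        def maf(X):
            # rows = time, columns = variables
            Xc = X - X.mean(0)
            S = np.cov(Xc, rowvar=False)                  # total covariance
            D = np.cov(Xc[1:] - Xc[:-1], rowvar=False)    # difference covariance
            vals, vecs = eigh(D, S)        # smallest eigenvalue = smoothest factor
            return Xc @ vecs, 1.0 - 0.5 * vals            # factors, autocorrelations

        rng = np.random.default_rng(2)
        t = np.linspace(0, 4 * np.pi, 400)
        X = rng.standard_normal((400, 3))
        X[:, 0] += 3 * np.sin(t)          # one slow signal hidden in noise
        factors, rho = maf(X)
        print(f"autocorrelation of the leading factor = {rho[0]:.2f}")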

  12. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
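
    A toy sketch (not the authors' implementation) of the regularized MCC idea for a linear predictor: gradient ascent on the Gaussian-kernel correntropy between predictions and labels, minus an L2 penalty on the weights; sigma, lam, the learning rate and the step count are illustrative choices.

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=0.1, lr=0.5, steps=500):
    """Maximize mean Gaussian correntropy of residuals minus lam*||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        err = y - X @ w
        k = np.exp(-err**2 / (2 * sigma**2))   # per-sample kernel weights
        grad = (X.T @ (k * err)) / (len(y) * sigma**2) - 2 * lam * w
        w += lr * grad                         # gradient ascent step
    return w

# Outlying samples produce large residuals and hence tiny kernel weights k,
# so noisy labels barely influence the fit, which is the robustness
# property the abstract exploits.
```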

  14. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  15. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  16. SureCure{sup (R)} - A new material to reduce curing time and improve curing reproducibility of lead-acid batteries

    Energy Technology Data Exchange (ETDEWEB)

    Boden, David P.; Loosemore, Daniel V.; Botts, G. Dean [Hammond Lead Products Division, Hammond Group Inc., 2323 165th Street, Hammond, IN 46320 (United States)

    2006-08-25

    This paper introduces a technology that considerably reduces the time to cure the positive plates of lead-acid batteries. In each of several full-scale trials at automotive and industrial battery manufacturers, the simple replacement of 1wt.% of leady oxide with finely-divided tetrabasic lead sulfate (SureCure(TM) by Hammond Group Inc.) is shown to accelerate significantly the conversion of tribasic lead sulfate (3BS) to tetrabasic lead sulfate (4BS) in the curing process while improving crystal structure and reproducibility. Shorter curing times result in reduced labour and energy costs, as well as reduced fixed (curing chambers and plant footprint) and working (plate inventory) capital investment. (author)

  17. Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study

    Directory of Open Access Journals (Sweden)

    Mary Hokazono

    Full Text Available CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle-cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.

  18. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were obtained.
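
    A simplified sketch of the iterative scheme described above, under stated assumptions (a known circulant point-spread function h and Poisson-distributed counts d): penalized gradient ascent trading entropy against the Poisson log-likelihood, with positivity and total count preservation enforced at each step; lam, lr and steps are illustrative values.

```python
import numpy as np

def maxent_deconvolve(d, h, lam=10.0, lr=1e-3, steps=2000):
    n = len(d)
    m = d.sum() / n                        # flat default image level
    f = np.full(n, m)                      # start from the default
    H = np.array([np.roll(h, k) for k in range(n)]).T   # circulant blur
    for _ in range(steps):
        p = np.maximum(H @ f, 1e-12)       # predicted mean counts
        grad_S = -(np.log(f / m) + 1.0)    # entropy gradient
        grad_L = H.T @ (d / p - 1.0)       # Poisson log-likelihood gradient
        f = np.maximum(f + lr * (grad_S + lam * grad_L), 1e-12)
        f *= d.sum() / f.sum()             # total count preservation
    return f
```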

  19. LEADING WITH LEADING INDICATORS

    International Nuclear Information System (INIS)

    PREVETTE, S.S.

    2005-01-01

    This paper documents Fluor Hanford's use of Leading Indicators, management leadership, and statistical methodology in order to improve safe performance of work. By applying these methods, Fluor Hanford achieved a significant reduction in injury rates in 2003 and 2004, and the improvement continues today. The integration of data, leadership, and teamwork pays off with improved safety performance and credibility with the customer. The use of Statistical Process Control, Pareto Charts, and Systems Thinking and their effect on management decisions and employee involvement are discussed. Included are practical examples of choosing leading indicators. A statistically based color coded dashboard presentation system methodology is provided. These tools, management theories and methods, coupled with involved leadership and employee efforts, directly led to significant improvements in worker safety and health, and environmental protection and restoration at one of the nation's largest nuclear cleanup sites

  20. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.

  1. Achieving maximum baryon densities

    International Nuclear Information System (INIS)

    Gyulassy, M.

    1984-01-01

    In continuing work on nuclear stopping power in the energy range E/sub lab/ approx. 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times normal nuclear density.

  2. Maximum entropy tokamak configurations

    International Nuclear Information System (INIS)

    Minardi, E.

    1989-01-01

    The new entropy concept for the collective magnetic equilibria is applied to the description of the states of a tokamak subject to ohmic and auxiliary heating. The condition for the existence of steady state plasma states with vanishing entropy production implies, on one hand, the resilience of specific current density profiles and, on the other, severe restrictions on the scaling of the confinement time with power and current. These restrictions are consistent with Goldston scaling and with the existence of a heat pinch. (author)

  3. Maximum Credible Incidents

    CERN Document Server

    Strait, J

    2009-01-01

    Following the incident in sector 34, considerable effort has been made to improve the systems for detecting similar faults and to improve the safety systems to limit the damage if a similar incident should occur. Nevertheless, even after the consolidation and repairs are completed, other faults may still occur in the superconducting magnet systems, which could result in damage to the LHC. Such faults include both direct failures of a particular component or system, or an incorrect response to a “normal” upset condition, for example a quench. I will review a range of faults which could be reasonably expected to occur in the superconducting magnet systems, and which could result in substantial damage and down-time to the LHC. I will evaluate the probability and the consequences of such faults, and suggest what mitigations, if any, are possible to protect against each.

  4. Picture archiving and communication systems lead to sustained improvements in reporting times and productivity: results of a 5-year audit

    Energy Technology Data Exchange (ETDEWEB)

    Mackinnon, A.D.; Billington, R.A.; Adam, E.J.; Dundas, D.D. [Department of Radiology, St Georges Hospital NHS Trust, London (United Kingdom); Patel, U. [Department of Radiology, St Georges Hospital NHS Trust, London (United Kingdom)], E-mail: Uday.Patel@stgeorges.nhs.uk

    2008-07-15

    Aim: To evaluate the impact of picture archiving and communications systems (PACS) on reporting times and productivity in a large teaching hospital. Materials and methods: Reporting time, defined as the time taken from patient registration to report availability, and productivity, defined as the number of reports issued per whole time equivalent (WTE) radiologist per month, were studied for 2 years pre- and 3 years post-PACS installation. Mean reporting time was calculated for plain radiographs and specialist radiology techniques [computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and nuclear medicine]. Productivity, total department workload, and unreported film rates were also assessed. Pre- and post-PACS findings were compared. Results: Between 2002-2006 the number of radiological patient episodes increased by 30% from 11,531/month to 15,057/month. This was accompanied by a smaller increase in WTE reporting radiologists, from 32 to 37 (15%). Mean reporting times have improved substantially post-PACS, plain radiograph reporting time decreased by 26% (from 6.8 to 5 days; p = 0.002) and specialty modalities by 24% (4.1 to 3.1 days; p < 0.001). Radiologist productivity has increased by 18% (337 films to 407 films/WTE radiologist/month). Unreported films have decreased from 5 to 4% for plain radiographs and are steady for specialty modalities (< 1%). In most areas improvements have been sustained over a 3-year period. Conclusion: Since the introduction of PACS, reporting times have decreased by 25% and the productivity improved by 18%. Sustained improvements are felt to reflect the efficiencies and cultural change that accompanied the introduction of PACS and digital dictation.

  5. Effect of a real-time tele-transmission system of 12-lead electrocardiogram on the first-aid for athletes with ST-elevation myocardial infarction.

    Science.gov (United States)

    Zhang, Huan; Song, Donghan; An, Lina

    2016-05-01

    To study the effect of a real-time tele-transmission system of 12-lead electrocardiogram on door-to-balloon time in athletes with ST-elevation myocardial infarction. A total of 60 athletes with chest pain diagnosed as ST-elevation myocardial infarction (STEMI) at our hospital were randomly divided into group A (n=35) and group B (n=25); the patients in group A had their 12-lead electrocardiogram tele-transmitted in real time to the chest pain center before arriving at the hospital, whereas the patients in group B did not. The median door-to-balloon time was significantly shorter in group A than in group B (38 min vs 94 min, p<0.05). The median length of stay was also significantly reduced in group A (5 days vs 7 days, p<0.05). The real-time tele-transmission system of 12-lead electrocardiogram is beneficial to the pre-hospital diagnosis of STEMI.

  6. Time-dependent inhibition of Na+/K+-ATPase induced by single and simultaneous exposure to lead and cadmium

    Science.gov (United States)

    Vasić, V.; Kojić, D.; Krinulović, K.; Čolović, M.; Vujačić, A.; Stojić, D.

    2007-09-01

    Time-dependent interactions of Na+/K+-ATPase, isolated from rat brain synaptic plasma membranes (SPMs), with Cd2+ and Pb2+ ions in a single exposure and in a mixture were investigated in vitro. The interference of the enzyme with these metal ions was studied as a function of different protein concentrations and exposure time. The aim of the work was to investigate the possibility of selective recognition of Cd2+ and Pb2+ ions in a mixture, on the basis of the different rates of their protein-ligand interactions. Decreasing protein concentration increased the sensitivity of Na+/K+-ATPase toward both metals. The selectivity in protein-ligand interactions was obtained by variation of preincubation time (incubation before starting the enzymatic reaction).

  7. Variation in pre-treatment count lead time and its effect on baseline estimates of cage-level sea lice abundance.

    Science.gov (United States)

    Gautam, R; Boerlage, A S; Vanderstichel, R; Revie, C W; Hammell, K L

    2016-11-01

    Treatment efficacy studies typically use pre-treatment sea lice abundance as the baseline. However, the pre-treatment counting window often varies from the day of treatment to several days before treatment. We assessed the effect of lead time on baseline estimates, using historical data (2010-14) from a sea lice data management programme (Fish-iTrends). Data were aggregated at the cage level for three life stages: (i) chalimus, (ii) pre-adult and adult male and (iii) adult female. Sea lice counts were log-transformed, and mean counts by lead time relative to treatment day were computed and compared separately for each life stage, using linear mixed models. There were 1,658 observations (treatment events) from 56 sites in 5 Bay Management Areas. Our study showed that lead time had a significant effect on the estimated sea lice abundance, which was moderated by season. During the late summer and autumn periods, counting on the day of treatment gave significantly higher values than other days and would be a more appropriate baseline estimate, while during spring and early summer abundance estimates were comparable among counts within 5 days of treatment. A season-based lead time window may be most appropriate when estimating baseline sea lice levels. © 2016 John Wiley & Sons Ltd.

  8. Inventory Modelling for a Manufacturer of Sweets: An Evaluation of an Adjusted Compound Renewal Approach for B-Items with a Relatively Short Production Lead Time

    NARCIS (Netherlands)

    Heuts, R.M.J.; Luijten, M.L.J.

    1999-01-01

    In this paper we are especially interested in how to optimize the production/inventory control for a manufacturer of sweets, under the following circumstances: short production lead times in combination with an intermittent demand pattern for the so-called B-taste items. As for A-taste items a compound

  9. Identification of appropriate lags and temporal resolutions for low flow indicators in the River Rhine to forecast low flows with different lead times

    NARCIS (Netherlands)

    Demirel, M.C.; Booij, Martijn J.; Hoekstra, Arjen Ysbert

    2013-01-01

    The aim of this paper is to assess the relative importance of low flow indicators for the River Rhine and to identify their appropriate temporal lag and resolution. This is done in the context of low flow forecasting with lead times of 14 and 90 days. First, the Rhine basin is subdivided into seven

  10. Evaluation of Integrated Time-Temperature Effect in Pyrolysis Process of Historically Contaminated Soils with Cadmium (Cd) and Lead (Pb)

    OpenAIRE

    Bulmău C; Cocârță D. M.; Reșetar-Deac A. M.

    2013-01-01

    It is already known that heavy metal pollution causes serious concern for human and ecosystem health. At the European level, heavy metals represent 37.3% of the main contaminants affecting soils (EEA, 2007). This paper illustrates results obtained in the framework of laboratory experiments concerning the evaluation of the integrated time-temperature effect in the pyrolysis process applied to contaminated soil in two different ways: it is about heavy metals historically contaminated soil f...

  11. Delayed self-regulation and time-dependent chemical drive leads to novel states in epigenetic landscapes

    Science.gov (United States)

    Mitra, Mithun K.; Taylor, Paul R.; Hutchison, Chris J.; McLeish, T. C. B.; Chakrabarti, Buddhapriya

    2014-01-01

    The epigenetic pathway of a cell as it differentiates from a stem cell state to a mature lineage-committed one has been historically understood in terms of Waddington's landscape, consisting of hills and valleys. The smooth top and valley-strewn bottom of the hill represent their undifferentiated and differentiated states, respectively. Although mathematical ideas rooted in nonlinear dynamics and bifurcation theory have been used to quantify this picture, the importance of time delays arising from multistep chemical reactions or cellular shape transformations have been ignored so far. We argue that this feature is crucial in understanding cell differentiation and explore the role of time delay in a model of a single-gene regulatory circuit. We show that the interplay of time-dependent drive and delay introduces a new regime where the system shows sustained oscillations between the two admissible steady states. We interpret these results in the light of recent perplexing experiments on inducing the pluripotent state in mouse somatic cells. We also comment on how such an oscillatory state can provide a framework for understanding more general feedback circuits in cell development. PMID:25165605

  12. Florentino Ariza's Loneliness Which Leads Into Self-Actualization in the Love in the Time of Cholera Movie

    OpenAIRE

    Andro, Nobertus Riko Juni

    2013-01-01

    Loneliness is the condition when one lacks social relationships, particularly love or intimate relationships. The effects of loneliness can vary. It must be painful, but for some people it is the chance to get to know themselves deeply so that they can achieve self-actualization. Love In The Time Of Cholera is a film dealing mostly with loneliness issues. It tells about a man who loyally waits for his lover, Fermina Daza. While waiting for his lover he suffers from loneliness, yet he wants to be a r...

  13. Quasiparticle recombination time in superconducting lead and the quasiparticle nonequilibrium energy distribution of optically perturbed tin superconductors

    International Nuclear Information System (INIS)

    Jaworski, F.B.

    1978-01-01

    The effective quasiparticle recombination time in Pb superconductors was experimentally measured by optically perturbing Pb-oxide-Pb tunnel junctions. Analysis by carefully studying the optically modulated energy gap as a function of temperature determined the effective recombination time to be 2.06 x 10/sup -10/ T/sup -1/2/ e/sup δ/kT/ ± 30%. Careful studies on optically perturbed Sn-oxide-Sn tunnel junctions provide information on the quasiparticle nonequilibrium energy distribution function. Initial data compared closer with a modified heating model describing the photo-excited quasiparticles rather than with an effective chemical potential model. However, an analysis of the IV characteristic of voltage-biased Sn junctions numerically unfolded the exact energy distribution from an integral equation. The results compare favorably to the theory of Chang and Scalapino, who calculate from the coupled Boltzmann kinetic equations the phonon and quasiparticle energy distributions. Lastly, a brief study describes Inelastic Electron Tunneling Spectroscopy as applied to the problem of the identification of altered DNA bases. The technique demonstrates an exciting potential application of physics to a contemporary problem in molecular biology

  14. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  15. Lead poisoning

    Science.gov (United States)

    ... drinking water in homes containing pipes that were connected with lead solder. Although new building codes require ... lead in their bodies when they put lead objects in their mouths, especially if they swallow those ...

  16. Lead Poisoning

    Science.gov (United States)

    Lead is a metal that occurs naturally in the earth's crust. Lead can be found in all parts of our ... from human activities such as mining and manufacturing. Lead used to be in paint; older houses may ...

  17. Radiation effects on lead silicate glass surfaces

    International Nuclear Information System (INIS)

    Wang, P.W.; Zhang, L.P.; Borgen, N.; Pannell, K.

    1996-01-01

    Radiation-induced changes in the microstructure of lead silicate glass were investigated in situ under Mg Kα irradiation in an ultra-high vacuum (UHV) environment by X-ray photoelectron spectroscopy (XPS). Lead-oxygen bond breaking resulting in the formation of pure lead was observed. The segregation, growth kinetics and structural relaxation of the lead, with corresponding changes in the oxygen and silicon on the glass surfaces, were studied by measuring the time-dependent changes in concentration, binding energy shifts, and the full width at half maximum. A bimodal distribution of the oxygen XPS signal, caused by bridging and non-bridging oxygens, was found during the relaxation process. All experimental data indicate that a reduction of the oxygen concentration, a phase separation of the lead from the glass matrix, and the metallization of the lead occurred during and after the X-ray irradiation. (author)

  18. Leading Learning in Our Times

    Science.gov (United States)

    Trilling, Bernie

    2010-01-01

    Important tools that schools need to support a 21st century approach to teaching and learning include the usual suspects: the Internet, pen and paper, cell phones, educational games, tests and quizzes, good teachers, caring communities, educational funding, and loving parents. All of these items and more contribute to a 21st century education, but…

  19. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
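
    For orientation, a sketch of the classical doubly-constrained maximum entropy trip-distribution model that the dependence formulation above replaces, solved by iterative proportional fitting; beta, the trip-end totals and the cost matrix are illustrative values.

```python
import numpy as np

def entropy_trip_distribution(O, D, cost, beta=0.1, iters=100):
    """Balance trips T to origin totals O and destination totals D."""
    T = np.exp(-beta * cost)               # deterrence seed
    for _ in range(iters):
        T *= (O / T.sum(axis=1))[:, None]  # match origin totals
        T *= (D / T.sum(axis=0))[None, :]  # match destination totals
    return T

O = np.array([400.0, 600.0])               # trips produced per origin
D = np.array([500.0, 500.0])               # trips attracted per destination
cost = np.array([[1.0, 3.0],
                 [2.0, 1.0]])              # travel cost between zones
print(entropy_trip_distribution(O, D, cost).round(1))
```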

  20. A reengineering success story: process improvement in emergency department x-ray cycle time, leading to breakthrough performance in the ED ambulatory care (Fast Track) process.

    Science.gov (United States)

    Espinosa, J A; Treiber, P M; Kosnik, L

    1997-01-01

    This article describes the journey of a multidisciplinary reengineering team, which worked to improve a critical, high-leverage process in an emergency department setting. The process selected was emergency department radiology services. This process was selected on a rational basis. The team knew that 60 percent of our emergency department patients were truly ambulatory, and that most could be seen in a "fast track" process as part of our emergency department's core mission. However, we knew from customer satisfaction data that patients would like to be "in and out" of emergency department Fast Track in less than an hour. Over half of our Fast Track patients require x-rays. For most, this was their sole reason for seeking emergency care. Our starting state included an average x-ray cycle time of over 60 minutes. The associated Fast Track cycle time was a median of over 90 minutes. It was clear to the emergency department leadership, as well as to members of the Fast Track management team, that a cycle time of 30 minutes or less for x-ray service was a necessary condition for a Fast Track cycle time of an hour or less. It was also felt that a more rapid x-ray cycle time would allow for more rapid turnover of ED rooms, effectively increasing the capacity of the ED. It was hoped that this would lead to a reduction in the time from arrival to treatment by the emergency physician for all patients.

  1. Superconductivity of tribolayers formed on germanium by friction between germanium and lead

    Energy Technology Data Exchange (ETDEWEB)

    Dukhovskoi, A.; Karapetyan, S.S.; Morozov, Y.G.; Onishchenko, A.S.; Petinov, V.I.; Ponomarev, A.N.; Silin, A.A.; Stepanov, B.M.; Tal' roze, V.L.

    1978-04-05

    A superconducting state was observed for the first time in tribolayers of germanium produced by friction of germanium with lead at 4.2 K. The maximum value of T/sub c/ obtained in the experiment was 19 K, which is much higher than T/sub c/ of bulk lead itself or of lead films sputtered on germanium.

  2. Measurement and control of lead-time in a manufacturing system

    Directory of Open Access Journals (Sweden)

    Miguel Afonso Sellitto

    2008-04-01

    Full Text Available This paper presents a method for measuring lead-time and work-in-process inventory in manufacturing. The measurement was used in an organizational control process in manufacturing in which the controlled variable was the order lead-time. Such control can be useful in manufacturing strategies in which competition is based on the use of time, i.e. TBC (time-based competition). The measurement method, which includes elements of queuing theory, is presented along with considerations for simplifying manufacturing production arrangements. To test and refine the method, a case in footwear manufacturing was studied. Shipment data were collected and the behavior of the random variable shipment lead-time was obtained by statistical analysis and computer simulation. Since the measured performance fell short of the delivery objective, a control step followed: a diagnosis identified the observed undesirable effects, which were addressed by implemented corrective actions. To verify the effectiveness of the actions, data were collected again. This time, the delivery performance objective was met, closing a control cycle. The results are discussed, leading to conclusions and alternatives for further work.

  3. Lead poisoning

    Energy Technology Data Exchange (ETDEWEB)

    Beijers, J A

    1952-01-01

    Three cases of acute lead poisoning of cattle herds via ingestion are reported, and reference is made to several other incidents of lead poisoning in both humans and animals. The quantity of lead found in the livers of the dead cows varied from 6.5 to 19 mg/kg, while 1160 mg/kg of lead was found in the liver of a young cow which was poisoned experimentally with 5 g of lead acetate per day; hence, there appears to be great variability in the amounts deposited that can lead to intoxication and death. No evidence was found for a lead seam around the teeth, porphyrinuria, or basophil granules in the erythrocytes during acute or chronic lead poisoning of the cattle or horses examined. Reference is made to attempts to find the boundary line between increased lead absorption and lead intoxication in humans, and an examination of 60 laborers in an offset-printing office containing a great deal of inhalable lead (0.16 to 1.9 mg/cu m air) is reviewed. Physical deviation, basophilic granulation of erythrocytes, increased lead content of the urine, and porphyrinuria only indicate an increased absorption of lead; the use of the term intoxication is justified if, in addition, there are complaints of lack of appetite, constipation, fatigue, abdominal pain, and emaciation.

  4. Lead Toxicity

    Science.gov (United States)

    ... o Do not use glazed ceramics, home remedies, cosmetics, or leaded-crystal glassware unless you know that they are lead safe. o If you live near an industry, mine, or waste site that may have contaminated ...

  5. Diagnostic value of ST-segment deviations during cardiac exercise stress testing: Systematic comparison of different ECG leads and time-points.

    Science.gov (United States)

    Puelacher, Christian; Wagener, Max; Abächerli, Roger; Honegger, Ursina; Lhasam, Nundsin; Schaerli, Nicolas; Prêtre, Gil; Strebel, Ivo; Twerenbold, Raphael; Boeddinghaus, Jasper; Nestelberger, Thomas; Rubini Giménez, Maria; Hillinger, Petra; Wildi, Karin; Sabti, Zaid; Badertscher, Patrick; Cupa, Janosch; Kozhuharov, Nikola; du Fay de Lavallaz, Jeanne; Freese, Michael; Roux, Isabelle; Lohrmann, Jens; Leber, Remo; Osswald, Stefan; Wild, Damian; Zellweger, Michael J; Mueller, Christian; Reichlin, Tobias

    2017-07-01

    Exercise ECG stress testing is the most widely available method for evaluation of patients with suspected myocardial ischemia. Its major limitation is the relatively poor accuracy of ST-segment changes regarding ischemia detection. Little is known about the optimal method to assess ST-deviations. A total of 1558 consecutive patients undergoing bicycle exercise stress myocardial perfusion imaging (MPI) were enrolled. Presence of inducible myocardial ischemia was adjudicated using MPI results. The diagnostic value of ST-deviations for detection of exercise-induced myocardial ischemia was systematically analyzed 1) for each individual lead, 2) at three different intervals after the J-point (J+40ms, J+60ms, J+80ms), and 3) at different time points during the test (baseline, maximal workload, 2min into recovery). Exercise-induced ischemia was detected in 481 (31%) patients. The diagnostic accuracy of ST-deviations was highest at +80ms after the J-point, and at 2min into recovery. At this point, ST-amplitude showed an AUC of 0.63 (95% CI 0.59-0.66) for the best-performing lead I. The combination of ST-amplitude and ST-slope in lead I did not increase the AUC. Lead I reached a sensitivity of 37% and a specificity of 83%, with similar sensitivity to manual ECG analysis (34%, p=0.31) but lower specificity (90%, p<0.001). The diagnostic value of ST-deviations is highest when evaluated at +80ms after the J-point, and at 2min into recovery. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Relational Leading

    DEFF Research Database (Denmark)

    Larsen, Mette Vinther; Rasmussen, Jørgen Gulddahl

    2015-01-01

    This first chapter presents the exploratory and curious approach to leading as relational processes – an approach that pervades the entire book. We explore leading from a perspective that emphasises the unpredictable challenges and triviality of everyday life, which we consider an interesting, relevant and realistic way to examine leading. The chapter brings up a number of concepts and contexts as formulated by researchers within the field, and in this way seeks to construct a first understanding of relational leading.

  7. Optimal (R, Q) policy and pricing for two-echelon supply chain with lead time and retailer's service-level incomplete information

    Science.gov (United States)

    Esmaeili, M.; Naghavi, M. S.; Ghahghaei, A.

    2018-03-01

    Many studies focus on inventory systems to analyze different real-world situations. This paper considers a two-echelon supply chain that includes one warehouse and one retailer with stochastic demand and an up-to-level policy. The retailer's lead time includes the transportation time from the warehouse to the retailer that is unknown to the retailer. On the other hand, the warehouse is unaware of retailer's service level. The relationship between the retailer and the warehouse is modeled based on the Stackelberg game with incomplete information. Moreover, their relationship is presented when the warehouse and the retailer reveal their private information using the incentive strategies. The optimal inventory and pricing policies are obtained using an algorithm based on bi-level programming. Numerical examples, including sensitivity analysis of some key parameters, will compare the results between the Stackelberg models. The results show that information sharing is more beneficial to the warehouse rather than the retailer.

  8. Lead Test

    Science.gov (United States)

    ... to do renovation and repair projects using lead-safe work practices to avoid creating more lead dust or ... in a dangerous area? Yes. If you are working in a potentially harmful environment with exposure to lead dust or fumes: Wash ...

  9. Endovascular aneurysm repair simulation can lead to decreased fluoroscopy time and accurately delineate the proximal seal zone.

    Science.gov (United States)

    Kim, Ann H; Kendrick, Daniel E; Moorehead, Pamela A; Nagavalli, Anil; Miller, Claire P; Liu, Nathaniel T; Wang, John C; Kashyap, Vikram S

    2016-07-01

    The use of simulators for endovascular aneurysm repair (EVAR) is not widespread. We examined whether simulation could improve procedural variables, including operative time, and help optimize the proximal seal. For the latter, we compared suprarenal vs infrarenal fixation endografts, right femoral vs left femoral main body access, and increasing angulation of the proximal aortic neck. Computed tomography angiography was obtained from 18 patients who underwent EVAR at a single institution. Patient cases were uploaded to the ANGIO Mentor endovascular simulator (Simbionix, Cleveland, Ohio) allowing for three-dimensional reconstruction and adapted for simulation with suprarenal fixation (Endurant II; Medtronic Inc, Minneapolis, Minn) and infrarenal fixation (C3; W. L. Gore & Associates Inc, Newark, Del) deployment systems. Three EVAR novices and three experienced surgeons performed 18 cases from each side with each device in randomized order (n = 72 simulations/participant). The cases were stratified into three groups according to the degree of infrarenal angulation: 0° to 20°, 21° to 40°, and 41° to 66°. Statistical analysis used paired t-test and one-way analysis of variance. Mean fluoroscopy time for participants decreased significantly by 48.6%, and mean procedure time decreased by 33.8%; proximal seal zone coverage in highly angulated aortic necks was significantly decreased. The infrarenal device resulted in mean aortic neck zone coverage of 91.9%, 89.4%, and 75.4% across the three angulation groups, with greater angulation significantly reducing proximal seal zone coverage. The side of femoral access for the main body did not influence proximal seal zone coverage regardless of infrarenal angulation. Simulation of EVAR leads to decreased fluoroscopy times for novice and experienced operators. Side of femoral access did not affect precision of proximal endograft landing. The angulated aortic neck leads to decreased proximal seal zone coverage regardless of infrarenal or suprarenal fixation devices. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  10. Misplaced Inventory and Lead-Time in the Supply Chain: Analysis of Decision-Making on RFID Investment with Service Level

    Directory of Open Access Journals (Sweden)

    Li-Hao Zhang

    2014-01-01

    Full Text Available Radio-frequency identification (RFID), as a key technology of the Internet of Things (IoT), has been hailed as a major innovation to address misplaced inventory and reduce lead-time. Many retailers have been pushing their suppliers to invest in this technology. However, its associated costs seem to prohibit its widespread application. This paper analyzes the service level in a retail supply chain as affected by misplaced inventory and lead-time. Using a newsvendor model, we analyze the difference between with- and without-RFID technologies in the service level of centralized and decentralized supply chains, respectively. Then, for different service levels, we determine the tag cost thresholds at which RFID technology investment becomes profitable in centralized and decentralized supply chains, respectively. Furthermore, we apply a linear transfer payment coefficient strategy to coordinate the decentralized supply chain. It is found that whether the adoption of RFID technology improves the service level depends on the cost of the RFID tag in the centralized system, but it improves the service level in the decentralized system when only the supplier bears the cost of the RFID tag.
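
    A minimal sketch of the newsvendor building block invoked above: with normally distributed demand, the optimal stocking level is the critical fractile of underage and overage costs, which is exactly the in-stock service level the paper studies; all cost and demand parameters below are illustrative.

```python
from scipy.stats import norm

def newsvendor_q(mu, sigma, price, cost, salvage=0.0):
    cu = price - cost            # underage cost: margin lost per unit short
    co = cost - salvage          # overage cost: loss per unsold unit
    fractile = cu / (cu + co)    # optimal in-stock probability (service level)
    return norm.ppf(fractile, loc=mu, scale=sigma)

# fractile = 4 / (4 + 4) = 0.5, so the order quantity is the mean demand.
print(newsvendor_q(mu=100, sigma=20, price=10.0, cost=6.0, salvage=2.0))
```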

  11. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Journal of Physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W GIACOBBE, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  12. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  14. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  15. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
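
    A worked toy example of the principle (Jaynes' die, assuming only a mean constraint of 4.5): among all distributions matching the constraint, the maximum entropy one has the exponential-family form p_k proportional to exp(lambda*k), and the multiplier lambda is found by root-finding.

```python
import numpy as np
from scipy.optimize import brentq

k = np.arange(1, 7)                        # faces of the die

def mean_given_lambda(lam):
    p = np.exp(lam * k)
    return (k * p).sum() / p.sum()

# Solve for the Lagrange multiplier that matches the mean constraint.
lam = brentq(lambda l: mean_given_lambda(l) - 4.5, -5.0, 5.0)
p = np.exp(lam * k)
p /= p.sum()
print(p.round(4))                          # tilted toward the high faces
```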

  16. Real time and in situ determination of lead in road sediments using a man-portable laser-induced breakdown spectroscopy analyzer

    Energy Technology Data Exchange (ETDEWEB)

    Cunat, J.; Fortes, F.J. [Department of Analytical Chemistry, University of Malaga, E-29071 Malaga (Spain); Laserna, J.J. [Department of Analytical Chemistry, University of Malaga, E-29071 Malaga (Spain)], E-mail: laserna@uma.es

    2009-02-02

    In situ, real time levels of lead in road sediments have been measured using a man-portable laser-induced breakdown spectroscopy analyzer. The instrument consists of a backpack and a probe housing a Q-switched Nd:YAG laser head delivering 50 mJ per pulse at 1064 nm. Plasma emission was collected and transmitted via fiber optic to a compact crossed Czerny-Turner spectrometer equipped with a linear CCD array, housed in the backpack together with a personal computer. The limit of detection (LOD) for lead and the precision measured in the laboratory were 190 {mu}g g{sup -1} (calculated by the 3{sigma} method) and 9% R.S.D. (relative standard deviation), respectively. During the field campaign, averaged Pb concentrations in the sediments ranged from 480 {mu}g g{sup -1} to 660 {mu}g g{sup -1} depending on the inspected area, i.e. the entrance, the central part and the exit of the tunnel. These results were compared with those obtained with flame atomic absorption spectrometry (flame-AAS). The relative error, expressed as [100(LIBS result - flame AAS result)/(LIBS result)], was approximately 14%.

  17. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...

  18. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
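
    A toy sketch of the maximum gene-support idea: reduce each single-gene tree to a canonical topology fingerprint (here, its set of leaf bipartitions) and report the most frequent one; the gene trees below are illustrative, not data from the study.

```python
from collections import Counter

def canonical(splits):
    """Order-independent fingerprint of a tree's non-trivial splits."""
    return frozenset(frozenset(s) for s in splits)

gene_trees = [
    canonical([("A", "B"), ("C", "D")]),   # gene 1
    canonical([("A", "B"), ("C", "D")]),   # gene 2
    canonical([("A", "C"), ("B", "D")]),   # gene 3
]

mgs_tree, support = Counter(gene_trees).most_common(1)[0]
print(support, sorted(sorted(s) for s in mgs_tree))
```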

  19. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  20. Leading Democratically

    Science.gov (United States)

    Brookfield, Stephen

    2010-01-01

    Democracy is the most venerated of American ideas, the one for which wars are fought and people die. So most people would probably agree that leaders should be able to lead well in a democratic society. Yet, genuinely democratic leadership is a relative rarity. Leading democratically means viewing leadership as a function or process, rather than…

  1. Superconductivity in nanostructured lead

    Science.gov (United States)

    Lungu, Anca; Bleiweiss, Michael; Amirzadeh, Jafar; Saygi, Salih; Dimofte, Andreea; Yin, Ming; Iqbal, Zafar; Datta, Timir

    2001-01-01

    Three-dimensional nanoscale structures of lead were fabricated by electrodeposition of pure lead into artificial porous opal. The size of the metallic regions was comparable to the superconducting coherence length of bulk lead. Tc as high as 7.36 K was observed, also d Tc/d H was 2.7 times smaller than in bulk lead. Many of the characteristics of these differ from bulk lead, a type I superconductor. Irreversibility line and magnetic relaxation rates ( S) were also studied. S( T) displayed two maxima, with a peak value about 10 times smaller than that of typical high- Tc superconductors.

  2. Uranium-lead systematics

    International Nuclear Information System (INIS)

    Wickman, F.E.

    1983-01-01

    The method of Levchenkov and Shukolyukov for calculating age and time disturbance of minerals without correction for original lead is generalized to include the cases when (1) original lead and radiogenic lead leach differently, and (2) the crystals studied consist of a core and a mantle. It is also shown that a straight line obtained from the solution of the equations is the locus of the isotopic composition of original lead. (Auth.)

  3. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  4. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based…

  5. Study of the effect of continuous improvement programs on shop floor variables in the relationship between production lot size and lead time

    Directory of Open Access Journals (Sweden)

    Moacir Godinho Filho

    2010-01-01

    Full Text Available In today's competitive environment, decision making has become an increasingly complicated task, involving numerous variables and relationships among them that are not always clearly understood. One of these relationships, the focus of this article, is the relationship between production lot size and average lead time. This relationship is widely known in the queueing theory literature, but the same cannot be said of operations management practice. This article addresses the subject by presenting and comparing the effect of six continuous improvement programs targeting shop floor variables (order arrival variability, process variability, defect rate, time to failure, time to repair, and setup time) on the lot size × lead time relationship in a single-machine environment producing multiple products. This is done by combining the System Dynamics (Forrester, 1962) and Factory Physics (Hopp; Spearman, 2008) approaches. Two sets of experiments are carried out: (i) a large improvement (50%) in each variable separately, such as would be obtained by a large investment; (ii) a small improvement in all variables simultaneously. The results show: (a) the positive effect on lead time of continuous improvement programs targeting shop floor variables; (b) the importance of knowing the lot size × lead time curve, and the role of setup time reduction, before starting lot size reduction programs; (c) that investing in small improvements in many variables simultaneously is a better policy with respect to lead time than making a large improvement in a single variable; (d) some contributions to a better understanding of modern production management paradigms such as Lean Manufacturing and Quick Response Manufacturing.

  6. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  7. Leading change.

    Science.gov (United States)

    2017-02-27

    In response to feedback from nursing, midwifery and other care staff who wanted to understand better how the Leading Change, Adding Value framework applies to them, NHS England has updated its webpage to include practice examples.

  8. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
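
    As a quick plausibility check on the figure quoted above, the simplified balance (absorbed shortwave = longwave emission + sensible heat, with ground heat flux neglected for a poorly conducting dry soil) can be solved numerically. This is a minimal Python sketch; the emissivity and aerodynamic resistance values are illustrative assumptions, not taken from the study.

        # Solve eps*sigma*T^4 + rho_cp*(T - T_air)/r_H = absorbed shortwave for T.
        # Emissivity and r_H are assumed values; with these choices the balance
        # lands near 90 C, broadly consistent with the abstract's estimate.
        from scipy.optimize import brentq

        SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
        ABSORBED = 1000.0        # absorbed shortwave flux, W m^-2 (from the abstract)
        T_AIR = 55.0 + 273.15    # screen air temperature, K (from the abstract)
        EMISSIVITY = 0.95        # assumed soil emissivity
        RHO_CP = 1200.0          # volumetric heat capacity of air, J m^-3 K^-1
        R_H = 500.0              # assumed aerodynamic resistance, s m^-1 (calm air)

        def residual(t_surf):
            longwave = EMISSIVITY * SIGMA * t_surf**4
            sensible = RHO_CP * (t_surf - T_AIR) / R_H
            return ABSORBED - longwave - sensible

        t_surf = brentq(residual, T_AIR, T_AIR + 100.0)
        print(f"surface temperature ~ {t_surf - 273.15:.0f} C")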

  9. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
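
    For concreteness, the Ornstein-Uhlenbeck position process named above can be simulated exactly on a regular time grid via its AR(1) transition. A small Python sketch with invented parameter values:

        # Exact discretization of an OU position process: x_{t+dt} = phi*x_t + noise,
        # with phi = exp(-dt/tau). tau, sigma2 (stationary variance), dt and n are
        # made-up illustration values, not estimates from animal data.
        import numpy as np

        rng = np.random.default_rng(0)
        tau, sigma2, dt, n = 10.0, 4.0, 1.0, 1000
        phi = np.exp(-dt / tau)
        x = np.empty(n)
        x[0] = rng.normal(0.0, np.sqrt(sigma2))          # start in the stationary law
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1 - phi**2)))
        print(x[:5])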

  10. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works…

  11. Effects of variability in probable maximum precipitation patterns on flood losses

    Science.gov (United States)

    Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul

    2018-05-01

    The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or is below selected uncertainty factors in flood loss estimation and if the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution with other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure, and vulnerability contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.

  12. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Representing Single-Subject Multivariate Time-Series

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    1998-01-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM),

  13. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  14. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  15. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  16. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
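
    To make the regularizer concrete: the quantity being maximized is the mutual information between the classifier response and the true label. A self-contained plug-in estimate for discrete labels (an illustration, not the paper's code):

        # Plug-in estimate of I(Y; Yhat) from empirical joint frequencies (in nats).
        import numpy as np

        def mutual_information(y_true, y_pred):
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            mi = 0.0
            for a in np.unique(y_true):
                for b in np.unique(y_pred):
                    p_ab = np.mean((y_true == a) & (y_pred == b))
                    if p_ab > 0:
                        p_a = np.mean(y_true == a)
                        p_b = np.mean(y_pred == b)
                        mi += p_ab * np.log(p_ab / (p_a * p_b))
            return mi

        print(mutual_information([0, 0, 1, 1, 1, 0], [0, 0, 1, 1, 0, 0]))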

  17. Cochrane Rapid Reviews Methods Group to play a leading role in guiding the production of informed high-quality, timely research evidence syntheses.

    Science.gov (United States)

    Garritty, Chantelle; Stevens, Adrienne; Gartlehner, Gerald; King, Valerie; Kamel, Chris

    2016-10-28

    Policymakers and healthcare stakeholders are increasingly seeking evidence to inform the policymaking process, and often use existing or commissioned systematic reviews to inform decisions. However, the methodologies that make systematic reviews authoritative take time, typically 1 to 2 years to complete. Outside the traditional SR timeline, "rapid reviews" have emerged as an efficient tool to get evidence to decision-makers more quickly. However, the use of rapid reviews does present challenges. To date, there has been limited published empirical information about this approach to compiling evidence. Thus, it remains a poorly understood and ill-defined set of diverse methodologies with various labels. In recent years, the need to further explore rapid review methods, characteristics, and their use has been recognized by a growing network of healthcare researchers, policymakers, and organizations, several with ties to Cochrane, which is recognized as representing an international gold standard for high-quality, systematic reviews. In this commentary, we introduce the newly established Cochrane Rapid Reviews Methods Group developed to play a leading role in guiding the production of rapid reviews given they are increasingly employed as a research synthesis tool to support timely evidence-informed decision-making. We discuss how the group was formed and outline the group's structure and remit. We also discuss the need to establish a more robust evidence base for rapid reviews in the published literature, and the importance of promoting registration of rapid review protocols in an effort to promote efficiency and transparency in research. As with standard systematic reviews, the core principles of evidence-based synthesis should apply to rapid reviews in order to minimize bias to the extent possible. The Cochrane Rapid Reviews Methods Group will serve to establish a network of rapid review stakeholders and provide a forum for discussion and training. By facilitating

  18. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine… in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy.

  19. Ecotoxicology: Lead

    Science.gov (United States)

    Scheuhammer, A.M.; Beyer, W.N.; Schmitt, C.J.; Jorgensen, Sven Erik; Fath, Brian D.

    2008-01-01

    Lead (Pb) is a naturally occurring metallic element; trace concentrations are found in all environmental media and in all living things. However, certain human activities, especially base metal mining and smelting; combustion of leaded gasoline; the use of Pb in hunting, target shooting, and recreational angling; the use of Pb-based paints; and the uncontrolled disposal of Pb-containing products such as old vehicle batteries and electronic devices have resulted in increased environmental levels of Pb, and have created risks for Pb exposure and toxicity in invertebrates, fish, and wildlife in some ecosystems.

  1. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
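
    The maximum entropy principle invoked above has a classic closed-form structure: subject to linear constraints, the maximizer is an exponential family. A toy Python illustration (Jaynes' die: fix the mean, find the maximum-entropy distribution on {1,...,6}; the target mean 4.5 is an arbitrary example):

        # Among all distributions on 1..6 with mean 4.5, the maximum-entropy one
        # has the form p_i proportional to exp(lam*i); solve for lam numerically.
        import numpy as np
        from scipy.optimize import brentq

        vals = np.arange(1, 7)

        def mean_for(lam):
            w = np.exp(lam * vals)
            return (w / w.sum()) @ vals

        lam = brentq(lambda l: mean_for(l) - 4.5, -10.0, 10.0)
        p = np.exp(lam * vals)
        p /= p.sum()
        print(np.round(p, 3), "entropy:", -(p * np.log(p)).sum())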

  2. Leading men

    DEFF Research Database (Denmark)

    Bekker-Nielsen, Tønnes

    2016-01-01

    Through a systematic comparison of c. 50 careers leading to the koinarchate or high priesthood of Asia, Bithynia, Galatia, Lycia, Macedonia and coastal Pontus, as described in funeral or honorary inscriptions of individual koinarchs, it is possible to identify common denominators but also disting…

  3. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u′_x (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
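
    A numerical illustration of the semidiscrete setting (lattice in space, continuous time integrated here by forward Euler) with the bistable Nagumo nonlinearity mentioned in the abstract; grid size, rates and time step are invented values:

        # Semidiscrete reaction-diffusion on a (truncated, periodic) lattice:
        # du_x/dt = k*(u_{x-1} - 2u_x + u_{x+1}) + u(1-u)(u-a).
        import numpy as np

        k, a, dt, steps = 1.0, 0.3, 0.01, 2000
        u = np.where(np.arange(100) < 50, 1.0, 0.0)       # step initial datum
        for _ in range(steps):
            lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # discrete Laplacian
            u = u + dt * (k * lap + u * (1 - u) * (u - a))
        # the solution stays within [0, 1], as a maximum principle would suggest
        print(round(u.min(), 4), round(u.max(), 4))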

  4. Irreversible and endoreversible behaviors of the LD-model for heat devices: the role of the time constraints and symmetries on the performance at maximum χ figure of merit

    Science.gov (United States)

    Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.

    2016-07-01

    The main unified energetic properties of low-dissipation heat engines and refrigerators allow for both endoreversible and irreversible configurations. This is accomplished by means of the constraints imposed on the characteristic global operation time or the contact times between the working system and the external heat baths, modulated by the dissipation symmetries. A suited unified figure of merit (which becomes power output for heat engines) is analyzed and the influence of the symmetries on the optimum performance discussed. The obtained results, independent of any heat transfer law, are compared with those obtained from Carnot-like heat models where specific heat transfer laws are needed. Thus, it is shown that only the inverse phenomenological law, often used in linear irreversible thermodynamics, correctly reproduces all optimized values for both the efficiency and the coefficient of performance.

  5. Verification of radiation exposure using lead shields

    International Nuclear Information System (INIS)

    Hayashida, Keiichi; Yamamoto, Kenyu; Azuma, Masami

    2016-01-01

    Prolonged use of radiation during IVR (interventional radiology) procedures leads to increased exposure of the IVR operator. In order to provide a good environment in which the operator can work without worrying about exposure, the authors examined exposure reduction with the shields attached to the angiography instrument, i.e. a lead curtain and lead glass. In this study, a lumbar spine phantom was irradiated using the instrument, and the radiation leaking outside, with and without shields, was measured with an ionization-chamber survey meter. The meter was placed at the position considered to correspond to the IVR operator and moved vertically from 20 to 100 cm above the X-ray focus in 10 cm intervals. The radiation 80 cm above the X-ray focus was the maximum without a shield and was hardly reduced by the lead curtain alone; it was, however, reduced by the lead curtain plus lead glass. Similar reduction effects were observed at positions 90-100 cm above the X-ray focus. On the other hand, the radiation 70 cm above the X-ray focus was not reduced by either shield, because that position corresponded to the gap between the lead curtain and the lead glass. The radiation 20-60 cm above the X-ray focus was reduced by the lead curtain even without the lead glass. These results show that the lead curtain and lead glass attached to the instrument can reduce the radiation exposure of the IVR operator, and that using these shields is a good way for the IVR operator to work safely. (author)

  6. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  7. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]·exp{−[(1/lambda)+(1/theta)]t} with t > 0. Also, the steady-state availability is A(∞) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X₁, X₂, …, Xₙ, Y₁, Y₂, …, Yₙ, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
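
    A minimal sketch of the resulting plug-in estimator: with exponential models parameterized by their means, the MLEs of lambda and theta are the sample means of the failure and repair times, and A(t) is estimated by substitution. The data below are invented, not from the paper's examples.

        import numpy as np

        x = np.array([120.0, 95.0, 210.0, 160.0, 130.0])  # times to failure (hours)
        y = np.array([4.0, 6.5, 3.0, 5.0, 4.5])           # times to repair (hours)

        lam_hat, theta_hat = x.mean(), y.mean()           # MLEs of lambda and theta

        def availability(t):
            s = lam_hat + theta_hat
            return lam_hat / s + (theta_hat / s) * np.exp(-(1/lam_hat + 1/theta_hat) * t)

        print("A(24 h) =", availability(24.0))
        print("A(inf)  =", lam_hat / (lam_hat + theta_hat))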

  8. Who Leads China's Leading Universities?

    Science.gov (United States)

    Huang, Futao

    2017-01-01

    This study attempts to identify the major characteristics of two different groups of institutional leaders in China's leading universities. The study begins with a review of relevant literature and theory. Then, there is a brief introduction to the selection of party secretaries, deputy secretaries, presidents and vice presidents in leading…

  9. Lead content of roadside fruit and berries

    Energy Technology Data Exchange (ETDEWEB)

    Fowles, G W.A.

    1976-01-01

    Blackberries, elderberries, hawthorn berries, holly berries and rose hips have been examined for their lead content, which has been shown to be directly related to the proximity of the growing fruit and berries to roads, the traffic density and the time of exposure. The maximum levels found (in ppm for undried fruit and berries) were blackberries 0.85, elderberries 6.77, hawthorn berries 23.8, holly berries 3.5 and rose hips 1.45. Very thorough washing with water removed 40-60% of the lead from heavily contaminated fruit and berries. When elderberries were used for winemaking over 60% of the lead was extracted and remained in solution in the wine. 25 references, 4 tables.

  10. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were…
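
    The core inversion step, recovering salinity from a precise density measurement through an equation of state, can be sketched as follows. A linearized equation of state with assumed expansion and contraction coefficients stands in here for the full seawater equation of state used in practice, and all numbers are illustrative.

        from scipy.optimize import brentq

        RHO0, S0, T0 = 1027.0, 35.0, 10.0   # reference density (kg/m^3), salinity, temperature
        BETA = 7.6e-4                        # haline contraction per (g/kg), assumed
        ALPHA = 1.7e-4                       # thermal expansion per K, assumed

        def rho(S, T):                       # linearized equation of state
            return RHO0 * (1.0 + BETA * (S - S0) - ALPHA * (T - T0))

        def salinity_from_density(rho_meas, T):
            return brentq(lambda S: rho(S, T) - rho_meas, 0.0, 45.0)

        print(salinity_from_density(1028.2, 4.0))   # ~35.2 g/kg for these inputs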

  11. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
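
    The tree algorithms the paper generalizes are easy to state in code. A small Python sketch of the Fitch small-parsimony count on a toy tree (tree shape and leaf states invented):

        def fitch(node, states):
            """Return (state_set, score) for the subtree rooted at node."""
            if isinstance(node, str):                      # leaf: a named tip
                return {states[node]}, 0
            (ls, lc), (rs, rc) = fitch(node[0], states), fitch(node[1], states)
            inter = ls & rs
            if inter:                                      # agreement: no new change
                return inter, lc + rc
            return ls | rs, lc + rc + 1                    # disagreement: one change

        tree = (("A", "B"), ("C", "D"))                    # topology ((A,B),(C,D))
        print(fitch(tree, {"A": "G", "B": "G", "C": "T", "D": "G"}))  # ({'G'}, 1)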

  12. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  13. Leading lead through the LHC

    CERN Multimedia

    2011-01-01

    Three of the LHC experiments - ALICE, ATLAS and CMS - will be studying the upcoming heavy-ion collisions. Given the excellent results from the short heavy-ion run last year, expectations have grown even higher in experiment control centres. Here they discuss their plans:   ALICE For the upcoming heavy-ion run, the ALICE physics programme will take advantage of a substantial increase of the LHC luminosity with respect to last year’s heavy-ion run.  The emphasis will be on the acquisition of rarely produced signals by implementing selective triggers. This is a different operation mode to that used during the first low luminosity heavy-ion run in 2010, when only minimum-bias triggered events were collected. In addition, ALICE will benefit from increased acceptance coverage by the electromagnetic calorimeter and the transition radiation detector. In order to double the amount of recorded events, ALICE will exploit the maximum available bandwidth for mass storage at 4 GB/s and t...

  14. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  15. Blood lead and lead-210 origins in residents of Toulouse

    International Nuclear Information System (INIS)

    Servant, J.; Delapart, M.

    1981-01-01

    Blood lead and lead-210 analyses were performed on blood samples from non-smoking residents of Toulouse (city of 400,000 inhabitants). Simultaneous surface soil lead content determinations were carried out by the same procedure on rural zone samples of southwestern France. The observed isotopic ratios were compared in order to evaluate food chain contamination. For an average of 19.7 ± 5.8 μg (100 cc)⁻¹ of lead in blood, atmospheric contamination amounts to 20%, estimated as follows: 6% from direct inhalation and 14% from dry deposits on vegetation absorbed as food. The natural levels carried over by the food chain reach 14.9 μg (100 cc)⁻¹ and have a ²¹⁰Pb/Pb concentration ratio of 0.055 dpm μg⁻¹. These results lead to a maximum value of 15 μg (100 cc)⁻¹ for natural lead in human blood according to the ICRP model. (author)

  16. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
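
    The optimization described above can be sketched numerically: with a Weibull vulnerability curve k(psi), the sustainable steady flow E(psi_leaf) is the integral of k from the leaf to the soil water potential, which saturates as the leaf potential drops. All parameter values below are illustrative assumptions, not fitted to the database mentioned in the abstract.

        import numpy as np

        k_max, d, c = 5.0, 2.0, 2.5      # conductivity scale, Weibull scale (MPa), shape
        psi_soil = -0.1                  # well-watered soil water potential, MPa

        def k(psi):                      # conductivity loss from cavitation
            return k_max * np.exp(-(np.abs(psi) / d) ** c)

        psi_leaf = np.linspace(psi_soil, -8.0, 2000)
        dE = 0.5 * (k(psi_leaf[1:]) + k(psi_leaf[:-1])) * -np.diff(psi_leaf)
        E = np.concatenate(([0.0], np.cumsum(dE)))   # supply function E(psi_leaf)
        psi_95 = psi_leaf[np.searchsorted(E, 0.95 * E[-1])]
        print(f"E_max ~ {E[-1]:.2f}; 95% of it is reached by psi_leaf ~ {psi_95:.2f} MPa")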

  17. Photocatalyzed removal of lead ion from lead-chelator solution

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Young Hyun; Na, Jung Won; Sung, Ki Woung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]

    1994-07-01

    The present study was undertaken to examine the influence of such chelating agents on the ease and speed of photocatalyzed metal removal and deposition. With excess EDTA, the free EDTA competes with Pb for oxidation, and at a tenfold excess, no lead oxidation (hence removal) occurs. With insufficient EDTA, the corresponding initial concentration of Pb-EDTA is decreased; after its destruction, the remaining Pb²⁺ is removed more slowly, at rates found with lead nitrate solution. The net result is that the maximum rate of lead deposition occurs at the stoichiometric 1:1 ratio of EDTA : Pb²⁺.

  18. Blood lead and carboxyhemoglobin levels in chainsaw operators.

    Science.gov (United States)

    van Netten, C; Brubaker, R L; Mackenzie, C J; Godolphin, W J

    1987-06-01

    Fallers in the British Columbia west coast lumber industry often work in climatic and local conditions where little ventilation in their immediate environment is possible. Under these conditions carbon monoxide (CO) and lead fumes from exhaust gases could build up and become a serious occupational hazard. This study monitored the environmental exposure of six fallers to carbon monoxide, nitrogen oxides, and lead under conditions where buildup of these agents would be expected. At the same time blood samples were taken to correlate these environmental concentrations to carboxyhemoglobin (COHb) and blood lead levels. Although there was a highly significant difference between the fallers and the controls regarding the exposure to CO and lead as well as their corresponding COHb and blood lead levels, the environmental and blood concentration of the agents in question did not exceed the maximum allowable concentrations. Temporary short fluctuations in carboxyhemoglobin levels were not monitored in this study and cannot be ruled out as a potential occupational hazard.

  19. 75 FR 76336 - Notice of Data Availability Regarding Two Studies of Ambient Lead Concentrations Near a General...

    Science.gov (United States)

    2010-12-08

    The studies are located in Docket ID No. EPA-HQ-OAR-2006-0735. Modeling results show aircraft engine run-up is the most important source contribution to the maximum lead concentration. Sensitivity analysis shows that engine run-up time, lead concentration in aviation gasoline, and…

  1. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  2. Drivers anticipate lead-vehicle conflicts during automated longitudinal control: Sensory cues capture driver attention and promote appropriate and timely responses.

    Science.gov (United States)

    Morando, Alberto; Victor, Trent; Dozza, Marco

    2016-12-01

    Adaptive Cruise Control (ACC) has been shown to reduce the exposure to critical situations by maintaining a safe speed and headway. It has also been shown that drivers adapt their visual behavior in response to the driving task demand with ACC, anticipating an impending lead vehicle conflict by directing their eyes to the forward path before a situation becomes critical. The purpose of this paper is to identify the causes related to this anticipatory mechanism, by investigating drivers' visual behavior while driving with ACC when a potential critical situation is encountered, identified as a forward collision warning (FCW) onset (including false positive warnings). This paper discusses how sensory cues capture attention to the forward path in anticipation of the FCW onset. The analysis used the naturalistic database EuroFOT to examine visual behavior with respect to two manually-coded metrics, glance location and glance eccentricity, and then related the findings to vehicle data (such as speed, acceleration, and radar information). Three sensory cues (longitudinal deceleration, looming, and brake lights) were found to be relevant for capturing driver attention and increase glances to the forward path in anticipation of the threat; the deceleration cue seems to be dominant. The results also show that the FCW acts as an effective attention-orienting mechanism when no threat anticipation is present. These findings, relevant to the study of automation, provide additional information about drivers' response to potential lead-vehicle conflicts when longitudinal control is automated. Moreover, these results suggest that sensory cues are important for alerting drivers to an impending critical situation, allowing for a prompt reaction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimates of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations as the temperature changes. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.
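
    For the linear (constant internal resistance) module model implied above, the open/short measurements determine the maximum power as P_max = V_oc * I_sc / 4. The sketch below only illustrates the roughly 10% mode discrepancy with invented numbers; the paper's calibration method itself is not reproduced here.

        # Two switch modes give different (V_oc, I_sc) pairs and hence different
        # P_max estimates; all values are invented for illustration.
        v_oc_1, i_sc_1 = 4.10, 1.32      # open-to-short measurement
        v_oc_2, i_sc_2 = 3.90, 1.26      # short-to-open measurement

        p1 = v_oc_1 * i_sc_1 / 4
        p2 = v_oc_2 * i_sc_2 / 4
        print(f"mode 1: {p1:.3f} W, mode 2: {p2:.3f} W, "
              f"spread: {abs(p1 - p2) / ((p1 + p2) / 2):.1%}")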

  4. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation

  5. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces…

  6. Acculturation does not necessarily lead to increased physical activity during leisure time: a cross-sectional study among Turkish young people in the Netherlands

    NARCIS (Netherlands)

    Hosper, Karen; Klazinga, Niek S.; Stronks, Karien

    2007-01-01

    Background: Non-Western migrant populations living in Western countries are more likely to be physically inactive during leisure time than host populations. It is argued that this difference will disappear as they acculturate to the culture of the host country. We explored whether this is also true

  7. A new system of computer-assisted navigation leading to reduction in operating time in uncemented total hip replacement in a matched population.

    Science.gov (United States)

    Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T

    2018-05-01

    Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technology has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are done using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed without the need to register the anterior pelvic plane (BrainLab hip 6.0), which has been shown to improve the accuracy of THR. The purpose of this study was to find out whether the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer-navigated primary uncemented total hip replacement using two methods of registration. Group 1 included 128 cases that were performed using BrainLab versions 2.1-5.1. This version relied on the acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases that were performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes using the old method of registration and 50.87 (33-74) minutes using the new method of registration. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration for improving the operating time in computer-navigated primary uncemented total hip replacements.

  8. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of searching for the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network for accelerating Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated, including the Elliot symmetric activation function, Gaussian activation function, Wavelet activation function and Hyperbolic tangent activation function. The performance of these post-optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results indicate that the Hyperbolic tangent activation function and the Elliot symmetric activation function are well suited to MAX-3SAT logic programming.
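
    The MAX-3SAT objective itself is simple to state in code: count satisfied clauses and search over assignments. A plain-Python greedy-flip sketch (a stand-in illustration, not the paper's Hopfield-network method; the formula is a toy example):

        import random

        clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1), (-2, -3, 1)]  # toy 3-CNF
        n_vars = 3

        def satisfied(assign, clauses):   # literals are signed 1-based indices
            return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in cl)
                       for cl in clauses)

        random.seed(0)
        assign = [random.random() < 0.5 for _ in range(n_vars)]
        best, improved = satisfied(assign, clauses), True
        while improved:                   # flip any variable while it helps
            improved = False
            for i in range(n_vars):
                assign[i] = not assign[i]
                score = satisfied(assign, clauses)
                if score > best:
                    best, improved = score, True
                else:
                    assign[i] = not assign[i]   # revert unhelpful flip
        print(best, "of", len(clauses), "clauses satisfied")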

  9. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced… boundary–related processes.

  10. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  11. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
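
    The deterministic bound attributed to McGarr (2014) above is a one-line calculation: maximum seismic moment M0 <= G * dV (shear modulus times net injected volume), converted to moment magnitude with the standard Hanks-Kanamori relation. The injected volume below is an assumed example value, not from the abstract.

        import math

        G = 3.0e10     # shear modulus, Pa (typical crustal value)
        dV = 1.0e5     # net injected fluid volume, m^3 (assumed)

        m0_max = G * dV                                     # bound on seismic moment, N m
        mw_max = (2.0 / 3.0) * (math.log10(m0_max) - 9.1)   # Hanks-Kanamori
        print(f"M0_max = {m0_max:.1e} N m -> Mw_max ~ {mw_max:.1f}")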

  12. Explanatory factors of work-life balance and time management leading to well-being in the view of Paraná accountants

    OpenAIRE

    Stella Maris Lima Altoé; Simone Bernardes Voese

    2018-01-01

    Recently, work-life balance has been the focus of debates addressing the integration of the work and family domains. These discussions seek to reduce the role conflicts inherent in these spheres; in certain professions, intense workloads intensify such conflicts. The accounting profession in particular presents heavy demands in certain periods, so the professional is overwhelmed with tasks and needs to manage his time pr...

  13. Explanatory factors of work-life balance and time management leading to well-being in the view of Paraná accountants

    Directory of Open Access Journals (Sweden)

    Stella Maris Lima Altoé

    2018-01-01

    Full Text Available Recently, work-life balance has been the focus of debates addressing the integration of the work and family domains. These discussions seek to reduce the role conflicts inherent in these spheres; in certain professions, intense workloads intensify such conflicts. The accounting profession in particular presents heavy demands in certain periods, so the professional is overwhelmed with tasks and needs to manage his time properly. This study therefore aims to identify the factors that explain the perception of accountants from Paraná regarding their work-life balance and time management. Through an online survey, using a research instrument adapted from the study of Wong and Ko (2009), 267 accountants registered in the state of Paraná answered the questionnaire. Descriptive statistics and factor analysis were used to analyze the data. The factor analysis identified three factors explaining work-life balance: (1) work support; (2) commitment to work; and (3) commitment to family and personal aspects. In addition to these three work-life balance factors, time management was considered. The results of this research are in line with the findings of Wong and Ko (2009). As a scientific contribution, this study enables relevant discussions on aspects related to the quality of life and the performance of accounting professionals.

  14. Combined application of sub-toxic level of silver nanoparticles with low powers of 2450 MHz microwave radiation lead to kill Escherichia coli in a short time

    Directory of Open Access Journals (Sweden)

    Bardia Varastehmoradi

    2013-09-01

    Full Text Available   Objective(s): Electromagnetic radiation, which has lethal effects on living cells, is currently also considered a disinfective physical agent. Materials and Methods: In this investigation, silver nanoparticles were applied to enhance the lethal action of low powers (100 and 180 W) of 2450 MHz electromagnetic radiation, especially against Escherichia coli ATCC 8739. Silver nanoparticles were prepared biologically and used in the subsequent experiments. Sterile normal saline solution was prepared and supplemented with silver nanoparticles to reach the sub-inhibitory concentration (6.25 μg/mL). This diluted silver colloid, as well as a nanoparticle-free control solution, was inoculated with the test microorganisms, particularly E. coli. The suspensions were treated separately with 2450 MHz electromagnetic radiation for different time intervals in a microwave oven operated at low power (100 W and 180 W). The viable counts of bacteria before and after each radiation time were determined by the colony-forming unit (CFU) method. Results: The addition of silver nanoparticles significantly decreased the radiation time required to kill vegetative forms of the microorganisms. However, the nanoparticles showed no combined effect with low-power electromagnetic radiation against Bacillus subtilis spores. Conclusion: The cumulative effect of silver nanoparticles and low-power electromagnetic radiation may be useful in medical centers to reduce contamination in polluted drainage, liquid waste materials and some devices.

  15. Release of lead from bone in pregnancy and lactation

    International Nuclear Information System (INIS)

    Manton, W.I.; Angle, C.R.; Stanek, K.L.; Kuntzelman, D.; Reese, Y.R.; Kuehnemann, T.J.

    2003-01-01

    Concentrations and isotope ratios of lead in blood, urine, 24-h duplicate diets, and hand wipes were measured for 12 women from the second trimester of pregnancy until at least 8 months after delivery. Six bottle fed and six breast fed their infants. One bottle feeder fell pregnant for a second time, as did a breast feeder, and each was followed semicontinuously for totals of 44 and 54 months, respectively. Bone resorption rather than dietary absorption controls changes in blood lead, but in pregnancy the resorption of trabecular and cortical bone are decoupled. In early pregnancy, only trabecular bone (presumably of low lead content) is resorbed, causing blood leads to fall more than expected from hemodilution alone. In late pregnancy, the sites of resorption move to cortical bone of higher lead content and blood leads rise. In bottle feeders, the cortical bone contribution ceases immediately after delivery, but any tendency for blood leads to fall may be compensated by the effect of hemoconcentration produced by the postpartum loss of plasma volume. In lactation, the whole skeleton undergoes resorption and the blood leads of nursing mothers continue to rise, reaching a maximum 6-8 months after delivery. Blood leads fall from pregnancy to pregnancy, implying that the greatest risk of lead toxicity lies with first pregnancies

  16. Real-Time Study of the Interaction between G-Rich DNA Oligonucleotides and Lead Ion on DNA Tetrahedron-Functionalized Sensing Platform by Dual Polarization Interferometry.

    Science.gov (United States)

    Wang, Shuang; Lu, Shasha; Zhao, Jiahui; Huang, Jianshe; Yang, Xiurong

    2017-11-29

    G-quadruplexes play roles in numerous physiological and pathological processes of organisms. Owing to the unique properties of the G-quadruplex (e.g., forming G4/hemin complexes with catalytic activity and electron acceptability, and binding metal ions, proteins and fluorescent ligands), it has been widely applied in biosensing, yet its formation process is not fully understood. Here, a DNA tetrahedron platform with high reproducibility, regenerative ability and a time-saving building process was coupled with the dual polarization interferometry technique for real-time, label-free investigation of the specific interaction between guanine-rich single-stranded DNA (G-rich ssDNA) and Pb2+. The oriented immobilization of the probes greatly decreased the steric hindrance and improved the accessibility of the probes to Pb2+ ions. By monitoring the whole formation process of the G-quadruplex in real time, we inferred that the probes on the tetrahedron platform initially stood on the sensing surface in a random-coil conformation, that the G-rich ssDNA then formed unstable G-quartets through H-bonding and cation binding, and that a completely folded, stable quadruplex structure subsequently emerged through relatively slow strand rearrangements. On the basis of these studies, we also developed a novel sensing platform for the specific and sensitive determination of Pb2+ and its chelating agent ethylenediaminetetraacetic acid. This study not only provides a proof of concept for studying the conformational dynamics of G-quadruplex-related drugs and pathogenesis, but also enriches the biosensor toolbox by combining nanomaterials with interface techniques.

  17. Real time control of fully non-inductive operation in Tore Supra leading to 6 minutes, 1 giga-joule plasma discharges

    International Nuclear Information System (INIS)

    Van Houtte, D.; Martin, G.; Becoulet, A.; Saoutic, B.

    2004-01-01

    The experimental programme of Tore Supra (a = 0.72 m, R = 2.4 m, I_p < 1.7 MA, B_T < 4.5 T) has been devoted in 2003 to study simultaneously heat removal capability and particle exhaust in steady-state fully non-inductive current drive discharges. This required both advanced technology integration and steady-state real time plasma control. In particular, an improvement of the plasma position within a few millimetre range, and new real time cross controls between radio frequency (RF) power and various actuators built around a shared memory network, have allowed Tore Supra to access a powerful steady-state regime with an improved safety level for the actively cooled plasma facing components. Feedback controlled fully non-inductive plasma discharges have been sustained in a steady-state regime up to 6 minutes with a new world record of injected-extracted energy exceeding 1 GJ. Advanced tools, experimental results and brief physics analysis of these discharges are presented and discussed. (authors)

  18. Real time control of fully non-inductive operation in Tore Supra leading to 6 minutes, 1 giga-joule plasma discharges

    Energy Technology Data Exchange (ETDEWEB)

    Van Houtte, D.; Martin, G.; Becoulet, A.; Saoutic, B

    2004-07-01

    The experimental programme of Tore Supra (a = 0.72 m, R = 2.4 m, I_p < 1.7 MA, B_T < 4.5 T) has been devoted in 2003 to study simultaneously heat removal capability and particle exhaust in steady-state fully non-inductive current drive discharges. This required both advanced technology integration and steady-state real time plasma control. In particular, an improvement of the plasma position within a few millimetre range, and new real time cross controls between radio frequency (RF) power and various actuators built around a shared memory network, have allowed Tore Supra to access a powerful steady-state regime with an improved safety level for the actively cooled plasma facing components. Feedback controlled fully non-inductive plasma discharges have been sustained in a steady-state regime up to 6 minutes with a new world record of injected-extracted energy exceeding 1 GJ. Advanced tools, experimental results and brief physics analysis of these discharges are presented and discussed. (authors)

  19. Mechanism of lead removal by waste materials

    International Nuclear Information System (INIS)

    Qaiser, S.; Saleemi, A.R.; Ahmed, M.M.

    2007-01-01

    Heavy metal ions are priority pollutants, due to their toxicity and mobility in natural water ecosystems. The discharge of heavy metals into aquatic ecosystems has become a matter of concern in Pakistan over the last few decades. These contaminants enter aquatic systems mainly as a result of various industrial operations. The metals of concern include lead, chromium, zinc, copper, nickel and uranium. Lead is one of the most hazardous and toxic metals. It is used as an industrial raw material in the manufacture of storage batteries, pigments, leaded glass, fuels, photographic materials, matches and explosives. Conventional methods for treatment of dissolved lead include precipitation, adsorption, coagulation/flotation, sedimentation, reverse osmosis and ion exchange. Each process has its merits and limitations in application. Adsorption by activated carbon and ion exchange using commercial ion exchange resins are very expensive processes, especially for a developing country like Pakistan. The present research was conducted to identify waste materials that can be utilized to remove lead from industrial wastewater. Natural wastes in the form of leaves and ash contain considerable amounts of CaO, MgO, Na2O, SiO2 and Al2O3, which can be utilized for precipitation and adsorption. Utilizing waste materials to remove lead from industrial wastewater is the basic theme of this research. The waste materials used in this research were maple leaves, Pongamia pinnata leaves, coal ash and maple leaves ash. The parameters studied were reaction time, precipitant dose, pH and temperature. It was found that maple leaves ash had the maximum lead removal capacity, 19.24 mg/g, followed by coal ash, 13.2 mg/g. The optimal pH was 5 for maple leaves and Pongamia pinnata leaves, and 4 for coal ash and maple leaves ash. Removal capacity decreased with increasing temperature. The major removal mechanisms were adsorption and precipitation.

  20. Climate processes shape the evolution of populations and species leading to the assembly of modern biotas - examples along a continuum from shallow to deep time

    Science.gov (United States)

    Jacobs, D. K.

    2014-12-01

    California experiences droughts, so let's begin with the effects of streamflow variation on population evolution in a coastal lagoon-specialist endangered fish, the tidewater goby. Streamflow controls the closing and opening of lagoons to the sea, determining genetic isolation or gene flow; here evolution is a function of habitat preference for closing lagoons. Other estuarine fishes, with different habitat preferences, differentiate at larger spatial scales in response to longer glacio-eustatic control of estuarine habitat. Species of giraffes in Africa are a puzzle: why do the ranges of large, motile, potentially interbreeding species occur in contact with each other without hybridization? The answer resides in the timing of seasonal precipitation. Although the degree of seasonality of climate does not vary much between species, the timing of precipitation and of the seasonal "greenup" does. This provides a selective advantage to reproductive isolation, as reproductive timing can be coordinated in each region with seasonal browse availability for lactating females. Convective rainfall in Africa follows the sun, and solar intensity is influenced by the precession cycle such that more extensive summer rains fell across the Sahara and South Asia early in the Holocene; this may also contribute to the genetic isolation and speciation of giraffes and other savanna species. There also appears to be a correlation with the rarity (CITES designation) of modern wetland birds, as the dramatic drying of the late Holocene landscape contributes to this conservation concern. Turning back to the West Coast, we find the most diverse temperate coastal fauna in the world, yet this diversity evolved, and is a relict of diversity accumulation, during the apex of upwelling in the late Miocene, driven by the reglaciation of Antarctica. Lastly, deep-sea evolution is broadly constrained by the transitions from greenhouse to icehouse worlds over the last 90 Myr, as broad periods of warm

  1. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF), and identification of the system parameters is accomplished by maximizing the LF with respect to those parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
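
    The general recipe MXLKID implements, maximizing a likelihood function over unknown parameters given noisy measurements, can be sketched for a toy first-order decay model as below (a generic Python/SciPy illustration under Gaussian-noise assumptions, not the MXLKID algorithm itself):

        import numpy as np
        from scipy.optimize import minimize

        # Noisy measurements of the model x(t) = exp(-theta * t).
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 50)
        theta_true, sigma = 0.7, 0.05
        y = np.exp(-theta_true * t) + rng.normal(0.0, sigma, t.size)

        def neg_log_likelihood(params):
            theta, log_sigma = params
            s = np.exp(log_sigma)            # parametrize so sigma > 0
            resid = y - np.exp(-theta * t)
            return 0.5 * np.sum(resid ** 2) / s ** 2 + y.size * log_sigma

        # Maximizing the LF is minimizing the negative log-likelihood.
        fit = minimize(neg_log_likelihood, x0=[1.0, 0.0], method="Nelder-Mead")
        print(f"theta_hat = {fit.x[0]:.3f} (true value {theta_true})")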

  2. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  3. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from a limited number of solar panels. Because the sun's illumination changes with the angle of incidence of solar radiation and with panel temperature, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar-panel power source, the maximum power point varies as a result of changes in the panel's electrical characteristics, which in turn are functions of irradiance, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill-climbing methods and computational methods. The techniques vary in degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill-climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; no mathematical model is needed, so the control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the Microchip microcontroller unit control card and
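
    For reference, the perturbation and observation (hill-climbing) baseline mentioned above fits in a few lines; the sketch below uses a toy P-V curve and illustrative names, not a real panel model:

        def perturb_and_observe(power_at, v0=17.0, dv=0.1, steps=200):
            """Hill-climbing MPPT: keep stepping the operating voltage in the
            direction that last increased power; reverse on a decrease."""
            v, p_prev, step = v0, power_at(v0), dv
            for _ in range(steps):
                v += step
                p = power_at(v)
                if p < p_prev:       # power dropped, so reverse direction
                    step = -step
                p_prev = p
            return v, p_prev

        def pv_power(v):
            """Toy P-V curve with a maximum power point near 17.5 V."""
            return max(0.0, 60.0 - (v - 17.5) ** 2)

        v_mpp, p_mpp = perturb_and_observe(pv_power)
        print(f"tracked MPP: {v_mpp:.2f} V, {p_mpp:.1f} W")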

  4. Invited Commentary: Little Steps Lead to Huge Steps-It's Time to Make Physical Inactivity Our Number 1 Public Health Enemy.

    Science.gov (United States)

    Church, Timothy S

    2016-11-01

    The analysis plan and article in this issue of the Journal by Evenson et al. (Am J Epidemiol 2016;184(9):621-632) are well-conceived, thoughtfully conducted, and tightly written. The authors utilized the National Health and Nutrition Examination Survey data set to examine the association between accelerometer-measured physical activity level and mortality and found that meeting the 2013 federal Physical Activity Guidelines resulted in a 35% reduction in risk of mortality. The timing of these findings could not be better, given the ubiquitous nature of personal accelerometer devices. The masses are already equipped to routinely quantify their activity, and now we have the opportunity and responsibility to provide evidence-based, tailored physical activity goals. We have evidence-based physical activity guidelines, mass distribution of devices to track activity, and now scientific support indicating that meeting the physical activity goal, as assessed by these devices, has substantial health benefits. All of the pieces are in place to make physical inactivity a national priority, and we now have the opportunity to positively affect the health of millions of Americans. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications that make it amenable to the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  6. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads that can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy

  7. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  8. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  9. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data Analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation, we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the rising phase of solar activity. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
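
    Higuchi's method itself is short: build down-sampled curve lengths L(k) and read the fractal dimension off the slope of log L(k) against log(1/k). The sketch below is a generic NumPy implementation of the published algorithm, not the authors' code:

        import numpy as np

        def higuchi_fd(x, k_max=10):
            """Higuchi's fractal dimension of a 1-D time series."""
            x = np.asarray(x, dtype=float)
            n = x.size
            lk = []
            for k in range(1, k_max + 1):
                lengths = []
                for m in range(k):                 # the k offset sub-series
                    idx = np.arange(m, n, k)
                    dist = np.abs(np.diff(x[idx])).sum()
                    norm = (n - 1) / ((idx.size - 1) * k)
                    lengths.append(dist * norm / k)
                lk.append(np.mean(lengths))
            ks = np.arange(1, k_max + 1)
            slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
            return slope

        rng = np.random.default_rng(1)
        print(higuchi_fd(rng.normal(size=2000)))             # white noise: ~2
        print(higuchi_fd(np.sin(np.linspace(0, 20, 2000))))  # smooth curve: ~1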

  10. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth-cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether the maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
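
    Taking the reported plateau at face value, a few lines of arithmetic show how quickly survival decays beyond the plateau age (a toy illustration using the ~50% annual death risk and the men's plateau age of 103 from the abstract):

        # Survival implied by a flat 50% annual death risk beyond age 103.
        q = 0.5
        for age in (105, 107, 110):
            p_survive = (1 - q) ** (age - 103)
            print(f"P(a 103-year-old reaches {age}) = {p_survive:.4f}")
        # -> 0.2500, 0.0625, 0.0078: even with a plateau, survival is tiny.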

  11. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  12. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have until recently deviated significantly from existing observations. Good coverage of polar night regions by GOMOS data has allowed us, for the first time, to obtain observational spatial and temporal distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can also be observed at very high latitudes, not only at the beginning and end of winter but also in mid-winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  13. Global CO2 rise leads to reduced maximum stomatal conductance in Florida vegetation

    NARCIS (Netherlands)

    Lammertsma, E.I.; de Boer, H.J.; Dekker, S.C.; Dilcher, D.L.; Lotter, A.F.; Wagner-Cremer, F.

    2011-01-01

    A principal response of C3 plants to increasing concentrations of atmospheric CO2 is to reduce transpirational water loss by decreasing stomatal conductance (gs) while simultaneously increasing assimilation rates. Via this adaptation, vegetation has the ability to alter hydrology and climate.

  14. Gas cooled leads

    International Nuclear Information System (INIS)

    Shutt, R.P.; Rehak, M.L.; Hornik, K.E.

    1993-01-01

    The intent of this paper is to cover as completely as possible, and in sufficient detail, the topics relevant to lead design. The first part identifies the problems associated with lead design, states the mathematical formulation, and shows the results of numerical and analytical solutions. The second part presents the results of a parametric study whose object is to determine the best choice of cooling method, material, and geometry. These findings are applied in a third part to the design of high-current leads whose end temperatures are determined by the surrounding equipment. It is found that the cooling method and improved heat transfer are not critical once good heat exchange is established. The optimum extends over a large range of values. The mass flow needed to prevent thermal runaway varies linearly with current above a given threshold; below that value, the mass flow is constant with current. Transient analysis shows no evidence of hysteresis. If cooling is interrupted, the mass flow needed to restore the lead to its initially cooled state grows exponentially with the time that the lead was left without cooling.

  15. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
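
    To make the mutation-count objective concrete, Fitch's small-parsimony algorithm below counts the minimum number of mutations a fixed tree needs to explain one character of haplotype data; it is a textbook routine shown for intuition (the tree, names and states are illustrative), not the authors' genotype method:

        def fitch_parsimony(tree, leaf_states):
            """Minimum number of mutations on a fixed rooted binary tree.

            `tree` is a nested tuple of leaf names; `leaf_states` maps a
            leaf name to its state at one site (e.g. a SNP, '0'/'1')."""
            mutations = 0

            def post_order(node):
                nonlocal mutations
                if isinstance(node, str):            # leaf
                    return {leaf_states[node]}
                left, right = (post_order(child) for child in node)
                if left & right:
                    return left & right
                mutations += 1                       # a change is forced here
                return left | right

            post_order(tree)
            return mutations

        tree = ((("h1", "h2"), "h3"), "h4")
        site = {"h1": "0", "h2": "0", "h3": "1", "h4": "1"}
        print(fitch_parsimony(tree, site))  # one mutation suffices at this site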

  16. The "neuro-mapping locator" software. A real-time intraoperative objective paraesthesia mapping tool to evaluate paraesthesia coverage of the painful zone in patients undergoing spinal cord stimulation lead implantation.

    Science.gov (United States)

    Guetarni, F; Rigoard, P

    2015-03-01

    Conventional spinal cord stimulation (SCS) generates paraesthesia, as the efficacy of this technique is based on the relationship between the paraesthesia provided by SCS on the painful zone and an analgesic effect on the stimulated zone. Although this basic postulate is based on clinical evidence, the relationship has never been formally demonstrated by scientific studies. There is a need for objective evaluation tools ("transducers") to transpose electrical signals into clinical effects and to guide therapeutic choices. We have developed software at Poitiers University Hospital that allows real-time objective mapping, on a touch-screen interface, of the paraesthesia generated by SCS lead placement and programming during the implantation procedure itself. The purpose of this article is to describe this intraoperative mapping software, in terms of its concept and technical aspects. The Neuro-Mapping Locator (NML) software enables patients with failed back surgery syndrome who are candidates for SCS lead implantation to participate actively in the implantation procedure. Real-time geographical localization of the paraesthesia generated by a percutaneous or multicolumn surgical SCS lead implanted under awake anaesthesia allows intraoperative lead programming, and possibly lead positioning, to be modified with the patient's cooperation. Software updates should enable us to refine objectives related to the use of this tool and minimize observational biases. The ultimate goals of the NML software should not be limited to optimizing the implantation of one specific device in a patient; it should also allow various stimulation strategies to be compared instantaneously, by characterizing new technical parameters such as "coverage efficacy" and "device specificity" on selected subgroups of patients. Another longer-term objective would be to organize these predictive factors into computer-science ontologies, which could constitute robust and helpful data for device selection and programming

  17. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and that for some population sizes the second mode would peak at high activities, which experimentally would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
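
    A minimal sketch of the Glauber dynamics associated with such a pairwise maximum-entropy (Ising-type) model over binary units is given below; the uniform parameters are toy values chosen so that, depending on the initial state, the chain can settle into an unrealistically low- or high-activity mode, which is the bistability discussed above:

        import numpy as np

        def glauber_sample(h, J, n_steps=50000, rng=None):
            """Single-site Glauber dynamics for P(s) ~ exp(h.s + s.J.s/2),
            s in {0,1}^n; returns the population-averaged activity trace."""
            rng = rng or np.random.default_rng(0)
            n = h.size
            s = rng.integers(0, 2, n).astype(float)
            trace = np.empty(n_steps)
            for step in range(n_steps):
                i = rng.integers(n)
                field = h[i] + J[i] @ s - J[i, i] * s[i]  # no self-coupling
                p_on = 1.0 / (1.0 + np.exp(-field))       # P(s_i = 1 | rest)
                s[i] = float(rng.random() < p_on)
                trace[step] = s.mean()
            return trace

        n = 100
        h = -6.0 * np.ones(n)        # uniform bias toward silence
        J = 0.12 * (1 - np.eye(n))   # strong uniform pairwise coupling
        trace = glauber_sample(h, J)
        print(f"late-time mean activity: {trace[len(trace) // 2:].mean():.3f}")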

  18. Determining and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

    The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculation method is described, and the results of the calculation and an analysis of errors are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor, providing real-time, reliable supervision of the reactor. This makes operation convenient and increases reliability

  19. Contaminated lead environments of man: reviewing the lead isotopic evidence in sediments, peat, and soils for the temporal and spatial patterns of atmospheric lead pollution in Sweden.

    Science.gov (United States)

    Bindler, Richard

    2011-08-01

    Clair Patterson and colleagues demonstrated already four decades ago that the lead cycle was greatly altered on a global scale by humans. Moreover, this change occurred long before the implementation of monitoring programs designed to study lead and other trace metals. Patterson and colleagues also developed stable lead isotope analyses as a tool to differentiate between natural and pollution-derived lead. Since then, stable isotope analyses of sediment, peat, herbaria collections, soils, and forest plants have given us new insights into lead biogeochemical cycling in space and time. Three important conclusions from our studies of lead in the Swedish environment conducted over the past 15 years, which are well supported by extensive results from elsewhere in Europe and in North America, are: (1) lead deposition rates at sites removed from major point sources during the twentieth century were about 1,000 times higher than natural background deposition rates a few thousand years ago (~10 mg Pb m(-2) year(-1) vs. 0.01 mg Pb m(-2) year(-1)), and even today (~1 mg Pb m(-2) year(-1)) are still almost 100 times greater than natural rates. This increase from natural background to maximum fluxes is similar to estimated changes in body burdens of lead from ancient times to the twentieth century. (2) Stable lead isotopes ((206)Pb/(207)Pb ratios shown in this paper) are an effective tool to distinguish anthropogenic lead from the natural lead present in sediments, peat, and soils for both the majority of sites receiving diffuse inputs from long range and regional sources and for sites in close proximity to point sources. In sediments >3,500 years and in the parent soil material of the C-horizon, (206)Pb/(207)Pb ratios are higher, 1.3 to >2.0, whereas pollution sources and surface soils and peat have lower ratios that have been in the range 1.14-1.18. (3) Using stable lead isotopes, we have estimated that in southern Sweden the cumulative anthropogenic burden of

  20. CMS lead tungstate crystals

    CERN Multimedia

    Laurent Guiraud

    2000-01-01

    These crystals are made from lead tungstate, a crystal that is as clear as glass yet with nearly four times the density. They have been produced in Russia to be used as scintillators in the electromagnetic calorimeter on the CMS experiment, part of the LHC project at CERN. When an electron, positron or photon passes through the calorimeter it will cause a cascade of particles that will then be absorbed by these scintillating crystals, allowing the particle's energy to be measured.

  1. Lead time reduction by optimal test sequencing

    NARCIS (Netherlands)

    Boumen, R.; Jong, de I.S.M.

    2005-01-01

    In ASML's current design and production phases, testing machines accounts for roughly 30-50% of the total lead time. To shorten this time, test strategies are devised. One of the important elements of such a test strategy is the test sequence, i.e. the order

  2. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km/s for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km/s for carbon stars (the neutronization limit) and to 893 km/s for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  3. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
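
    The first predictive model described above amounts to an ordinary linear regression of maximum cabin temperature on maximum air temperature and solar radiation. A minimal NumPy sketch is below; the observations and resulting coefficients are placeholders, not the authors' data or fitted model:

        import numpy as np

        # Hypothetical rows: [max air temperature (deg C), mean solar radiation (W/m^2)]
        X_raw = np.array([[24.0, 180.0], [28.0, 260.0], [31.0, 300.0],
                          [33.0, 340.0], [35.0, 420.0], [27.0, 150.0]])
        t_cabin = np.array([48.0, 57.0, 62.0, 66.0, 74.0, 50.0])  # placeholder deg C

        X = np.column_stack([np.ones(len(X_raw)), X_raw])  # add intercept column
        (a, b, c), *_ = np.linalg.lstsq(X, t_cabin, rcond=None)
        print(f"T_cabin = {a:.1f} + {b:.2f}*T_air + {c:.3f}*S")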

  4. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses, but such an approach was still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 (protein) times faster (range: 1.2-20.7) than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 (protein) times slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 (protein) times faster (range: 0.3-63.9) than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT, although this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  5. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  6. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  7. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

    A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs

  8. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  9. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  10. Bromine pretreated chitosan for adsorption of lead (II) from water

    Indian Academy of Sciences (India)

    isotherm, and the maximum sorption capacity of the 30% bromine-pretreated chitosan sorbent was 1.755 g/kg with 85-90% lead .... C by applying the Lagergren first ... Lead (II) ion concentrations were determined by using an .... following equation.

  11. Lead Thickness Measurements

    International Nuclear Information System (INIS)

    Rucinski, R.

    1998-01-01

    The preshower lead thickness applied to the outside of D-Zero's superconducting solenoid vacuum shell was measured at the time of application. This engineering note documents those thickness measurements. The lead was ordered in sheets 0.09375-inch and 0.0625-inch thick. The tolerance on thickness was specified to be +/- 0.003-inch, and all sheets were within that tolerance. The nomenclature for each sheet was designated 1T, 1B, 2T, 2B, where the numeral designates its location in the wrap and 'T' or 'B' is short for the 'top' or 'bottom' half of the solenoid. Micrometer measurements were taken at six locations around the perimeter of each sheet. The width, length, and weight of each piece were then measured. Using an assumed pure-lead density of 0.40974 lb/in^3, an average sheet thickness was calculated and compared to the perimeter thickness measurements. In every case, the calculated average thickness was a few mils thinner than the perimeter measurements; the ratio was a constant 0.98. This discrepancy is likely due to the assumed pure-lead density. It is not believed that the perimeter is thicker than the center regions. The data suggest that the physical thickness of the sheets is uniform to +/- 0.0015-inch.

  12. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined, and a maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, for deriving power laws.

  13. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  14. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  15. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  16. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    … represents maximum dry density, … signifies plastic limit and … is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known.

  17. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.

  18. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469. ISSN 0108-7673. Grant - others: DFG(DE). Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method * aperiodic crystals * electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003

  19. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  20. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  1. Design of a wind turbine rotor for maximum aerodynamic efficiency

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Aagaard Madsen, Helge; Gaunaa, Mac

    2009-01-01

    The design of a three-bladed wind turbine rotor is described, where the main focus has been highest possible mechanical power coefficient, CP, at a single operational condition. Structural, as well as off-design, issues are not considered, leading to a purely theoretical design for investigating...... maximum aerodynamic efficiency. The rotor is designed assuming constant induction for most of the blade span, but near the tip region, a constant load is assumed instead. The rotor design is obtained using an actuator disc model, and is subsequently verified using both a free-wake lifting line method...

  2. Safe leads and lead changes in competitive team sports

    Science.gov (United States)

    Clauset, A.; Kogan, M.; Redner, S.

    2015-06-01

    We investigate the time evolution of lead changes within individual games of competitive team sports. Exploiting ideas from the theory of random walks, we show that the number of lead changes within a single game follows a Gaussian distribution. We show that the times of the last lead change and of the largest lead are governed by the same arcsine law, a bimodal distribution that diverges at the start and at the end of the game. We also determine the probability that a given lead is "safe" as a function of its size L and game time t. Our predictions generally agree with comprehensive data on more than 1.25 million scoring events in roughly 40 000 games across four professional or semiprofessional team sports, and are more accurate than popular heuristics currently used in sports analytics.
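
    The arcsine-law claim above can be illustrated numerically. Below is a minimal sketch under the record's idealized random-walk scoring model: simulate unbiased ±1 scoring sequences and histogram the fraction of game time at which the lead last changed. The game length and sample count are illustrative choices, not values from the paper.

```python
import random
from collections import Counter

def last_lead_change(n_steps):
    """Fraction of game time at which the lead last changed in an
    unbiased +/-1 random walk of n_steps scoring events."""
    score, leader, last_change = 0, 0, 0
    for t in range(1, n_steps + 1):
        score += random.choice((-1, 1))
        sign = (score > 0) - (score < 0)
        if sign != 0 and sign != leader:
            if leader != 0:          # first lead taken is not a "change"
                last_change = t
            leader = sign
    return last_change / n_steps

# Histogram over many simulated games: the distribution is bimodal,
# piling up near t = 0 and t = 1 as the arcsine law predicts.
samples = [last_lead_change(1000) for _ in range(20000)]
bins = Counter(min(int(10 * s), 9) for s in samples)
for b in range(10):
    print(f"{b/10:.1f}-{(b+1)/10:.1f}: {'#' * (bins[b] // 200)}")
```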

  3. Lead levels - blood

    Science.gov (United States)

    Blood lead levels ... is used to screen people at risk for lead poisoning. This may include industrial workers and children ... also used to measure how well treatment for lead poisoning is working. Lead is common in the ...

  4. Lead Poisoning Prevention Tips

    Science.gov (United States)

    ... or removed safely. How are children exposed to lead? Lead-based paint and lead contaminated dust are ... What can be done to prevent exposure to lead? It is important to determine the construction year ...

  5. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
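
    The arbitrage-only linear program described above is easy to sketch. The following is a minimal sketch, not the paper's model: it assumes illustrative hourly prices, a single round-trip efficiency split between charge and discharge, and made-up capacity and power limits, and maximizes revenue subject to state-of-charge limits using scipy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Hourly prices ($/MWh) for one illustrative day (assumed data).
prices = np.array([20, 18, 15, 14, 16, 22, 30, 45, 50, 40, 35, 30,
                   28, 27, 30, 38, 55, 70, 65, 50, 40, 32, 26, 22], float)
T = len(prices)
E_max, P_max, eta = 4.0, 1.0, 0.9   # MWh capacity, MW power limit, efficiency

# Decision variables: x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
# Maximize sum_t p_t*(discharge_t - charge_t)  ->  minimize the negative.
c = np.concatenate([prices, -prices])

# State of charge after hour t: s_t = sum_{k<=t} (eta*charge_k - discharge_k/eta)
# must satisfy 0 <= s_t <= E_max for all t (device starts empty).
L = np.tril(np.ones((T, T)))
A_ub = np.block([[ L * eta, -L / eta],    #  s_t <= E_max
                 [-L * eta,  L / eta]])   # -s_t <= 0
b_ub = np.concatenate([np.full(T, E_max), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P_max)] * (2 * T),
              method="highs")
print(f"Maximum arbitrage revenue for the day: ${-res.fun:.2f}")
```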

  6. Involving Lead Users in Innovation

    DEFF Research Database (Denmark)

    Brem, Alexander; Bilgram, Volker; Gutstein, Adele

    2018-01-01

    Research on the lead user method has been conducted for more than thirty years and has shown that the method is more likely to generate breakthrough innovation than traditional market research tools. Based on a systematic literature review, this paper shows a detailed view on the broad variety...... of research on lead user characteristics, lead user processes, lead user identification and application, and success factors. The main challenge of the lead user method as identified in literature is the resource issue regarding time, manpower, and costs. Also, internal acceptance and the processing...... of the method have been spotted in literature, as well as the intellectual property protection issue. From the starting point of the initial lead user method process introduced by Lüthje and Herstatt (2004), results are integrated into a revisited view on the lead user method process. In addition, concrete...

  7. In vivo x-ray fluorescence of bone lead in the study of human lead metabolism: Serum lead, whole blood lead, bone lead, and cumulative exposure

    International Nuclear Information System (INIS)

    Cake, K.M.; Chettle, D.R.; Webber, C.E.; Gordon, C.L.

    1995-01-01

    Traditionally, clinical studies of lead's effect on health have relied on blood lead levels to indicate lead exposure. However, this is unsatisfactory because blood lead levels have a half-life of approximately 5 weeks, and thus reflect recent exposure. Over 90% of the lead body burden is in bone, and it is thought to have a long residence time, thus implying that measurements of bone lead reflect cumulative exposure. So, measurements of bone lead are useful in understanding the long-term health effects of lead. Ahlgren reported the first noninvasive measurements of bone lead in humans, where γ-rays from 57 Co were used to excite the K series x-rays of lead. The lead detection system at McMaster University uses a 109 Cd source which is positioned at the center of the detector face (HPGe) and a near backscatter (∼160 degrees) geometry. This arrangement allows great flexibility, since one can sample lead in a range of different bone sites due to a robust normalization technique which eliminates the need to correct for bone geometry, thickness of overlying tissue, and other related factors. The effective radiation dose to an adult during an x-ray fluorescence bone lead measurement is extremely low, being 35 nSv. This paper addresses the issue of how bone, whole blood, and serum lead concentrations can be related in order to understand a person's lead exposure history

  8. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  9. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  10. On the maximum of wave surface of sea waves

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1980-01-01

    This article treats the wave surface as a stationary normal random process in order to estimate the maximum of the wave surface in a given time interval by means of results from probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that, as the time interval approaches infinity, formulas (3) and (6) for E(η_max) derived in the references (Cartwright, Longuet-Higgins) can also be obtained from the asymptotic distribution of the maximum of the wave surface provided in this article. The advantages of the results obtained from this point of view, as compared with those of the references, are discussed.

  11. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  12. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
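
    The single-constraint derivation referred to above is short enough to sketch. Assuming a discrete observable x and writing the constraint as a fixed average of ln x, maximizing the Shannon entropy with Lagrange multipliers gives a pure power law:

```latex
% Maximum entropy with a single constraint on <ln x> yields a power law.
\begin{align*}
  &\max_{p}\; S=-\sum_{x}p(x)\ln p(x)
  \quad\text{subject to}\quad
  \sum_{x}p(x)=1,\qquad \sum_{x}p(x)\ln x=\mu,\\
  &\frac{\partial}{\partial p(x)}\Big[S-\lambda_{0}\Big(\sum_{x}p(x)-1\Big)
   -\lambda\Big(\sum_{x}p(x)\ln x-\mu\Big)\Big]=0\\
  &\Longrightarrow\; -\ln p(x)-1-\lambda_{0}-\lambda\ln x=0
  \;\Longrightarrow\; p(x)=\frac{x^{-\lambda}}{Z(\lambda)},
  \qquad Z(\lambda)=\sum_{x}x^{-\lambda}.
\end{align*}
```

    The exponent λ is then fixed by the constraint value, i.e., by the specified average of the logarithm of the observable.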

  13. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  14. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- a surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  15. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
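
    Fitch's method discussed above admits a compact implementation. Below is a minimal sketch of the bottom-up pass on a toy rooted binary tree; the tree shape and leaf states are illustrative, and a non-singleton set at the root indicates several equally parsimonious ancestral states.

```python
def fitch(node, states):
    """Bottom-up pass of Fitch's maximum parsimony algorithm.

    node: leaf name (str) or (left, right) tuple for an internal node.
    states: dict mapping leaf name -> observed character state.
    Returns (candidate state set, mutation count) for the subtree.
    """
    if isinstance(node, str):                       # leaf
        return {states[node]}, 0
    (ls, lc), (rs, rc) = fitch(node[0], states), fitch(node[1], states)
    inter = ls & rs
    if inter:                                       # intersection: no mutation
        return inter, lc + rc
    return ls | rs, lc + rc + 1                     # union: one mutation

# Toy example: tree ((A,B),(C,D)) with binary character states.
tree = (("A", "B"), ("C", "D"))
leaf_states = {"A": "0", "B": "1", "C": "1", "D": "1"}
root_set, score = fitch(tree, leaf_states)
print(root_set, score)   # {'1'} 1: one mutation suffices, root state '1'
```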

  16. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  17. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  18. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr

  19. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Full Text Available Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as "rail potential" is generated between the rails and ground. Currently, abnormal rises of rail potential exist in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee safety while saving energy in DC traction power systems.
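
    As an illustration of the optimization loop described above (without the paper's rail-potential simulation), here is a minimal generic PSO sketch. The objective is a toy quadratic stand-in for the rail-potential/energy cost, and all bounds, targets, and PSO coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Toy stand-in for a rail-potential/energy cost model: penalize
    deviation of three dwell times and a trigger voltage from a target."""
    target = np.array([30.0, 25.0, 35.0, 1.8])   # assumed values
    return float(np.sum((x - target) ** 2))

# Standard PSO with inertia, cognitive, and social terms.
n_particles, n_dims, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5
lo = np.array([20.0, 20.0, 20.0, 1.5])          # assumed lower bounds
hi = np.array([60.0, 60.0, 60.0, 2.0])          # assumed upper bounds

x = rng.uniform(lo, hi, (n_particles, n_dims))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best dwell times / trigger voltage:", np.round(gbest, 3))
```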

  20. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

    Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum ``distance'' between two vertices in a chain; this problem arises in real world applications such as in process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
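
    For the chain case described above, the natural reading of the preprocessing/query bounds can be sketched with prefix sums: after O(n) preprocessing over the chain's edge labels, the number of strict < edges between two vertices is answered in O(1). This is a minimal sketch of that idea, not the paper's metagraph data structure.

```python
from itertools import accumulate

def preprocess(labels):
    """labels[i] is '<' or '<=' for the edge between chain vertices i and i+1.
    Builds an O(n) prefix-sum table over the strict '<' edges."""
    return [0] + list(accumulate(1 if lab == '<' else 0 for lab in labels))

def max_distance(prefix, i, j):
    """O(1) query: number of '<' edges on the chain between vertices i < j."""
    return prefix[j] - prefix[i]

labels = ['<', '<=', '<', '<', '<=']       # chain with 6 vertices, assumed
prefix = preprocess(labels)
print(max_distance(prefix, 1, 4))          # edges 1..3 contain two '<' -> 2
```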

  1. Fractal cluster modeling of the fatigue behavior of lead zirconate titanate

    OpenAIRE

    Priya, Shashank; Kim, Hyeoung Woo; Ryu, Jungho; Uchino, Kenji; Viehland, Dwight D.

    2002-01-01

    The fatigue behavior of lead zirconate titanate ceramics (PZT) has been studied under electrical and mechanical drives. Piezoelectric fatigue was studied using a mechanical method. Under ac mechanical drive, hard and soft PZTs showed an increase in the longitudinal piezoelectric constant at short times, reaching a maximum at intermediate times. Systematic investigations were performed to characterize the electrical fatigue behavior. A decrease in the magnitude of the remanent polarization was...

  2. Paralisia unilateral de prega vocal: associação e correlação entre tempos máximos de fonação, posição e ângulo de afastamento Unilateral vocal fold paralysis: association and correlation between maximum phonation time, position and displacement angle

    Directory of Open Access Journals (Sweden)

    Luciane M. Steffen

    2004-08-01

    Full Text Available Vocal fold paralysis (VFP) results from injury to the vagus nerve or its branches and may impair functions that require glottic closure. The maximum phonation time (MPT) is a test routinely applied to dysphonic patients to assess glottic efficiency; it is frequently used in cases of VFP, in which its values are decreased. The classical clinical classification of the paralyzed vocal fold position as median, paramedian, intermediate, and abducted (cadaveric) has been a subject of controversy. OBJECTIVE: To verify the association and correlation between MPT and the position of the paralyzed vocal fold (PVF), and between MPT and the displacement angle of the PVF; to measure the displacement angle from the midline for the different PVF positions; and to correlate it with the clinical classification. STUDY DESIGN: Retrospective clinical study. MATERIAL AND METHOD: The records and videoendoscopic examinations of 86 individuals with unilateral vocal fold paralysis were reviewed, and the displacement angle of the PVF was measured with a computer program. RESULTS: The association and correlation between MPT and each position assumed by the PVF were statistically significant only for /z/ in the median position. The association and correlation between MPT and the displacement angle of the PVF held for /i/ and /u/. When angle measurements were associated and correlated with position, statistical significance was observed for the abducted position. CONCLUSIONS: In this study it was not possible to determine the positions assumed by the PVF from the MPT, nor to correlate them with angle measurements.

  3. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly are available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  4. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)
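
    A minimal sketch of the Monte Carlo sampling ingredient for a synchronous pairwise MaxEnt (Ising-type) spike model is given below; the fields and couplings are random placeholders rather than fitted values, and the spatio-temporal models discussed in the record add memory terms on top of this.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                                   # number of neurons (illustrative)
h = rng.normal(-1.0, 0.3, N)             # placeholder fields
J = rng.normal(0.0, 0.1, (N, N))
J = np.triu(J, 1); J = J + J.T           # symmetric couplings, zero diagonal

def energy(s):
    """E(s) = -h.s - (1/2) s.J.s for a binary spike word s in {0,1}^N,
    so that P(s) ~ exp(-E(s)) is the pairwise MaxEnt distribution."""
    return -h @ s - 0.5 * s @ J @ s

s = rng.integers(0, 2, N).astype(float)
samples = []
for step in range(50000):                # single-spin-flip Metropolis
    i = rng.integers(N)
    s_new = s.copy(); s_new[i] = 1 - s_new[i]
    if rng.random() < np.exp(energy(s) - energy(s_new)):
        s = s_new
    if step > 10000 and step % 10 == 0:  # burn-in, then thin
        samples.append(s.copy())

samples = np.array(samples)
print("mean firing rates:", np.round(samples.mean(axis=0), 3))
```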

  5. Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon

    DEFF Research Database (Denmark)

    Fischer, Paul

    1997-01-01

    This paper investigates the problem where one is given a finite set of n points in the plane each of which is labeled either ?positive? or ?negative?. We consider bounded convex polygons, the vertices of which are positive points and which do not contain any negative point. It is shown how...... such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time...... becomes O(M n³ log n). It is also shown how to find a maximum convex polygon which contains a given point in time O(n³ log n). Two parallel algorithms for the basic problem are also presented. The first one runs in time O(n log n) using O(n²) processors, the second one has polylogarithmic time but needs O...

  6. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
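
    The maximum-flow computation proposed above can be sketched with the classic Edmonds-Karp algorithm, treating connection strengths as capacities; the toy graph below is illustrative, not connectome data.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: BFS augmenting paths on a capacity dict-of-dicts."""
    nodes = {u for u in cap} | {v for u in cap for v in cap[u]}
    residual = {u: {} for u in nodes}
    for u in cap:
        for v, c in cap[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)          # reverse edge
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:              # BFS for a shortest path
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u; q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                           # recover the path
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(residual[u][v] for u, v in path)
        for u, v in path:                         # push the bottleneck flow
            residual[u][v] -= push; residual[v][u] += push
        flow += push

# Toy "connectome": capacities stand in for connection strengths.
cap = {"A": {"B": 3, "C": 2}, "B": {"D": 2}, "C": {"D": 3}, "D": {}}
print(max_flow(cap, "A", "D"))   # 4: flow splits across A-B-D and A-C-D
```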

  7. Lead (Pb) Air Pollution

    Science.gov (United States)

    ... Regional Offices Labs and Research Centers Lead (Pb) Air Pollution Contact Us Share As a result of EPA's ... and protect aquatic and terrestrial ecosystems. Lead (Pb) Air Pollution Basic Information How does lead get in the ...

  8. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast

  9. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  10. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  11. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outliers with respect to the bulk of the drawdown price movement distribution. This paper goes deeper in the analysis, providing a further characterization of the rising part of such selected bubbles through the examination of the drawdown and maximum drawdown of index prices. The analysis of drawdown duration is also performed, and it is the core of the risk measure estimated here.
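
    The drawdown quantities used in this analysis are straightforward to compute. Below is a minimal sketch for a synthetic price series: drawdown relative to the running peak, maximum drawdown, and the longest drawdown duration.

```python
import numpy as np

def drawdown_stats(prices):
    """Return (maximum drawdown as a fraction, longest drawdown duration)."""
    prices = np.asarray(prices, float)
    peaks = np.maximum.accumulate(prices)
    drawdowns = (peaks - prices) / peaks           # fraction below running peak
    max_dd = drawdowns.max()
    # Longest run of consecutive time steps spent below a previous peak.
    below, longest, run = drawdowns > 0, 0, 0
    for b in below:
        run = run + 1 if b else 0
        longest = max(longest, run)
    return max_dd, longest

rng = np.random.default_rng(7)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.02, 1000)))  # synthetic
mdd, duration = drawdown_stats(prices)
print(f"maximum drawdown: {mdd:.1%}, longest drawdown: {duration} steps")
```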

  12. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  13. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  14. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  15. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is a part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  16. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  17. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  18. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  19. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  20. Removal of Lead Hydroxides Complexes from Solutions Formed in Silver/Gold: Cyanidation Process

    Science.gov (United States)

    Parga, José R.; Martinez, Raul Flores; Moreno, Hector; Gomes, Andrew Jewel; Cocke, David L.

    2014-04-01

    The presence of lead hydroxides in "pregnant cyanide solution" decreases the quality of the Dore obtained in the recovery of gold and silver, so it is convenient to remove them. The adsorbent capacity of low-cost cow bone powder was investigated for the removal of lead ions from a solution of lead hydroxide complexes at different initial metal ion concentrations (10 to 50 mg/L) and reaction times. Experiments were carried out in batches. The maximum sorption capacity for lead determined by the Langmuir model was found to be 126.58 mg/g, and the separation factor R_L was between 0 and 1, indicating a significant affinity of bone for lead. The experimental data follow pseudo-second-order kinetics, suggesting chemisorption. It is concluded that cow bone powder can be successfully used for the removal of lead ions and improves the quality of the silver-gold cyanide precipitate.
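
    The Langmuir quantities reported above can be sketched numerically. The q_max below is the record's reported value; the Langmuir constant K_L is a placeholder, since the record does not give it.

```python
q_max = 126.58        # mg/g, maximum sorption capacity reported in the record
K_L = 0.15            # L/mg, placeholder Langmuir constant (not given above)

def langmuir_q(C_e):
    """Equilibrium uptake q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * K_L * C_e / (1 + K_L * C_e)

def separation_factor(C_0):
    """R_L = 1 / (1 + K_L * C_0); 0 < R_L < 1 indicates favorable sorption."""
    return 1.0 / (1.0 + K_L * C_0)

for C0 in (10, 20, 30, 40, 50):                  # initial conc. range, mg/L
    print(f"C0 = {C0:2d} mg/L  q_e = {langmuir_q(C0):6.1f} mg/g  "
          f"R_L = {separation_factor(C0):.3f}")
```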

  1. Lead concentration in meat from lead-killed moose and predicted human exposure using Monte Carlo simulation.

    Science.gov (United States)

    Lindboe, M; Henrichsen, E N; Høgåsen, H R; Bernhoft, A

    2012-01-01

    Lead-based hunting ammunition is still common in most countries. On impact such ammunition releases fragments which are widely distributed within the carcass. In Norway, wild game is an important meat source for segments of the population and 95% of hunters use lead-based bullets. In this paper, we have investigated the lead content of ground meat from moose (Alces alces) intended for human consumption in Norway, and have predicted human exposure through this source. Fifty-two samples from different batches of ground meat from moose killed with lead-based bullets were randomly collected. The lead content was measured by atomic absorption spectroscopy. The lead intake from exposure to moose meat over time, depending on the frequency of intake and portion size, was predicted using Monte Carlo simulation. In 81% of the batches, lead levels were above the limit of quantification of 0.03 mg kg(-1), ranging up to 110 mg kg(-1). The mean lead concentration was 5.6 mg kg(-1), i.e., 56 times the European Commission limit for lead in meat. For consumers eating a moderate meat serving (2 g kg(-1) bw), a single serving would give a lead intake of 11 µg kg(-1) bw on average, with a maximum of 220 µg kg(-1) bw. Using Monte Carlo simulation, the median (and 97.5th percentile) predicted weekly intake of lead from moose meat was 12 µg kg(-1) bw (27 µg kg(-1) bw) for one serving per week and 25 µg kg(-1) bw (45 µg kg(-1) bw) for two servings per week. The results indicate that the intake of meat from big game shot with lead-based bullets makes a significant contribution to total human lead exposure. The provisional tolerable weekly intake set by the World Health Organization (WHO) of 25 µg kg(-1) bw is likely to be exceeded in people eating moose meat on a regular basis. The European Food Safety Authority (EFSA) has recently concluded that adverse effects may be present at even lower exposure doses. Hence, even occasional consumption of big game meat with lead levels as
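
    A Monte Carlo prediction of the kind described above can be sketched as follows. The lognormal concentration parameters are assumptions chosen only so the mean is near the reported 5.6 mg/kg; they are not the paper's fitted distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
servings_per_week = 2
portion_g_per_kg_bw = 2.0        # moderate serving, g per kg body weight

# Assumed lognormal for lead in meat (mg/kg); parameters are illustrative,
# picked so the mean exp(mu + sigma^2/2) is ~6 mg/kg, not fitted to the data.
mu, sigma = np.log(1.0), 1.9

conc = rng.lognormal(mu, sigma, (n_sims, servings_per_week))   # mg/kg meat
# intake per serving (ug/kg bw) = conc (ug/g meat) * portion (g/kg bw)
weekly = (conc * portion_g_per_kg_bw).sum(axis=1)              # ug/kg bw/week

print(f"median weekly intake : {np.median(weekly):6.1f} ug/kg bw")
print(f"97.5th percentile    : {np.percentile(weekly, 97.5):6.1f} ug/kg bw")
print(f"P(> 25 ug/kg bw PTWI): {np.mean(weekly > 25):.2f}")
```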

  2. Evolution with time of 12 metals (V, Cr, Mn, Co, Cu, Zn, Ag, Cd, Ba, Pb, Bi and U) and of lead isotopes in the snows of Coats Land (Antarctica) since the 1830's

    International Nuclear Information System (INIS)

    Planchon, F.

    2001-01-01

    This work shows that it is now possible to obtain reliable data on the occurrence of numerous heavy metals at ultra-low levels in Antarctic snow, by combining ultra-clean field sampling and laboratory sub-sampling procedures with the use of ultra-sensitive analytical techniques such as ICP-SFMS and TIMS. This has allowed us to determine the concentrations of twelve metals (V, Cr, Mn, Co, Cu, Zn, Ag, Cd, Ba, Pb, Bi and U) and the lead isotopic composition in an ultra-clean series of snow samples collected at Coats Land, in the Atlantic sector of Antarctica. This work presents a 150-year record of metal inputs from natural and anthropogenic sources to Antarctica, from the 1830's to the early 1990's. Lead atmospheric pollution begins as early as the end of the 19th century, peaks during the 1970's-1980's and then falls sharply during recent decades. The evolution in lead isotopic abundance shows that Pb inputs to Antarctica reflect a complex blend of contributions originating from the southern part of South America and Australia. For Cr, Cu, Zn, Ag, Bi and U, concentrations in the snow show significant increases from 1950 to 1980. These enhancements, which cannot be explained by variations in natural inputs, illustrate that atmospheric pollution by heavy metals linked with anthropogenic activities in Southern Hemisphere countries, such as ferrous and non-ferrous metal mining and smelting, is truly global. Study of the period 1920-1990 has allowed us to detail short-term (intra- and inter-annual) changes in heavy metal concentrations. The large short-term variability observed in Coats Land snow shows the complex patterns of metal inputs to Antarctica, associated for instance with changes in long-range transport processes from mid-latitudes to the polar zone and with variability in the different natural sources, such as local volcanic activity, sea-salt spray or crustal dust inputs. (author)

  3. Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf

    2012-01-01

    (n log n) in the algebraic computation tree model and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ...... We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L1 plane.

  4. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between gravitation and the electromagnetic field is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting [fr

  5. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  6. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  7. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  8. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF-T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit

  9. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  10. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calcuated from the lake polygon...

  11. Cryogenic current leads

    Energy Technology Data Exchange (ETDEWEB)

    Zizek, F.

    1982-01-01

    Theoretical, technical and design questions of cryogenic current leads for superconducting (SP) magnetic systems are examined. Simplified mathematical models of the current leads are presented. To illustrate the modeling, calculations are made for real current leads for 500 A and for three variants of current leads for 1500 A for the enterprise ''Shkoda''.

  12. Lead - nutritional considerations

    Science.gov (United States)

    ... billion people had toxic (poisonous) blood lead levels. Food Sources Lead can be found in canned goods if there is lead solder in the ... to bottled water for drinking and cooking. Avoid canned goods from foreign ... cans goes into effect. If imported wine containers have a lead foil ...

  13. Structural contribution to the ferroelectric fatigue in lead zirconate titanate ceramics

    Science.gov (United States)

    Hinterstein, M.; Rouquette, J.; Haines, J.; Papet, Ph.; Glaum, J.; Knapp, M.; Eckert, J.; Hoffman, M.

    2014-09-01

    Many ferroelectric devices are based on doped lead zirconate titanate (PZT) ceramics with compositions near the morphotropic phase boundary (MPB), at which the relevant material properties approach their maximum. Based on a synchrotron x-ray diffraction study of MPB PZT, bulk fatigue is unambiguously found to arise from a less effective field-induced tetragonal-to-monoclinic transformation, at which the degradation of the polarization flipping is detected by a less intense and more diffuse anomaly in the atomic displacement parameter of lead. The time dependence of the ferroelectric response on a structural level down to 250 μs confirms this interpretation on the time scale of the piezoelectric strain response.

  14. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available The α-jerk model is an effective maneuvering-target tracking model and addresses one of the most critical issues in target tracking. Non-Gaussian noise always exists in the tracking process and usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. First, the weighted least squares solution is presented and the standard Kalman filter is deduced. Then, a novel Kalman filter with weighted least squares based on the maximum correntropy criterion is deduced. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
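
    The correntropy-weighting idea can be sketched in a scalar filter: weight the measurement update by a Gaussian kernel of the innovation, so impulsive outliers are down-weighted. This is a simplified illustration with a fixed kernel size, not the paper's full filter with its adaptive kernel method; all numeric parameters are assumptions.

```python
import numpy as np

def mcc_kalman_step(x, P, z, F, Q, H, R, kernel_sigma):
    """One predict/update cycle of a scalar Kalman filter in which the
    measurement noise is inflated by the inverse Gaussian-kernel weight
    of the innovation (a simplified maximum correntropy criterion update)."""
    x_pred = F * x                                  # predict
    P_pred = F * P * F + Q
    innov = z - H * x_pred                          # innovation
    w = np.exp(-innov**2 / (2 * kernel_sigma**2))   # correntropy weight
    R_eff = R / max(w, 1e-8)                        # outliers -> huge R_eff
    K = P_pred * H / (H * P_pred * H + R_eff)       # update
    return x_pred + K * innov, (1 - K * H) * P_pred

rng = np.random.default_rng(3)
truth = 10.0
# Initialized near the truth; for large initial errors a fixed kernel would
# also down-weight good data, which is why an adaptive kernel size helps.
x, P = 9.0, 1.0
for t in range(100):
    z = truth + rng.normal(0, 0.5)
    if t % 10 == 5:
        z += 30.0                                   # impulsive outlier
    x, P = mcc_kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25,
                           kernel_sigma=2.0)
print(f"estimate {x:.2f} vs truth {truth:.2f}")     # outliers barely move it
```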

  15. Lead diffusion in monazite

    International Nuclear Information System (INIS)

    Gardes, E.

    2006-06-01

    Proper knowledge of the diffusion rates of lead in monazite is necessary to understand the U-Th-Pb age anomalies of this mineral, which is one of the most used in geochronology after zircon. Diffusion experiments were performed in NdPO4 monocrystals and in Nd0.66Ca0.17Th0.17PO4 polycrystals from Nd0.66Pb0.17Th0.17PO4 thin films to investigate the Pb2+ + Th4+ ↔ 2 Nd3+ and Pb2+ ↔ Ca2+ exchanges. Diffusion annealings were run between 1200 and 1500 Celsius degrees, at room pressure, for durations ranging from one hour to one month. The diffusion profiles were analysed using TEM (transmission electron microscopy) and RBS (Rutherford backscattering spectroscopy). The diffusivities extracted for the Pb2+ + Th4+ ↔ 2 Nd3+ exchange follow an Arrhenius law with parameters E = 509 ± 24 kJ mol⁻¹ and log(D₀ (m² s⁻¹)) = −3.41 ± 0.77. Preliminary data for the Pb2+ ↔ Ca2+ exchange are in agreement with this result. The extrapolation of our data to crustal temperatures yields very slow diffusivities. For instance, the time necessary for a 50 μm grain to lose all of its lead at 800 Celsius degrees is greater than the age of the Earth. From these results and other evidence from the literature, we conclude that most of the perturbations in U-Th-Pb ages of monazite cannot be attributed to lead diffusion, but rather to interactions with fluids. (author)
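
    Using the reported Arrhenius parameters, the diffusivity at a given temperature and an order-of-magnitude lead-retention time can be checked in a few lines. The characteristic-time estimate t ≈ a²/D for a grain of radius a is a rough scaling assumption, not the authors' exact retention model.

    import numpy as np

    R_GAS = 8.314        # gas constant, J mol^-1 K^-1
    E_ACT = 509e3        # reported activation energy, J mol^-1
    LOG10_D0 = -3.41     # reported log10 pre-exponential factor, m^2 s^-1

    def diffusivity(T):
        """Arrhenius law D = D0 * exp(-E / (R T)), T in kelvin."""
        return 10.0**LOG10_D0 * np.exp(-E_ACT / (R_GAS * T))

    D = diffusivity(800.0 + 273.15)     # ~7e-29 m^2/s at 800 Celsius degrees
    a = 25e-6                           # radius of a 50 um grain, m
    t_years = a**2 / D / 3.156e7        # crude diffusive time scale a^2 / D
    print(f"D = {D:.1e} m^2/s, t ~ {t_years:.1e} yr")
    # ~3e11 yr, far exceeding the age of the Earth (~4.5e9 yr),
    # consistent with the abstract's conclusion.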

  16. Fuzzy Controller Design Using FPGA for Photovoltaic Maximum Power Point Tracking

    OpenAIRE

    Basil M Hamed; Mohammed S. El-Moghany

    2012-01-01

    A photovoltaic cell has an optimum operating point at which it delivers maximum power. To obtain maximum power from a photovoltaic array, a photovoltaic power system usually requires a Maximum Power Point Tracking (MPPT) controller. This paper presents a small-power photovoltaic control system based on fuzzy control, designed and implemented with FPGA technology for MPPT. The system is composed of a photovoltaic module, a buck converter and a fuzzy logic controller implemented on the FPGA for controlling the on/off time of the MOSF...

  17. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  18. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.

  19. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  20. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    Science.gov (United States)

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v₀ will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v[superscript…
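
    For reference, the ground-level result quoted above follows from maximizing the range over the launch angle; this is the standard textbook derivation, not the paper's extension to launch from constant-speed circular motion:

    R(\theta) = \frac{v_0^2 \sin 2\theta}{g},
    \qquad
    \frac{dR}{d\theta} = \frac{2 v_0^2 \cos 2\theta}{g} = 0
    \;\Longrightarrow\;
    \theta = \frac{\pi}{4},
    \qquad
    R_{\max} = \frac{v_0^2}{g}.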

  1. Archives of Atmospheric Lead Pollution

    Science.gov (United States)

    Weiss, Dominik; Shotyk, William; Kempf, Oliver

    Environmental archives such as peat bogs, sediments, corals, trees, polar ice, plant material from herbarium collections, and human tissue material have greatly helped to assess both ancient and recent atmospheric lead deposition and its sources on a regional and global scale. In Europe, detectable atmospheric lead pollution began as early as 6000 years ago due to enhanced soil dust and agricultural activities, as studies of peat bogs reveal. Increased lead emissions during ancient Greek and Roman times have been recorded and identified in many long-term archives such as lake sediments in Sweden, ice cores in Greenland, and peat bogs in Spain, Switzerland, the United Kingdom, and the Netherlands. For the period since the Industrial Revolution, other archives such as corals, trees, and herbarium collections provide similar chronologies of atmospheric lead pollution, with periods of enhanced lead deposition occurring at the turn of the century and since 1950. The main sources have been industry, including coal burning, ferrous and nonferrous smelting, and open waste incineration until c. 1950 and leaded gasoline use since 1950. The greatest lead emissions to the atmosphere all over Europe occurred between 1950 and 1980 due to traffic exhaust. A marked drop in atmospheric lead fluxes found in most archives since the 1980s has been attributed to the phasing out of leaded gasoline. The isotope ratios of lead in the various archives show qualitatively similar temporal changes, for example, the immediate response to the introduction and phasing out of leaded gasoline. Isotope studies largely confirm source assessments based on lead emission inventories and allow the contributions of various anthropogenic sources to be calculated.

  2. Força muscular respiratória, postura corporal, intensidade vocal e tempos máximos de fonação na Doença de Parkinson Respiratory muscle strength, body posture, vocal intensity and maximum phonation times in Parkinson Disease

    Directory of Open Access Journals (Sweden)

    Fernanda Vargas Ferreira

    2012-04-01

    PURPOSE: To assess respiratory muscle strength (RMS), body posture (BP), vocal intensity (VI) and maximum phonation times (MPT) in individuals with Parkinson Disease (PD) and in control cases, according to gender, PD stage and level of physical activity (PA). METHODS: three men and two women with PD, between 36 and 63 years old (study cases - SC), and five subjects without neurological diseases, matched for age, gender and PA level (control cases - CC). RMS, BP, VI and MPT were evaluated. RESULTS: men: a more pronounced decrease of MPT, VI and RMS in the Parkinson patients, plus more postural alterations in the elderly; women with and without PD: similar postural alterations and a positive relation between stage, PA level and the other measures. CONCLUSIONS: women with PD showed impaired VI, and men with PD showed deficits in MPT, VI and RMS. Further studies under an interdisciplinary bias are suggested.

  3. Study of forecasting maximum demand of electric power

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, B.C.; Hwang, Y.J. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    As far as the past performance of power supply and demand in Korea is concerned, one of the striking phenomena is that there have been repeated periodic surpluses and shortages of power generation facilities. Precise estimation and prediction of power demand is the basic work in establishing a supply plan and carrying out the right policy, since facilities investment in the power generation industry requires a tremendous amount of capital and a long construction period. The purpose of this study is to develop a model for the inference and prediction of a more precise maximum demand against this background. The non-parametric model considered in this study pays attention to meteorological factors such as temperature and humidity, which do not have a simple proportional relationship with the maximum power demand but affect it through complicated nonlinear interactions. The non-parametric inference technique introduces these meteorological effects without imposing any literal assumption on the interaction of temperature and humidity beforehand. According to the analysis results, the non-parametric model that introduces the number of tropical nights, which captures the persistence of the meteorological effect, has better predictive power than the linear model. The non-parametric model that considers both the number of tropical nights and the number of cooling days at the same time is proposed as the model for predicting maximum demand. 7 refs., 6 figs., 9 tabs.
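
    The kind of non-parametric model described can be illustrated with a Nadaraya-Watson kernel regression of maximum demand on meteorological covariates. This is a generic sketch of the model class, not the study's actual specification; the covariates, the synthetic data and all names are hypothetical.

    import numpy as np

    def nadaraya_watson(X, y, x0, bandwidth=1.0):
        """Gaussian-kernel-weighted average of y: a basic non-parametric
        regressor that captures nonlinear interactions among covariates
        without assuming any parametric form."""
        d2 = np.sum((X - x0)**2, axis=1)          # squared distances to query
        w = np.exp(-d2 / (2.0 * bandwidth**2))    # Gaussian kernel weights
        return np.dot(w, y) / np.sum(w)

    # Hypothetical standardized covariates per day:
    # [temperature, humidity, number of tropical nights]
    rng = np.random.default_rng(1)
    X = rng.standard_normal((365, 3))
    # Synthetic demand with a nonlinear temperature-humidity interaction.
    y = 50 + 5 * X[:, 0] + 2 * X[:, 0] * X[:, 1] + rng.standard_normal(365)
    print(nadaraya_watson(X, y, np.array([1.5, 1.0, 0.8])))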

  4. Cases in which ancestral maximum likelihood will be confusingly misleading.

    Science.gov (United States)

    Handelman, Tomer; Chor, Benny

    2017-05-07

    Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel et al. (2009) have shown that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple, four-taxa (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.

  5. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

    The estimation of gas production rates from hydrate-bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady-state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate-free sediment, the hydrate-bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  6. LIBOR troubles: Anomalous movements detection based on maximum entropy

    Science.gov (United States)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. Such procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  7. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. The tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in the experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent for radiation and effectively terminating the acceleration. The off-normal incidence of the laser on the target, due either to the experimental setup, or to the deformation of the target, will also lead to establishing a limit on maximum ion energy.

  8. Maximum and minimum entropy states yielding local continuity bounds

    Science.gov (United States)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ_ɛ*(σ) [respectively, ρ_{*,ɛ}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ɛ-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ_ɛ*(σ) and ρ_{*,ɛ}(σ) depend only on the geometry of the ɛ-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ɛ-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.

  9. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative maximum power point tracking (MPPT) control for the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
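
    The maximum-current-searching idea can be illustrated with a bare hill-climbing loop on the converter duty cycle. This is a sketch under stated assumptions, not the authors' implementation: read_current and set_duty are hypothetical hardware accessors, and the paper's actual controller is a proportional-integral loop driving a PWM rather than this fixed-step perturb-and-observe scheme.

    def track_maximum_current(read_current, set_duty,
                              duty=0.5, step=0.01, n_steps=1000):
        """Hill-climb the DC-DC converter duty cycle so that the SPE-side
        output current -- and hence the PV operating power -- is maximized."""
        direction = 1
        last = read_current()
        for _ in range(n_steps):
            duty = min(max(duty + direction * step, 0.0), 1.0)
            set_duty(duty)
            now = read_current()
            if now < last:            # passed the peak: reverse direction
                direction = -direction
            last = now
        return duty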

  10. Multiple Maximum Exposure Rates in Computerized Adaptive Testing

    Science.gov (United States)

    Ramon Barrada, Juan; Veldkamp, Bernard P.; Olea, Julio

    2009-01-01

    Computerized adaptive testing is subject to security problems, as the item bank content remains operative over long periods and administration time is flexible for examinees. Spreading the content of a part of the item bank could lead to an overestimation of the examinees' trait level. The most common way of reducing this risk is to impose a…

  11. Basic Information about Lead in Drinking Water

    Science.gov (United States)

    ... this page is not intended to catalog all possible health effects for lead. Rather, it is intended to let ... in drinking water at which no adverse health effects are likely to occur with ... on possible health risks, are called maximum contaminant level goals ( ...

  12. Lead inclusions in aluminium

    International Nuclear Information System (INIS)

    Johnson, E.; Johansen, A.; Sarholt-Kristensen, L.; Andersen, H.H.; Grabaek, L.; Bohr, J.

    1990-01-01

    Ion implantation of lead into aluminium at room temperature leads to spontaneous phase separation and the formation of lead precipitates growing topotactically with the matrix. Unlike the highly pressurized (∼ 1-5 GPa) solid inclusions formed after noble gas implantations, the pressure in the lead precipitates is found to be less than 0.12 GPa. Recently the authors observed that lead inclusions in aluminium exhibit both superheating and supercooling. In this paper they review and elaborate on these results. Small implantation-induced lead precipitates embedded in an aluminium matrix were studied by x-ray diffraction

  13. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on the Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Bayesian credible and highest posterior density (HPD) credible intervals are obtained for the parameters as well. Expected time on test and reliability characteristics are also analyzed in this article. To compare the various estimates developed, a Monte Carlo simulation study is carried out. Finally, for illustration purposes, a randomly censored real data set is discussed.
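
    To make the maximum likelihood step concrete: under the convention P(X = x) = p(1 − p)^(x−1) for x = 1, 2, ..., an observation censored at x contributes P(X > x) = (1 − p)^x, so the log-likelihood reduces to d log p + (Σ xᵢ − d) log(1 − p), where d = Σ δᵢ counts the uncensored failures, giving the closed-form MLE p̂ = d / Σ xᵢ. The following simulation sketch illustrates this estimator; it is not the article's code, and the censoring distribution used here is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(0)
    p_true, n = 0.3, 100_000

    life = rng.geometric(p_true, size=n)     # geometric lifetimes on {1, 2, ...}
    censor = rng.geometric(0.05, size=n)     # random censoring times (arbitrary law)
    x = np.minimum(life, censor)             # observed value
    delta = (life <= censor).astype(int)     # 1 = failure observed, 0 = censored

    p_hat = delta.sum() / x.sum()            # closed-form MLE p = d / sum(x)
    print(f"p_hat = {p_hat:.4f} (true value {p_true})")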

  14. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)

  15. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  16. Texture and anisotropy in ferroelectric lead metaniobate

    Science.gov (United States)

    Iverson, Benjamin John

    Ferroelectric lead metaniobate, PbNb2O6, is a piezoelectric ceramic typically used because of its elevated Curie temperature and anisotropic properties. However, the piezoelectric constant, d33, is relatively low in randomly oriented ceramics when compared to other ferroelectrics. Crystallographic texturing is often employed to increase the piezoelectric constant because the spontaneous polarization axes of grains are better aligned. In this research, crystallographic textures induced through tape casting are distinguished from textures induced through electrical poling. Texture is described using multiple quantitative approaches utilizing X-ray and neutron time-of-flight diffraction. Tape casting lead metaniobate with an inclusion of acicular template particles induces an orthotropic texture distribution. Templated grain growth from seed particles oriented during casting results in anisotropic grain structures. The degree of preferred orientation is directly linked to the shear behavior of the tape cast slurry. Increases in template concentration, slurry viscosity, and casting velocity lead to larger textures by inducing more particle orientation in the tape casting plane. The maximum 010 texture distributions were two and a half multiples of a random distribution. Ferroelectric texture was induced by electrical poling. Electric poling increases the volume of material oriented with the spontaneous polarization direction in the material. Samples with an initial paraelectric texture exhibit a greater change in the domain volume fraction during electrical poling than randomly oriented ceramics. In tape cast samples, the resulting piezoelectric response is proportional to the 010 texture present prior to poling. This results in property anisotropy dependent on initial texture. Piezoelectric properties measured on the most textured ceramics were similar to those obtained with a commercial standard.

  17. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  18. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  19. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large-margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large-margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are two-class classifiers, which cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results of the proposed algorithm show that it has acceptable results for hyperspectral data clustering.
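
    The alternating optimization idea can be sketched for two clusters as follows: with the labels fixed, fit a large-margin hyperplane; with the hyperplane fixed, relabel points by the sign of the decision function under a minimum class-balance constraint that rules out the trivial single-cluster solution. This toy sketch uses scikit-learn's LinearSVC as the inner solver and is only an illustration of the scheme, not the paper's algorithm or its multi-class extension.

    import numpy as np
    from sklearn.svm import LinearSVC

    def maximum_margin_clustering(X, n_iter=20, min_frac=0.2, seed=0):
        """Toy two-cluster MMC via alternating optimization."""
        rng = np.random.default_rng(seed)
        y = rng.choice([-1, 1], size=len(X))          # random initial labels
        for _ in range(n_iter):
            svm = LinearSVC(C=1.0).fit(X, y)          # step 1: fix labels, fit margin
            scores = svm.decision_function(X)
            y_new = np.where(scores >= 0, 1, -1)      # step 2: fix margin, relabel
            k_min = int(min_frac * len(X))
            for label in (-1, 1):                     # enforce class balance
                if (y_new == label).sum() < k_min:
                    # flip the least-confident points toward the small cluster
                    for j in np.argsort(np.abs(scores)):
                        if y_new[j] != label:
                            y_new[j] = label
                        if (y_new == label).sum() >= k_min:
                            break
            if np.array_equal(y_new, y):              # labels stable: converged
                break
            y = y_new
        return y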

  20. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.