Lead Time Study
Chu, Julie
1982-05-01
AD-A128 318. Army Armament Research and Development Command, Dover, NJ, Systems Analysis Division. Report ARRAA 82-3, May 1982. Prepared by Julie Chu.
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of each subject performing five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
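The trial- and day-aggregation figures quoted above are consistent with the Spearman-Brown prophecy formula for the reliability of averaged parallel measurements. A quick check (a sketch, not the authors' code; the 0.939 and 0.836 inputs are the single-trial and single-day reliabilities reported in the abstract):

```python
def spearman_brown(r1, k):
    """Reliability of the mean of k parallel measurements,
    given single-measurement reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

# Per-trial reliability 0.939, averaged over five trials
print(round(spearman_brown(0.939, 5), 3))  # 0.987, as reported
# Per-day reliability 0.836, averaged over two days
print(round(spearman_brown(0.836, 2), 3))  # 0.911, as reported
```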
Time series analysis by the Maximum Entropy method
Kirk, B.L.; Rust, B.W.; Van Winkle, W.
1979-01-01
The principal subject of this report is the use of the Maximum Entropy method for spectral analysis of time series. The classical Fourier method is also discussed, mainly as a standard for comparison with the Maximum Entropy method. Examples are given which clearly demonstrate the superiority of the latter method over the former when the time series is short. The report also includes a chapter outlining the theory of the method, a discussion of the effects of noise in the data, a chapter on significance tests, a discussion of the problem of choosing the prediction filter length, and, most importantly, a description of a package of FORTRAN subroutines for making the various calculations. Cross-referenced program listings are given in the appendices. The report also includes a chapter demonstrating the use of the programs by means of an example. Real time series like the lynx data and sunspot numbers are also analyzed. 22 figures, 21 tables, 53 references.
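The FORTRAN package described in the report is not reproduced here, but the core of the Maximum Entropy method is Burg's recursion, which fits an autoregressive model by minimizing forward and backward prediction errors. A minimal pure-Python sketch (illustrative, not the report's code) shows why the method excels on short series: a few dozen samples of a cosine already yield a sharp frequency estimate:

```python
import math

def burg(x, order):
    """Burg's Maximum Entropy method: fit AR coefficients a[0..order]
    (with a[0] = 1) by minimizing forward + backward prediction error."""
    n = len(x)
    f = list(x)   # forward prediction errors
    b = list(x)   # backward prediction errors
    a = [1.0]
    for m in range(order):
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m + 1, n))
        k = num / den                    # reflection coefficient
        a = a + [0.0]                    # pad, then Levinson update
        a = [a[i] + k * a[m + 1 - i] for i in range(m + 2)]
        for i in range(n - 1, m, -1):    # update the error sequences
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
    return a

# Short cosine x[t] = cos(0.6 t); the AR(2) poles sit near exp(+-0.6j)
x = [math.cos(0.6 * t) for t in range(64)]
a = burg(x, 2)
freq = math.acos(-a[1] / (2.0 * math.sqrt(a[2])))
print(round(freq, 3))  # ~0.6
```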
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a “training” period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
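McGibbon's estimator itself is not reproduced here, but the identity it builds on, P(τ) = exp(Qτ) for generator Q, can be illustrated in closed form for a two-state process (a hypothetical example; note that the matrix logarithm of an empirical transition matrix need not be a valid generator, which is one motivation for a proper maximum likelihood approach):

```python
import math

def expm_2state(q01, q10, tau):
    """Transition matrix P = exp(Q*tau) for the two-state generator
    Q = [[-q01, q01], [q10, -q10]] (closed form for 2x2 generators)."""
    s = q01 + q10
    w = (1.0 - math.exp(-s * tau)) / s
    return [[1.0 - q01 * w, q01 * w],
            [q10 * w, 1.0 - q10 * w]]

def generator_from_P(P, tau):
    """Recover Q = log(P)/tau in the two-state case via the second
    eigenvalue lam = 1 - p01 - p10 of P (log(P) = c * (P - I))."""
    p01, p10 = P[0][1], P[1][0]
    lam = 1.0 - p01 - p10
    c = math.log(lam) / (lam - 1.0)
    return [[-c * p01 / tau, c * p01 / tau],
            [c * p10 / tau, -c * p10 / tau]]

P = expm_2state(0.3, 0.5, tau=0.1)
Q = generator_from_P(P, tau=0.1)
print(round(Q[0][1], 6), round(Q[1][0], 6))  # 0.3 0.5
```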
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12] which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
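The FIFO policy analyzed above is easy to simulate for unit-sized pages. The sketch below (illustrative, not the paper's construction; it assumes a request is satisfied only if it arrives at or before the broadcast's start) computes the maximum response time of a schedule:

```python
def fifo_broadcast(requests):
    """Simulate FIFO pull-based broadcast with unit-sized pages.
    requests: list of (arrival_time, page). Broadcasting page p
    satisfies every request for p that arrived at or before the
    broadcast's start. Returns the maximum response time."""
    pending = sorted(requests)            # FIFO order by arrival
    t = 0.0
    max_resp = 0.0
    while pending:
        t = max(t, pending[0][0])         # idle until first arrival
        page = pending[0][1]              # oldest outstanding request
        done = t + 1.0                    # unit transmission time
        served = [r for r in pending if r[1] == page and r[0] <= t]
        pending = [r for r in pending if not (r[1] == page and r[0] <= t)]
        for arr, _ in served:
            max_resp = max(max_resp, done - arr)
        t = done
    return max_resp

# a@0 served first (resp 1); b@0 waits one broadcast (resp 2);
# a@1 arrives after the first 'a' broadcast starts, so it waits too.
reqs = [(0, 'a'), (0, 'b'), (1, 'a')]
print(fifo_broadcast(reqs))  # 2.0
```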
Secondary Item Procurement Lead Time Study.
1984-03-01
Secondary Item Procurement Lead Time Study — Logistics Systems... Memorandum for the Assistant Secretary of the Air Force (RD&L) and the Director, Defense Logistics Agency. Subject: Secondary Item Procurement Lead Time Study. A recent report by the... determination of procurement lead time. A plan for the study is enclosed. In order to achieve the objectives of the procurement lead time study as well as the...
Production Planning with Load Dependent Lead Times
Pahl, Julia
2005-01-01
Lead times impact the performance of the supply chain significantly. Although there is a large literature concerning queuing models for the analysis of the relationship between capacity utilization and lead times, and there is a substantial literature concerning control and order release policies...... that take lead times into consideration, there have been only few papers describing models at the aggregate planning level that recognize the relationship between the planned utilization of capacity and lead times. In this paper we provide an in-depth discussion of the state-of-the art in this literature...
Load Dependent Lead Times and Sustainability
Pahl, Julia; Voss, Stefan
2016-01-01
to prevent decreased quality or waste of production parts and products. This gains importance because waiting times imply longer lead times charging the production system with work in process inventories. Longer lead times can lead to quality losses due to depreciation, so that parts need to be reworked...... if possible or discarded. But return flows of products for rework or remanufacturing actions significantly complicate the production planning process. We analyze sustainability options with respect to lead time management by formulating a comprehensive mathematical model. We consider a deterministic, mixed...
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
Time-Reversal Acoustics and Maximum-Entropy Imaging
Berryman, J G
2001-08-22
Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
Lead time TTO: leading to better health state valuations?
Attema, Arthur E; Versteegh, Matthijs M; Oppe, Mark; Brouwer, Werner B F; Stolk, Elly A
2013-04-01
Preference elicitation tasks for better than dead (BTD) and worse than dead (WTD) health states vary in the conventional time trade-off (TTO) procedure, casting doubt on uniformity of scale. 'Lead time TTO' (LT-TTO) was recently introduced to overcome the problem. We tested different specifications of LT-TTO in comparison with TTO in a within-subject design. We elicited preferences for six health states and employed an intertemporal ranking task as a benchmark to test the validity of the two methods. We also tested constant proportional trade-offs (CPTO), while correcting for discounting, and the effect of extending the lead time if a health state is considered substantially WTD. LT-TTO produced lower values for BTD states and higher values for WTD states. The validity of CPTO varied across tasks, but it was higher for LT-TTO than for TTO. Results indicate that the ratio of lead time to disease time has a greater impact on results than the total duration of the time frame. The intertemporal ranking task could not discriminate between TTO and LT-TTO.
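The LT-TTO scoring rule itself is simple: with a lead time in full health followed by the disease time in the target state, indifference against x years of full health pins down the state value on a single scale for both BTD and WTD states. A sketch (variable names illustrative):

```python
def lt_tto_value(x, lead, disease):
    """LT-TTO health-state value. A respondent facing `lead` years in
    full health followed by `disease` years in the target state is
    indifferent to `x` years in full health: lead + v*disease = x,
    so v = (x - lead) / disease, which may be negative (WTD)."""
    return (x - lead) / disease

# 10-year lead time, 5-year disease time:
print(lt_tto_value(12, 10, 5))  # 0.4  (better than dead)
print(lt_tto_value(8, 10, 5))   # -0.4 (worse than dead, same scale)
```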
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
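Prony's relation can be made concrete for the AR(2) case: a damped cosine rho^n * cos(theta*n) satisfies an AR(2) recursion with coefficients a1 = -2*rho*cos(theta) and a2 = rho^2, and the conjugate pole pair rho*e^(+-i*theta) carries the damping and the frequency (a sketch with illustrative values):

```python
import cmath
import math

rho, theta = 0.9, 0.5                           # damping radius, frequency
a1, a2 = -2 * rho * math.cos(theta), rho ** 2   # AR(2) coefficients

# Poles of z^2 + a1*z + a2; Prony's relation says z = rho * e^{+-i*theta}
z = (-a1 + cmath.sqrt(a1 * a1 - 4 * a2)) / 2
print(round(abs(z), 6), round(abs(cmath.phase(z)), 6))  # 0.9 0.5
```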
MARSpline model for lead seven-day maximum and minimum air temperature prediction in Chennai, India
K Ramesh; R Anitha
2014-06-01
In this study, a Multivariate Adaptive Regression Spline (MARS) based lead seven-day minimum and maximum surface air temperature prediction system is modelled for station Chennai, India. To emphasize the effectiveness of the proposed system, comparison is made with models created using the statistical learning technique Support Vector Machine Regression (SVMr). The analysis highlights that the prediction accuracy of the MARS models for minimum temperature forecasts is promising for short-term forecasts (lead days 1 to 3), with mean absolute error (MAE) less than 1°C, and the prediction efficiency and skill degrade in medium-term forecasts (lead days 4 to 7), with MAE slightly above 1°C. The MAE of the maximum temperature forecast is slightly higher than that of the minimum temperature forecast, varying from 0.87°C for lead day one to 1.27°C for lead day seven with the MARS approach. The statistical error analysis emphasizes that MARS models perform well, with an average 0.2°C reduction in MAE over SVMr models for all seven lead days, and provide significant guidance for the prediction of temperature events. The study also suggests that the correlation between the atmospheric parameters used as predictors and the temperature event decreases as the lag increases with both approaches.
Ashall, D.; Parkinson, B.
2002-01-01
Background: Over recent years many manufacturing organisations have focussed their attention on improving operational effectiveness. Whilst the improvements made have increased the level of manufacturing efficiency, in many cases these have not been reflected within SMEs by their response in meeting customer lead times. Aims: Whilst considerable efficiency gains have been made in the manufacturing environment, the improvements achieved have not necessarily been reflected in the attainment of customer d...
Maximum Time Separation of Events in Cyclic Systems with Linear and Latest Timing Constraints
Jin, Fen; Hulgaard, Henrik; Cerny, Eduard
1998-01-01
The determination of the maximum time separations of events is important in the design, synthesis, and verification of digital systems, especially in interface timing verification. Many researchers have explored solutions to the problem with various restrictions: a) on the type of constraints......, and b) on whether the events in the specification are allowed to occur repeatedly. When the events can occur only once, the problem is well solved. There are fewer concrete results for systems where the events can occur repeatedly. We extend the work by Hulgaard et al. for computing the maximum......
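For the single-occurrence case with linear constraints, the maximum separation reduces to a shortest-path computation on a difference-constraint graph (the standard reduction, not the cyclic-system algorithm of the paper). A sketch with hypothetical events:

```python
def max_separation(n, constraints, u, v):
    """Tightest upper bound on t_v - t_u given constraints
    lo <= t_j - t_i <= hi (single-occurrence events 0..n-1).
    Each constraint contributes edge i->j (weight hi) and j->i
    (weight -lo); the bound is the shortest-path distance from
    u to v, computed here with Bellman-Ford."""
    INF = float('inf')
    edges = []
    for i, j, lo, hi in constraints:
        edges.append((i, j, hi))
        edges.append((j, i, -lo))
    dist = [INF] * n
    dist[u] = 0.0
    for _ in range(n - 1):
        for i, j, w in edges:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    return dist[v]

# t_b - t_a in [1, 3] and t_c - t_b in [2, 4]  =>  max(t_c - t_a) = 7
cons = [(0, 1, 1, 3), (1, 2, 2, 4)]
print(max_separation(3, cons, 0, 2))  # 7.0
```

The same call with u and v swapped gives max(t_a − t_c), i.e. minus the minimum separation.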
Optimal lead time for dengue forecast.
Yien Ling Hii
BACKGROUND: A dengue early warning system aims to prevent a dengue outbreak by providing an accurate prediction of a rise in dengue cases and sufficient time to allow timely decisions and preventive measures to be taken by local authorities. This study seeks to identify the optimal lead time for warning of dengue cases in Singapore given the duration required by a local authority to curb an outbreak. METHODOLOGY AND FINDINGS: We developed a Poisson regression model to analyze relative risks of dengue cases as functions of weekly mean temperature and cumulative rainfall with lag times of 1-5 months using spline functions. We examined the duration of vector control and cluster management in dengue clusters with ≥ 10 cases from 2000 to 2010 and used the information as an indicative window of the time required to mitigate an outbreak. Finally, we assessed the gap between forecast and successful control to determine the optimal timing for issuing an early warning in the study area. Our findings show that increasing weekly mean temperature and cumulative rainfall precede risks of increasing dengue cases by 4-20 and 8-20 weeks, respectively. These lag times provided a forecast window of 1-5 months based on the observed weather data. Based on previous vector control operations, the time needed to curb dengue outbreaks ranged from 1-3 months with a median duration of 2 months. Thus, a dengue early warning forecast given 3 months ahead of the onset of a probable epidemic would give local authorities sufficient time to mitigate an outbreak. CONCLUSIONS: Optimal timing of a dengue forecast increases the functional value of an early warning system and enhances cost-effectiveness of vector control operations in response to forecasted risks. We emphasize the importance of considering the forecast-mitigation gaps in respective study areas when developing a dengue forecasting model.
Improving Order Lead Time: A Case Study
Villarreal, Bernardo; Salido, Lucy
2009-01-01
A fundamental challenge of globally competing companies is to increase their level of customer satisfaction, by devising and implementing strategies aimed at providing better price, quality, and service. This paper describes the efforts of a Mexican company to achieve this goal, and in particular, with the need to decrease order lead time…
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding task is a potential risk to the development of musculoskeletal injuries since it is prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding task. This study aimed to examine the effects of load and load's COG height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used to examine the effects of load and load's COG height on maximum holding endurance time. Four levels of load (15% , 30% , 45% and 60% of the participant's maximum holding capacity) and two levels of load's COG height in box (0 cm and 40 cm high from the handle position) were examined. Maximum holding endurance time decreased with increasing load and/or increasing load's COG height. The effect of load's COG height on maximum holding endurance time decreased with increasing load. Load, load's COG height, and the interaction of load and load's COG height significantly affected maximum holding endurance time. Practitioners should realize the effects of load, load's COG height, and the interaction of load and load's COG height on maximum holding endurance time when setting the working conditions of holding tasks.
Jun He; Xin Yao
2004-01-01
Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems. The time complexity of the algorithms for combinatorial optimisation has not been well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding the maximum cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
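The paper's algorithm and analysis are not reproduced here, but a toy (1+1) evolutionary algorithm over edge subsets, with invalid matchings penalized, conveys the setting (graph, penalty, and iteration budget are illustrative):

```python
import random

def ea_max_matching(edges, n_vertices, iters=5000, seed=1):
    """(1+1) EA: a bitstring selects edges; fitness is the matching
    size when the selection is a valid matching, otherwise heavily
    penalized by vertex conflicts. Standard bit-flip mutation, rate 1/m."""
    m = len(edges)
    rng = random.Random(seed)

    def fitness(bits):
        deg = [0] * n_vertices
        size = 0
        for b, (u, v) in zip(bits, edges):
            if b:
                deg[u] += 1
                deg[v] += 1
                size += 1
        conflicts = sum(d - 1 for d in deg if d > 1)
        return size - m * conflicts       # invalid selections score poorly

    x = [0] * m
    fx = fitness(x)
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / m) for b in x]  # flip each bit w.p. 1/m
        fy = fitness(y)
        if fy >= fx:                      # elitist acceptance
            x, fx = y, fy
    return fx

# Path on 5 vertices: the maximum matching has 2 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(ea_max_matching(edges, 5))  # 2
```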
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
Subduction zones are generally the sources of the earthquakes with the highest magnitudes. Not only in Japan or Chile, but also in Pakistan, the Solomon Islands or for the Lesser Antilles, subduction zones pose a significant hazard for the people. To understand the behavior of subduction zones, especially to identify their capabilities to produce maximum magnitude earthquakes, various physical models have been developed leading to a large number of various datasets, e.g. from geodesy, geomagnetics, structural geology, etc. There have been various studies to utilize this data for the compilation of a subduction zone parameters database, but mostly concentrating on only the major zones. Here, we compile the largest dataset of subduction zone parameters both in parameter diversity but also in the number of considered subduction zones. In total, more than 70 individual sources have been assessed and the aforementioned parametric data have been combined with seismological data and many more sources have been compiled leading to more than 60 individual parameters. Not all parameters have been resolved for each zone, since the data completeness depends on the data availability and quality for each source. In addition, the 3D down-dip geometry of a majority of the subduction zones has been resolved using historical earthquake hypocenter data and centroid moment tensors where available and additionally compared and verified with results from previous studies. With such a database, a statistical study has been undertaken to identify not only correlations between those parameters to estimate a parametric driven way to identify potentials for maximum possible magnitudes, but also to identify similarities between the sources themselves. This identification of similarities leads to a classification system for subduction zones. Here, it could be expected if two sources share enough common characteristics, other characteristics of interest may be similar as well. This concept
Modeling stochastic lead times in multi-echelon systems
Diks, E.B.; van der Heijden, Matthijs C.
1997-01-01
In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well
Random Supply, Constant Lead Times and Quadratic Backorder Costs. For Inventory Model (M, T)
Dr. Martin Omorodion
2014-01-01
This paper considers the inventory costs for the (M, T) model in which the backorder cost is quadratic, supply is continuous and lead time is constant. Use is made of Series 1: Inventory Model (M, T) with quadratic costs and continuous lead times. The inventory cost for fixed M (maximum re-order level) and constant lead times is averaged over the states of M. Supply is assumed to follow a gamma distribution. The inventory costs for the model, random supply, constant lead times and quadratic co...
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, an orthogonal polynomial and z-domain mapping hybrid basis function is presented, which has the advantageous numerical condition and with which it is convenient to calculate the modal parameters. A series of numerical examples have evaluated and illustrated the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments has further validated the proposed estimator.
Stochastic behavior of a cold standby system with maximum repair time
Ashish Kumar
2015-09-01
The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as cold standby. There is a single server who visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time at normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible up to a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time distributions of the unit are considered as exponentially distributed while repair and maintenance time distributions are considered as arbitrary. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained by using the technique of semi-Markov processes and RPT. To highlight the importance of the study, numerical results are also obtained for MTSF, availability and profit function.
A Maximum Time Difference Pipelined Arithmetic Unit Based on CMOS Gate Array
唐志敏; 夏培肃
1995-01-01
This paper describes a maximum time difference pipelined arithmetic chip, a 36-bit adder and subtractor based on a 1.5 μm CMOS gate array. The chip can operate at 60 MHz and consumes less than 0.5 W. The results are also studied, and a more precise model of the delay time difference is proposed.
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), and the amplitude is 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), and the maximum will be about 137 or 80, according as the cycle is a fast riser or a slow riser.
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum...... likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite...
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to make use of the concepts of maximum aerobic speed (MAS) and time limit (tlim) in order to determine the relationship between these two elements, and this in an attempt to significantly improve both speed and swimming performance during a training season. To this same end, an intermittent training model was used, which was adapted to the value obtained for the time limit at maximum aerobic speed. During a 12 week training period, the maximum aerobic speed for a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, thus confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12 week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Kirkegaard, Poul Henning; Nielsen, Søren R.K.; Micaletti, R. C.;
This paper considers estimation of the Maximum Damage Indicator (MSDI) by using time-frequency system identification techniques for an RC-structure subjected to earthquake excitation. The MSDI relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfre...
Recommended maximum holding times for prevention of discomfort of static standing postures
Miedema, M.C.; Douwes, M.; Dul, J.
1997-01-01
The aim of the present study was threefold; (1) to analyze the influence of posture on the maximum holding time (MHT), (2) to study the possibility of classifying postures on the basis of MHT, and (3) to develop ergonomic recommendations for the MHT of categories of postures. For these purposes data
Efficiency at maximum power output of quantum heat engines under finite-time operation
Wang, Jianhui; He, Jizhou; Wu, Zhaoqi
2012-03-01
We study the efficiency at maximum power, ηm, of irreversible quantum Carnot engines (QCEs) that perform finite-time cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For QCEs in the reversible limit (long cycle period, zero dissipation), ηm becomes identical to the Carnot efficiency ηC = 1 − Tc/Th. For QCE cycles in which nonadiabatic dissipation and the time spent on two adiabats are included, the efficiency ηm at maximum power output is bounded from above by ηC/(2 − ηC) and from below by ηC/2. In the case of symmetric dissipation, the Curzon-Ahlborn efficiency ηCA = 1 − √(Tc/Th) is recovered under the condition that the time allocation between the adiabats and the contact time with the reservoir satisfy a certain relation.
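The stated bounds are easy to check numerically: for any Tc < Th, the Curzon-Ahlborn value lies between the lower bound ηC/2 and the upper bound ηC/(2 − ηC). A quick verification with illustrative temperatures:

```python
import math

Tc, Th = 300.0, 600.0                     # illustrative reservoir temperatures
eta_C = 1 - Tc / Th                       # Carnot efficiency
eta_CA = 1 - math.sqrt(Tc / Th)           # Curzon-Ahlborn efficiency
lower, upper = eta_C / 2, eta_C / (2 - eta_C)

print(round(eta_C, 4), round(eta_CA, 4))  # 0.5 0.2929
print(lower <= eta_CA <= upper)           # True
```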
PROCESS INNOVATION: HOLISTIC SCENARIOS TO REDUCE TOTAL LEAD TIME
Alin POSTEUCĂ
2015-11-01
Full Text Available The globalization of markets requires continuous development of holistic business scenarios to ensure the flexibility needed to satisfy customers. Continuous improvement of the supply chain presupposes continuous improvement of material and product lead times and flows, of material and finished-product stocks, and an increase in the number of suppliers located as close by as possible. The contribution of our study is to present holistic scenarios for total lead time improvement and innovation through the implementation of supply chain policy.
Impact maturity times and citation time windows: The 2-year maximum journal impact factor
Dorta-Gonzalez, Pablo
2013-01-01
Journal metrics are employed for the assessment of scientific scholarly journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIF) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one- and two-year-old articles, while the 5-year journal impact factor (5-JIF) counts citations to one- to five-year-old articles. Nevertheless, these indicators are not comparable across fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behaviour across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable in fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields. In some of them two years provides a good performance whereas in others three or more years are...
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak-shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvolved using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak-shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
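The forward model described here (chromatogram = peak-shape matrix times concentration vector) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; a plain least-squares inverse stands in for the entropy-regularized fit, and the Gaussian peak shape and peak positions are invented for the example:

```python
import numpy as np

def peak_shape_matrix(n, width):
    """Columns are identical Gaussian peak shapes, one per retention index."""
    t = np.arange(n)
    return np.exp(-0.5 * ((t[:, None] - t[None, :]) / width) ** 2)

n = 100
S = peak_shape_matrix(n, width=3.0)
c_true = np.zeros(n)
c_true[[40, 48]] = [1.0, 0.6]        # two overlapping peaks
y = S @ c_true                        # noise-free chromatogram

# naive inverse for illustration; MaxEnt replaces this with an
# entropy-regularized fit that degrades gracefully under noise
c_est, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.linalg.norm(S @ c_est - y))
```

The trade-off the abstract mentions appears as soon as noise is added to `y`: the unregularized inverse amplifies it, which is what the entropic prior is there to control.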
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks †
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-01
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs’ demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays. PMID:28098750
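The max-consensus step underlying CMTS can be sketched in idealized form (hypothetical line topology, zero-delay links; the paper's cluster structure, overlapping-node exchange, and bounded-delay compensation are omitted):

```python
def max_consensus(clocks, neighbors, rounds):
    """One max-consensus sweep per round: every node adopts the largest
    clock value among itself and its neighbors (idealized, zero-delay links)."""
    clocks = list(clocks)
    for _ in range(rounds):
        new = clocks[:]
        for i, nbrs in enumerate(neighbors):
            new[i] = max([clocks[i]] + [clocks[j] for j in nbrs])
        clocks = new
    return clocks

# line topology 0-1-2-3-4; convergence needs diameter = 4 rounds
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
clocks = [3.0, 7.5, 1.2, 9.9, 4.4]
synced = max_consensus(clocks, neighbors, rounds=4)
print(synced)
```

All clocks converge to the network-wide maximum within a number of rounds equal to the network diameter, which is why clustering (reducing the effective diameter) speeds up convergence.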
Maximum-Likelihood Detection for Energy-Efficient Timing Acquisition in NB-IoT
2016-01-01
Initial timing acquisition in narrow-band IoT (NB-IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at the lowest possible latency, because the RF transceiver, which dominates the downlink power consumption of an NB-IoT modem, has to be turned on throughout this time. Auto-correlation detectors show low computational complexity from a signal processing point of view, at the price of a higher detection latency. In contrast, a maximum likelihood cro...
Prediction of maximum magnitude and origin time of reservoir-induced seismicity
Anonymous
2001-01-01
This paper deals with the prediction of the potential maximum magnitude and origin time of reservoir-induced seismicity (RIS). Seismological and geological factors and signs of RIS have been studied, and the quantity of information they provide about the magnitude of induced seismicity has been calculated. In terms of this information quantity, the largest possible magnitude of RIS is determined. The changes of seismic frequency with time are studied using the grey model method, and the time of the largest rate of change is taken as the origin time of the main shock. The feasibility of these methods for predicting magnitude and time has been tested on the reservoir-induced seismicity at the Xinfengjiang reservoir, China, and the Koyna reservoir, India.
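The grey model referred to here is presumably the standard GM(1,1); a generic fit-and-forecast sketch (not the paper's implementation, and with an invented input series) of that recursion:

```python
import numpy as np

def gm11_fit_predict(x0, steps):
    """Grey model GM(1,1): fit a positive series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean generating) values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat                              # fitted values, then the forecast

# a nearly exponential series is reproduced closely by GM(1,1)
series = [2.0, 2.2, 2.42, 2.662, 2.9282]
pred = gm11_fit_predict(series, steps=2)
print(pred)
```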
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute-value operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the R-R peak interval. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm based on the numerical power method to realize the subspace filter, and apply the fast Fourier transform (FFT) to realize the correlation technique, so that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
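A toy sketch of FFT-based periodicity estimation in the spirit of this abstract (a synthetic impulse train stands in for real ECG; the paper's subspace filter and ML formulation are not reproduced, and all parameter values are invented):

```python
import numpy as np

def rr_interval_fft(signal, fs, min_bpm=40, max_bpm=200):
    """Estimate the R-R interval (s) as the lag of the autocorrelation peak,
    with the correlation computed via FFT (Wiener-Khinchin)."""
    x = signal - np.mean(signal)
    spec = np.abs(np.fft.rfft(x, n=2 * len(x))) ** 2
    acf = np.fft.irfft(spec)[: len(x)]
    lag_lo = int(fs * 60.0 / max_bpm)         # shortest plausible R-R lag
    lag_hi = int(fs * 60.0 / min_bpm)         # longest plausible R-R lag
    lag = lag_lo + np.argmax(acf[lag_lo:lag_hi])
    return lag / fs

# synthetic 72 bpm impulse train sampled at 250 Hz
fs, rr = 250, 60.0 / 72.0
t = np.arange(0, 10, 1 / fs)
ecg = (np.mod(t, rr) < 1 / fs).astype(float)
print(rr_interval_fft(ecg, fs))
```

Restricting the search to a physiologically plausible lag range plays the same role as the priors in the ML formulation: it keeps harmonics and baseline drift from being mistaken for the heartbeat period.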
STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS
Jorge Machado Damázio
2015-08-01
Full Text Available DOI: 10.12957/cadest.2014.18302 The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans have been drawn up assuming stationarity of the historical flood time series. Land-cover changes and the rapidly increasing level of atmospheric greenhouse gases over the last century may be affecting flood regimes in these valleys, so non-stationary modelling may need to be applied to re-assess dam safety and flood control operation rules for the existing reservoir systems. Six annual maximum daily streamflow time series are analysed. The time series are plotted together with fitted smooth loess functions, and non-parametric statistical tests are performed to check the significance of apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location varies linearly with time. Stationary and non-stationary models are compared with the likelihood-ratio statistic. In four of the six analysed time series, non-stationary modelling outperformed stationary modelling. Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
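The stationary-versus-trend comparison via the likelihood ratio can be sketched generically: a Gumbel likelihood with and without a linear trend in the location parameter, fitted to simulated annual maxima. This assumes SciPy and is not the paper's code; the trend size and sample length are invented:

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_nll(params, x, t, trend):
    """Negative log-likelihood of a Gumbel model; location is mu0 (+ mu1*t if trend)."""
    if trend:
        mu0, mu1, sigma = params
        mu = mu0 + mu1 * t
    else:
        mu0, sigma = params
        mu = mu0
    if sigma <= 0:
        return np.inf
    z = (x - mu) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

rng = np.random.default_rng(0)
t = np.arange(60.0)
x = rng.gumbel(loc=100.0 + 0.5 * t, scale=10.0)   # maxima with an upward trend

fit0 = minimize(gumbel_nll, [x.mean(), x.std()], args=(x, t, False), method="Nelder-Mead")
fit1 = minimize(gumbel_nll, [x.mean(), 0.0, x.std()], args=(x, t, True), method="Nelder-Mead")

# likelihood-ratio statistic; compare with the chi-squared(1 dof) 5% value, 3.84
lr = 2.0 * (fit0.fun - fit1.fun)
print(lr)
```

A statistic above 3.84 favours the non-stationary model at the 5% level, which is the comparison the paper reports for its six series.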
The Research of Car-Following Model Based on Real-Time Maximum Deceleration
Longhai Yang
2015-01-01
Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is established accordingly by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, the vehicles drive onto a different surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving security and comfort but also control the steady driving of the vehicle with a smaller time headway than the IDM.
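For reference, the standard IDM acceleration law that this paper modifies (textbook parameter values; the paper's modified desired-minimum-gap term and real-time deceleration estimate are not shown):

```python
import math

def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model: acceleration from own speed v, gap to the
    leader, and approach rate dv = v - v_leader (standard parameter values)."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired minimum gap
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# free road: acceleration approaches a * (1 - (v/v0)**delta)
free = idm_acceleration(v=20.0, gap=1e9, dv=0.0)
# closing fast on a stopped leader: strong braking
brake = idm_acceleration(v=20.0, gap=20.0, dv=20.0)
print(free, brake)
```

Note that the braking term is unbounded, which is exactly where a real-time estimate of the physically available maximum deceleration (dry asphalt vs ice) matters.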
Fiebig, H R
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?
von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim
2003-04-01
New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that there was a maximum number of habitable planets around the time of Earth's origin. If at all, interstellar panspermia was most probable at that time and may have kick-started life on our planet.
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU-based systems.
Onset of effects of testosterone treatment and time span until maximum effects are achieved
Saad, Farid; Aversa, Antonio; Isidori, Andrea M; Zafalon, Livia; Zitzmann, Michael; Gooren, Louis
2011-01-01
Objective Testosterone has a spectrum of effects on the male organism. This review attempts to determine, from published studies, the time-course of the effects induced by testosterone replacement therapy from their first manifestation until maximum effects are attained. Design Literature data on testosterone replacement. Results Effects on sexual interest appear after 3 weeks plateauing at 6 weeks, with no further increments expected beyond. Changes in erections/ejaculations may require up to 6 months. Effects on quality of life manifest within 3–4 weeks, but maximum benefits take longer. Effects on depressive mood become detectable after 3–6 weeks with a maximum after 18–30 weeks. Effects on erythropoiesis are evident at 3 months, peaking at 9–12 months. Prostate-specific antigen and volume rise, marginally, plateauing at 12 months; further increase should be related to aging rather than therapy. Effects on lipids appear after 4 weeks, maximal after 6–12 months. Insulin sensitivity may improve within few days, but effects on glycemic control become evident only after 3–12 months. Changes in fat mass, lean body mass, and muscle strength occur within 12–16 weeks, stabilize at 6–12 months, but can marginally continue over years. Effects on inflammation occur within 3–12 weeks. Effects on bone are detectable already after 6 months while continuing at least for 3 years. Conclusion The time-course of the spectrum of effects of testosterone shows considerable variation, probably related to pharmacodynamics of the testosterone preparation. Genomic and non-genomic effects, androgen receptor polymorphism and intracellular steroid metabolism further contribute to such diversity. PMID:21753068
Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition
Wang, H.
2017-05-26
The conventional time-reversal imaging approach for locating micro-seismic or passive sources is based on focusing the back-propagated wavefields from each recorded trace into a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging that is based on comparing the amplitude variance within certain windows, and use it to suppress the artifacts as well as to find the correct location of passive sources. Instead of simply searching for the maximum-energy point in the back-propagated wavefield, we calculate the amplitude variance over a window moving along both the space and time axes to create a highly resolved passive-event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which shows that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which show reasonably good results.
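A brute-force sketch of the windowed-variance search described above, on a synthetic back-propagated wavefield (the field, window sizes, and source position are all invented for illustration; a real implementation would vectorize the scan):

```python
import numpy as np

def max_variance_location(wavefield, wx, wt):
    """Scan a (space, time) window over the back-propagated wavefield and
    return the window centre with the largest amplitude variance."""
    nx, nt = wavefield.shape
    best, best_ix, best_it = -1.0, 0, 0
    for ix in range(nx - wx + 1):
        for it in range(nt - wt + 1):
            v = np.var(wavefield[ix:ix + wx, it:it + wt])
            if v > best:
                best, best_ix, best_it = v, ix + wx // 2, it + wt // 2
    return best_ix, best_it

rng = np.random.default_rng(1)
field = 0.1 * rng.standard_normal((50, 200))      # background noise
field[24:27, 99:104] += 5.0                       # focused source energy
print(max_variance_location(field, wx=5, wt=9))
```

Because variance, unlike raw energy, is insensitive to a uniform amplitude offset, diffuse high-amplitude noise does not compete with the sharply focused source event.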
Abhishek Khanna
2012-01-01
Full Text Available We revisit the problem of optimal power extraction in four-step cycles (two adiabatic and two heat-transfer branches) when the finite-rate heat transfer obeys a linear law and the heat reservoirs have finite heat capacities. The heat-transfer branch follows a polytropic process in which the heat capacity of the working fluid stays constant. For the case of an ideal gas as the working fluid and a given switching time, it is shown that maximum work is obtained at the Curzon-Ahlborn efficiency. Our expressions clearly show the dependence on the relative magnitudes of the heat capacities of the fluid and the reservoirs. Many previous formulae, including those for infinite reservoirs, infinite-time cycles, and Carnot-like and non-Carnot-like cycles, are recovered as special cases of our model.
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization wants to achieve the maximum possible benefit from software testing with minimum resources. Testing time and cost need to be optimized to achieve a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
Maximum Likelihood Timing and Carrier Synchronization in Burst-Mode Satellite Transmissions
Morelli Michele
2007-01-01
Full Text Available This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
Urban, S.; Leitloff, J.; Hinz, S.
2016-06-01
In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimate of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties while remaining real-time capable. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations onto the corresponding vector tangent space.
JIN Qibing; LIU Qie; WANG Qi; TIAN Yuqi; WANG Yuanfei
2013-01-01
The IMC (Internal Model Control) controller based on robust tuning can improve the robustness and dynamic performance of the system. In this paper, the robustness degree of the control system is investigated in depth based on the Maximum Sensitivity (Ms). An analytical relationship is obtained between the robustness specification and the controller parameters, which gives a clear design criterion for a robust IMC controller. Moreover, a novel and simple IMC-PID (Proportional-Integral-Derivative) tuning method is proposed by converting the IMC controller to PID form in the time domain rather than the frequency domain adopted in some conventional IMC-based methods. Hence, the presented IMC-PID gives good performance with a specified robustness degree. The new IMC-PID method is compared with other classical IMC-PID rules, showing its flexibility and feasibility for a wide range of plants.
Liu, Jian; Miller, William H.
2008-08-01
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC-IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension, and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is already seen to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made of how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Glottal closure instant and voice source analysis using time-scale lines of maximum amplitude
Christophe D’Alessandro; Nicolas Sturmel
2011-10-01
The time-scale representation of voiced speech is applied to voice quality analysis by introducing the Line of Maximum Amplitude (LoMA) method. This representation takes advantage of the tree patterns observed for voiced speech periods in the time-scale domain. For each period, the optimal LoMA is computed by linking amplitude maxima at each scale of a wavelet transform, using a dynamic programming algorithm. A time-scale analysis of the linear acoustic model of speech production shows several interesting properties: the LoMA points to the glottal closure instants; the LoMA phase delay is linked to the voice open quotient; the cumulated amplitude along the LoMA is related to voicing amplitude; and the LoMA spectral centre of gravity is an indication of voice spectral tilt. Following these theoretical considerations, experimental results are reported. Comparative evaluation demonstrates that LoMA is an effective method for the detection of glottal closure instants (GCI). The effectiveness of LoMA analysis for open quotient, amplitude and spectral tilt estimation is also discussed with the help of some examples.
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
Köcher, S. S.; Heydenreich, T.; Zhang, Y.; Reddy, G. N. M.; Caldarelli, S.; Yuan, H.; Glaser, S. J.
2016-04-01
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse-sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in the achieved MaxQ amplitude and in pulse-sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction from hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. Then, we optimized the processing flow by computing the noise covariance matrix before the image covariance matrix, to reduce the transmission of the original hyperspectral image data. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the Compute Unified Device Architecture and the Basic Linear Algebra Subroutines library. In experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup over the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements of on-board real-time feature extraction.
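The noise-estimation step mentioned in this abstract can be sketched with a generic shift-difference estimator on synthetic data (a common choice in MNF-style transforms; this is not the paper's GPU kernel, and the cube dimensions and noise level are invented):

```python
import numpy as np

def noise_covariance(cube):
    """Estimate the noise covariance of a (rows, cols, bands) cube from
    horizontal neighbour differences (shift-difference), as used in
    MNF-style transforms; assumes the scene varies slowly across pixels."""
    d = cube[:, 1:, :] - cube[:, :-1, :]          # spatial neighbour residuals
    d = d.reshape(-1, cube.shape[2])
    return (d.T @ d) / (2.0 * d.shape[0])         # /2: differencing doubles noise variance

rng = np.random.default_rng(2)
smooth = np.linspace(0, 1, 64)[None, :, None] * np.ones((64, 64, 8))
cube = smooth + 0.1 * rng.standard_normal((64, 64, 8))
cov = noise_covariance(cube)
print(np.diag(cov))
```

Each thread in the paper's grid computes such residuals for its sub-block; the per-band variances recovered here are close to the injected noise variance of 0.01.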
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
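The white-plus-power-law noise model at the core of this analysis can be sketched numerically. The following Python fragment is an illustrative sketch, not Langbein's implementation: the function names, and the use of Hosking's fractional-difference recursion to generate the power-law filter, are our own choices. It builds the time-domain covariance matrix for combined white and power-law noise and evaluates the Gaussian negative log-likelihood of a residual vector:

```python
import numpy as np

def powerlaw_filter(n, alpha):
    """Fractional-difference filter coefficients (Hosking recursion) for 1/f^alpha noise."""
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

def neg_log_likelihood(residuals, sigma_w, sigma_pl, alpha):
    """Gaussian negative log-likelihood for white + power-law noise, time domain."""
    n = len(residuals)
    h = powerlaw_filter(n, alpha)
    # Lower-triangular transformation: colored noise = T @ white noise
    T = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
    C = sigma_w**2 * np.eye(n) + sigma_pl**2 * (T @ T.T)
    sign, logdet = np.linalg.slogdet(C)
    quad = residuals @ np.linalg.solve(C, residuals)
    return 0.5 * (logdet + quad + n * np.log(2 * np.pi))
```

A quick sanity check: for alpha = 2 (random-walk noise) the filter coefficients are all 1, and for sigma_pl = 0 the covariance collapses to the identity and the likelihood reduces to the ordinary white-noise form.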
The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare
Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland)]; Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]; Beer, J. [EAWAG, Duebendorf (Switzerland)]
1997-09-01
Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. ¹⁰Be and ²⁶Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000 ± 1800 years ago. (author) 1 fig., 5 refs.
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
Sibbett, Taylor; Moradi, Hussein; Farhang-Boroujeny, Behrouz
2016-12-01
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically require accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
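The max-location consistency idea can be illustrated with a small simulation. The sketch below is our own simplified construction, not the authors' exact metric: the circular-correlation detector, the tolerance parameter, and the function names are all assumptions. Acquisition is declared when the argmax lag of the matched-filter output agrees across successive code periods, with no SNR threshold involved:

```python
import numpy as np

def acquire_by_max_consistency(rx, spreading_code, n_periods, tol=1):
    """Declare acquisition if the argmax correlation lag is consistent
    across n_periods consecutive code periods (illustrative sketch)."""
    L = len(spreading_code)
    locs = []
    for p in range(n_periods):
        segment = rx[p * L:(p + 1) * L]
        # Circular correlation of the received period against the code
        corr = np.array([np.dot(np.roll(segment, -lag), spreading_code)
                         for lag in range(L)])
        locs.append(int(np.argmax(corr)))
    spread = max(locs) - min(locs)
    return spread <= tol, locs
```

With a noiseless delayed code, every period's maximum lands on the true delay, so the location spread is zero and acquisition succeeds without ever estimating the SNR.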
Guo-Jheng Yang
2013-08-01
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of creators, a watermark can be embedded in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is the topic of our research. This paper uses the logistic map with parameter u = 4 to generate chaotic dynamic behavior with maximum entropy of 1, which increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the logistic map, it seems to lack sufficient security. Therefore, our emphasis is on making small changes to the control of Arnold's cat map and to the initial value of the chaos system in order to generate different chaos sequences. Thus, the current time is used not only to make encryption more stringent but also to enhance the security of the digital media.
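The chaotic building blocks described above are easy to sketch. The fragment below is a minimal illustration, not the paper's implementation; thresholding the orbit at 0.5 to derive a keystream is our own assumption:

```python
def logistic_sequence(x0, n, u=4.0):
    """Iterate the logistic map x_{k+1} = u * x_k * (1 - x_k).
    At u = 4 the map is fully chaotic on (0, 1), giving maximum entropy."""
    xs, x = [], x0
    for _ in range(n):
        x = u * x * (1.0 - x)
        xs.append(x)
    return xs

def chaos_bits(x0, n):
    """Threshold the chaotic orbit at 0.5 to obtain a binary keystream."""
    return [1 if x >= 0.5 else 0 for x in logistic_sequence(x0, n)]

def arnold_cat(p, q, size):
    """One step of Arnold's cat map on a size x size pixel grid."""
    return (p + q) % size, (p + 2 * q) % size
```

A tiny change in the initial value x0 quickly yields a different bit sequence, which is exactly the sensitivity to initial conditions the scheme relies on.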
Ertas, Gokhan; Gulcur, H Ozcan; Tunaci, Mehtap
2008-05-01
The effectiveness of morphological descriptors based on normalized maximum intensity-time ratio (nMITR) maps, generated using a 3 x 3 pixel moving mask on dynamic contrast-enhanced magnetic resonance (DCE-MR) mammograms, is studied for assessment of malignancy. After a rough indication of the volume of interest on the nMITR maps, lesions are automatically segmented. Two-dimensional (2D) convexity, normalized complexity, extent, and eccentricity, as well as three-dimensional (3D) versions of these descriptors and the contact surface area ratio, are computed. On a data set consisting of DCE-MR mammograms from 51 women that contains 26 benign and 32 malignant lesions, 3D convexity, complexity, and extent are found to reflect aggressiveness of malignancy better than the 2D descriptors. The contact surface area ratio, which is easily adaptable to different imaging resolutions, is found to be the most significant and accurate descriptor (75% sensitivity, 88% specificity, 89% positive predictive value, and 74% negative predictive value).
The Maximum Coping Time Analysis of the ELAP for the OPR1400
Shin, Sung Hyun; Hah, Chang Joo [KINGS, Ulsan (Korea, Republic of)]; Jung, Si Chae; Lee, Chang Gyun [KEPCO E and C, Daejeon (Korea, Republic of)]
2014-05-15
There have been many evaluations of and recommendations for the extended Station Black Out (SBO) condition of nuclear power plants. For example, 'SECY-11-0093/0137' is a recommendation of the NRC, and 'WCAP-17601-P' is an evaluation by the PWROG. The extended loss of AC power (ELAP) can be defined as the extended (or prolonged) SBO, in which there is a Loss of Offsite Power (LOOP) condition and loss of all Emergency Diesel Generators (EDG) and the Alternative Alternating Current (AAC) source, while the Direct Current (DC) source remains available. This evaluation provides NSSS responses to an ELAP for the OPR1000 unit. The results describe the phenomena that occur during the ELAP and the maximum coping time until a core uncovery condition. It is assumed for this case that sufficient SG secondary makeup inventory exists or can be attained, so that the duration of the ELAP prior to core damage depends solely upon the loss of inventory from the RCS. Even with a limited RCS cooldown and depressurization, and a conservatively high assumed RCP seal leakage, the plant can be sustained for over 65 hours prior to core uncovery.
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code
Adel Ahmadi
2015-01-01
Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and the statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithm with 1024-QAM would decrease the computational complexity compared to the state-of-the-art solution with 16-QAM.
Susanne Wegener
After recanalization, cerebral blood flow (CBF) can increase above baseline in cerebral ischemia. However, the significance of post-ischemic hyperperfusion for tissue recovery remains unclear. To analyze the course of post-ischemic hyperperfusion and its impact on vascular function, we used magnetic resonance imaging (MRI) with pulsed arterial spin labeling (pASL) and measured CBF quantitatively during and after a 60-minute transient middle cerebral artery occlusion (MCAO) in adult rats. We added a 5% CO2 challenge to analyze vasoreactivity in the same animals. Results from MRI were compared to histological correlates of angiogenesis. We found that CBF in the ischemic area recovered within one day and reached values significantly above contralateral thereafter. The extent of hyperperfusion changed over time, which was related to final infarct size: early (day 1) maximal hyperperfusion was associated with smaller lesions, whereas a later (day 4) maximum indicated large lesions. Furthermore, after initial vasoparalysis within the ischemic area, vasoreactivity on day 14 was above baseline in a fraction of animals, along with a higher density of blood vessels in the ischemic border zone. These data provide further evidence that late post-ischemic hyperperfusion is a sequel of ischemic damage in regions that are likely to undergo infarction. However, it is transient, and its resolution coincides with regaining of vascular structure and function.
Effects of preload 4 repetition maximum on 100-m sprint times in collegiate women.
Linder, Elizabeth E; Prins, Jan H; Murata, Nathan M; Derenne, Coop; Morgan, Charles F; Solomon, John R
2010-05-01
The purpose of this study was to determine the effects of postactivation potentiation (PAP) on track-sprint performance after a preload set of 4 repetition maximum (4RM) parallel back half-squat exercises in collegiate women. All subjects (n = 12) participated in 2 testing sessions over a 3-week period. During the first testing session, subjects performed the Control protocol, consisting of a 4-minute standardized warm-up, followed by a 4-minute active rest, a 100-m track sprint, a second 4-minute active rest, finalized with a second 100-m sprint. The second testing session, the Treatment protocol, consisted of a 4-minute standardized warm-up, followed by a 4-minute active rest, a sprint, a second 4-minute active rest, a warm-up of 4RM parallel back half-squats, a third 9-minute active rest, finalized with a second sprint. The results indicated that there was a significant improvement of 0.19 seconds (p < 0.05) when the sprint was preceded by the 4RM back-squat protocol during Treatment. The standardized effect size, d, was 0.82, indicating a large effect size. Additionally, the results indicated that mean sprint times would be expected to decrease by 0.04-0.34 seconds (p < 0.05). The findings suggest that performing a 4RM parallel back half-squat warm-up before a track sprint will have a positive PAP effect, decreasing track-sprint times. Track coaches looking for the "competitive edge" (PAP effect) may re-warm up their sprinters during meets.
Jiang Zhu
2014-01-01
Some delta-nabla type maximum principles for second-order dynamic equations on time scales are proved. Using these maximum principles, the uniqueness theorems of the solutions, the approximation theorems of the solutions, the existence theorem, and construction techniques of the lower and upper solutions for second-order linear and nonlinear initial value problems and boundary value problems on time scales are proved; the oscillation of second-order mixed delta-nabla differential equations is discussed; and some maximum principles for second-order mixed forward and backward difference dynamic systems are proved.
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Maxim Nikolaievich Shokhirev
The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed, with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high-throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.
Becker, L. W. M.; Sejrup, H. P.; Hjelstuen, B. O. B.; Haflidason, H.
2016-12-01
The extent of the NW European ice sheet during the Last Glacial Maximum is fairly well constrained to, at least in periods, the shelf edge. However, the exact timing and varying activity of the largest ice stream, the Norwegian Channel Ice Stream (NCIS), remain uncertain. We here present three sediment records, recovered proximal and distal to the upper NW European continental slope. All age models for the cores are constructed in the same way and based solely on 14C dating of planktonic foraminifera. The sand-sized sediment in the discussed cores is believed to be primarily transported by ice rafting. All records suggest ice streaming activity between 25.8 and 18.5 ka BP. However, the core proximal to the mouth of the Norwegian Channel (NC) shows distinct periods of activity and periods of very little coarse sediment input. From this there appear to be at least three well-defined periods of ice streaming activity, each lasting 1.5 to 2 ka, with "pauses" of several hundred years in between. The same core shows a conspicuous variation in several proxies and in sediment colour within the first peak of ice stream activity, compared to the second and third peaks. The light grey colour of the sediment was earlier attributed to Triassic chalk grains, yet all "chalk" grains are in fact mollusc fragments. The low magnetic susceptibility values and the high Ca, high Sr and low Fe content compared to the other peaks suggest a different provenance for the material of the first peak. We suggest, therefore, that the origin of this material is the British Irish Ice Sheet (BIIS) rather than the Fennoscandian Ice Sheet (FIS). Earlier studies have shown an extent of the BIIS at least to the NC, whereas ice from the FIS likely stayed within the boundaries of the NC. A possible scenario for the different provenance could therefore be the build-up of the BIIS into the NC until it merged with the FIS. At this point the BIIS calved off the shelf edge southwest of the mouth of
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power of a quantum Otto engine, which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T_h and T_c (T_c < T_h), including the case of the Otto engine working in the linear-response regime.
Kozlowski, Dawid; Worthington, Dave
2015-01-01
Many public healthcare systems struggle with excessive waiting lists for elective patient treatment. Different countries address this problem in different ways, and one interesting method entails a maximum waiting time guarantee. Introduced in Denmark in 2002, it entitles patients to treatment at a private hospital in Denmark or at a hospital abroad if the public healthcare system is unable to provide treatment within the stated maximum waiting time guarantee. Although clearly very attractive in some respects, many stakeholders have been very concerned about the negative consequences of the policy on the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both continuous-time Markov chain and simulation models that can be used by hospital planners and strategic decision makers.
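The trade-off the paper analyses can be illustrated with the simplest analytic queueing result. The sketch below is our own toy example, not the authors' model: the M/M/1 queue, the function name, and the specific rates are all assumptions. It computes the probability that a patient's wait exceeds a guarantee, i.e. the fraction of patients such a policy would divert:

```python
import math

def mm1_wait_exceed(lam, mu, t):
    """M/M/1 FIFO queue: P(waiting time in queue > t) = rho * exp(-(mu - lam) * t),
    valid for lam < mu (stable queue)."""
    if lam >= mu:
        raise ValueError("unstable queue: need lam < mu")
    rho = lam / mu
    return rho * math.exp(-(mu - lam) * t)
```

For example, with lam = 0.8 arrivals and mu = 1.0 treatments per week, roughly 11% of patients would exceed a 10-week guarantee, giving a first estimate of the diverted demand before any detailed simulation.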
MIN Htwe, Y. M.
2016-12-01
Myanmar has suffered many times from earthquake disasters and four times from tsunamis according to historical data. The purpose of this study is to estimate the tsunami arrival time and maximum tsunami wave amplitude for the Rakhine coast of Myanmar using the TUNAMI F1 model. I calculate these quantities based on a tsunamigenic earthquake source of moment magnitude 8.5 in the Arakan subduction zone off the west coast of Myanmar, selecting eight points on the Rakhine coast. The model result indicates that the tsunami waves would first hit Kyaukpyu on the Rakhine coast about 0.05 minutes after the onset of a magnitude 8.5 earthquake, and the maximum tsunami wave amplitude would be 2.37 meters.
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One mathematical model of such flows is the modulated MAP flow of events operating under conditions of an unextendable dead time period. It is assumed that the dead time period is an unknown fixed value. The problem of estimating the dead time period from observations of the arrival times of events is solved by the method of maximum likelihood.
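For the simplest special case the estimator has a closed form. The sketch below is our own simplification to a stationary Poisson flow, not the modulated MAP setting of the paper: with an unextendable dead time tau, every inter-arrival time equals tau plus an exponential delay, so the likelihood increases in tau up to the smallest observed gap, where the MLE sits:

```python
def dead_time_mle(arrival_times):
    """MLE of a fixed unextendable dead time tau for a stationary Poisson flow.

    Inter-arrival times are tau + Exp(rate), so the likelihood
    rate^n * exp(-rate * sum(gap_i - tau)) (for tau <= min gap)
    is increasing in tau and maximized at the smallest observed gap."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return min(gaps)
```

In the modulated MAP setting of the paper the likelihood is more involved, but the same intuition applies: no observed gap can be shorter than the dead time, so the minimum gap carries the key information.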
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
Lead-Time Models Should Not Be Used to Estimate Overdiagnosis in Cancer Screening
Zahl, Per-Henrik; Jørgensen, Karsten Juhl; Gøtzsche, Peter C
2014-01-01
Lead-time can mean two different things: Clinical lead-time is the lead-time for clinically relevant tumors; that is, those that are not overdiagnosed. Model-based lead-time is a theoretical construct where the time when the tumor would have caused symptoms is not limited by the person's death. I...
Henning Grosse Ruse-Khan
2009-07-01
International intellectual property (IP) protection is at the heart of controversies over the impact of economic interests on social or environmental concerns. Some see IP rights as unduly encroaching upon human rights and societal interests; others argue for stronger enforcement and additional exclusivity to incentivize new innovations and creations. Underlying these debates is the perception that international IP treaties set out minimum standards of protection - which presumably allow for additional protection, with only the sky being the limit. This article challenges this view and explores the idea of maximum standards, or ceilings, within the existing body of international IP law. It looks at the relation between IP treaties and subsequent agreements or national laws which offer stronger protection. In particular, within the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), an important qualification may serve as a door opener for ceilings: while additional IP protection may not go beyond mandatory limits within TRIPS, the qualification not to "contravene" TRIPS is unlikely to safeguard TRIPS flexibilities against TRIPS-plus norms. The article further identifies and examines the rationales for maximum standards in international IP protection as: (1) legal security and predictability about the boundaries of protection; (2) the global protection of users' rights; and (3) the free movement of goods, services and information. Mandatory limits in the existing IP treaties and in ongoing initiatives can implement these; however, most of the relevant treaty norms are optional. The article concludes with some observations on the need for more comprehensive and precise maximum standards.
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power of a quantum Otto engine, which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T_h and T_c (T_c < T_h). The efficiencies at maximum power based on these two different kinds of quantum systems are bounded from above by the same expression η_mp ≤ η_+ ≡ η_C^2/[η_C - (1-η_C)ln(1-η_C)], with η_C = 1 - T_c/T_h as the Carnot efficiency. This expression for η_mp possesses the same universality as the CA efficiency η_CA = 1 - √(1-η_C) at small relative temperature difference. Within the context of irreversible thermodynamics, we calculate the Onsager coefficients and show that the value of η_CA is indeed the upper bound of the EMP for an Otto engine working in the linear-response regime.
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far past the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff who until now have had to determine TSS within an impractically short period.
John Affisco
2008-04-01
We study the impact of efforts aimed at reducing lead-time variability in a quality-adjusted stochastic inventory model. We assume that each lot contains a random number of defective units. More specifically, a logarithmic investment function is used that allows investment to be made to reduce lead-time variability. Explicit results for the optimal values of the decision variables, as well as the optimal value of the variance of lead-time, are obtained. A series of numerical exercises is presented to demonstrate the use of the models developed in this paper. Initially, the lead-time variance reduction model (LTVR) is compared to the quality-adjusted model (QA) for different values of initial lead-time over uniformly distributed lead-time intervals from one to seven weeks. In all cases where investment is warranted, investment in lead-time reduction results in reduced lot sizes, variances, and total inventory costs. Further, both the reduction in lot size and the reduction in lead-time variance increase as the lead-time interval increases. Similar results are obtained when lead-time follows a truncated normal distribution. The impact of the proportion of defective items was also examined for the uniform case, resulting in the finding that the total inventory-related costs of investing in lead-time variance reduction decrease significantly as the proportion defective decreases. Finally, the results of sensitivity analysis relating to proportion defective, interest rate, and setup cost show the lead-time variance reduction model to be quite robust and representative of practice.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimate by means of a simple theoretical approach as well as measurements...
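The attenuation effect can be reproduced with a toy simulation. The AR(1) surrogate series, its parameters, and the sampling spacings below are all assumptions for illustration, not the paper's data; the point is simply that a disjunct subsample or a longer averaging window can only lower the recorded annual maximum.

```python
import random

random.seed(42)
# AR(1) surrogate for one "year" of consecutive 10-min mean wind speeds
n = 52560                     # number of 10-min values in a year
phi, mean, sd = 0.95, 8.0, 2.0
x = [mean]
for _ in range(n - 1):
    x.append(mean + phi * (x[-1] - mean)
             + random.gauss(0, sd * (1 - phi ** 2) ** 0.5))

annual_max_full = max(x)
# disjunct sampling: keep one 10-min value every 3 hours (every 18th sample)
annual_max_disjunct = max(x[::18])
# longer averaging: non-overlapping 1-h (6-sample) block means
hourly = [sum(x[i:i + 6]) / 6 for i in range(0, n, 6)]
annual_max_hourly = max(hourly)
```

Both alternative maxima are necessarily no larger than the full-resolution maximum, which is the attenuation the paper quantifies theoretically.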
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Gorbachov, P.
2013-01-01
This paper deals with the problem of defining the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytically modelling this value for the case where the traffic schedule is unknown to the passengers, under two options of vehicle traffic management on the given route.
The Round-Robin Mock Interview: Maximum Learning in Minimum Time
Marks, Melanie; O'Connor, Abigail H.
2006-01-01
Interview skills are critical to a job seeker's success in obtaining employment. However, learning interview skills takes time. This article offers an activity for providing students with interview practice while sacrificing only a single classroom period. The authors begin by reviewing relevant literature. Then, they outline the process of…
Real time tests for long lead-time forecasting of the magnetic field vectors within CMEs
Savani, Neel; Vourlidas, Angelos; Pulkkinen, Antti; Wold, Alexandra M.
2016-07-01
The direction of magnetic vectors within coronal mass ejections (CMEs) has significant importance for forecasting terrestrial behavior. We have developed a technique to estimate the time-varying magnetic field at Earth for periods within CMEs (Savani et al., 2015, 2016). This technique reduces the complex dynamics in order to create a reliable prediction methodology that can operate every day under robust conditions. In this presentation, we focus on the results and skill scores of the forecasting technique calculated from 40 historical CME events from the pre-STEREO era. Since these results provided substantial improvements in the long lead-time Kp index forecasts, we have now begun testing under real-time conditions. We will also show the preliminary results of our methodology under these real-time conditions within the CCMC hosted at NASA Goddard Space Flight Center.
Do Declining Discount Rates lead to Time Inconsistent Economic Advice?
Hansen, Anders Chr.
2006-01-01
This paper addresses the risk of time inconsistency in economic appraisals related to the use of hyperbolic discounting (declining discount rates) instead of exponential discounting (constant discount rate). Many economists are uneasy about the prospects of potential time inconsistency. The paper...
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Real Time Corrosion Monitoring in Lead and Lead-Bismuth Systems
James F. Stubbins; Alan Bolind; Ziang Chen
2010-02-25
The objective of this research program is to develop a real-time, in situ corrosion monitoring technique for flowing liquid Pb and eutectic Pb-Bi (LBE) systems in a temperature range of 400 to 650 °C. These conditions are relevant to future liquid-metal-cooled fast reactor operating parameters. This program was aligned with the Gen IV reactor initiative to develop technologies to support the design and operation of a Pb- or LBE-cooled fast reactor. The ability to monitor corrosion for protection of structural components is a high-priority issue for the safe and prolonged operation of advanced liquid metal fast reactor systems. In those systems, protective oxide layers are intentionally formed and maintained to limit corrosion rates during operation. This program developed a real-time, in situ corrosion monitoring technique using impedance spectroscopy (IS) technology.
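A minimal sketch of the impedance-spectroscopy idea, under the common simplifying assumption that the protective oxide layer behaves as a parallel resistor-capacitor element in series with a bulk resistance (all component values are invented for illustration, not measured values from this program):

```python
import math

def impedance(f, r_s, r_ox, c_ox):
    """Equivalent-circuit impedance: series resistance r_s (bulk/contact)
    in series with the oxide layer modelled as parallel R-C."""
    w = 2 * math.pi * f
    return r_s + r_ox / (1 + 1j * w * r_ox * c_ox)

R_S, R_OX, C_OX = 10.0, 1000.0, 1e-6   # ohm, ohm, farad (assumed values)
z_low = impedance(1e-3, R_S, R_OX, C_OX)   # low frequency -> ~R_S + R_OX
z_high = impedance(1e8, R_S, R_OX, C_OX)   # high frequency -> ~R_S
```

Sweeping frequency and reading off the low- and high-frequency plateaus is what lets an IS measurement separate the oxide-layer resistance (a corrosion indicator) from the series resistance.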
Murray, M P; Baldwin, J M; Gardner, G M; Sepic, S B; Downs, W J
1977-06-01
Isometric torque of the knee flexor and extensor muscles was recorded for 5 seconds at three knee joint positions. The subjects were healthy men in two age groups, 20 to 35 and 45 to 65 years of age. The amplitude and duration of peak torque and the time to peak torque were measured for each contraction. Peak torque was usually maintained for less than 0.1 second and never longer than 0.9 second. At each of the three angles, the mean extensor muscle torque was higher than the mean flexor muscle torque in both age groups, and the mean torque for both muscle groups was higher among the younger than among the older men. The highest average torque was recorded at a knee angle of 60 degrees for the extensor muscles and 45 degrees for the flexor muscles, but this was not always a stereotyped response either for a given individual or among individuals.
Dirac's causality leads to time asymmetry
Sato, Y [Department of physics, University of Texas at Austin, Austin, TX78712 (United States); Kan, H [Hakodate National College of Technology, Tokura-cho 14-1, Hakodate, 042-8501 (Japan)], E-mail: satoyosh@physics.utexas.edu, E-mail: kan@physics.utexas.edu
2008-08-15
On the basis of Dirac's causality, we will show that the time evolution is limited to a semigroup. The abstract vector spaces for states and (yes-or-no) observables are then not the entire Hilbert space but particular dense subspaces of it, called Hardy spaces. The Hardy spaces and their functional spaces together make up the Hardy rigged Hilbert spaces, which are also called the time-asymmetric boundary condition (TABC). We will illustrate the usage of the TABC with the neutral kaon decay experiment.
Timing A Pulsed Thin Film Pyroelectric Generator For Maximum Power Density
Smith, A. N.; Hanrahan, B. M.; Neville, C. J.; Jankowski, N. R.
2016-11-01
Pyroelectric thermal-to-electric energy conversion is accomplished by a cyclic process of thermally inducing polarization changes in the material under an applied electric field. The pyroelectric MEMS device investigated consisted of a thin-film PZT capacitor with a platinum bottom electrode and an iridium oxide top electrode. Electric fields between 1-20 kV/cm with a 30% duty cycle and frequencies from 0.1-100 Hz were tested with a modulated continuous-wave IR laser with a duty cycle of 20%, creating temperature swings from 0.15-26 °C on the pyroelectric receiver. The net output power of the device was highly sensitive to the phase delay between the laser power and the applied electric field. A thermal model was developed to predict and explain the power loss associated with finite charge and discharge times. Excellent agreement was achieved between the theoretical model and the experimental results for the measured power density versus phase delay. Limitations on the charging and discharging rates result in reduced power and lower efficiency due to a reduced net work per cycle.
Biomechanical events in the time to exhaustion at maximum aerobic speed.
Gazeau, F; Koralsztein, J P; Billat, V
1997-10-01
Recent studies reported good intra-individual reproducibility, but great inter-individual variation in a sample of elite athletes, in time to exhaustion (tlim) at the maximal aerobic speed (MAS: the lowest speed that elicits VO2max in an incremental treadmill test). The purpose of the present study was, on the one hand, to detect modifications of kinematic variables at the end of the tlim VO2max test and, on the other hand, to evaluate the possibility that such modifications were factors responsible for the inter-individual variability in tlim. Eleven sub-elite male runners (age = 24 +/- 6 years; VO2max = 69.2 +/- 6.8 ml kg-1 min-1; MAS = 19.2 +/- 1.45 km h-1; tlim = 301.9 +/- 82.7 s) performed two exercise tests on a treadmill (0% slope): an incremental test to determine VO2max and MAS, and an exhaustive constant-velocity test to determine tlim at MAS. Statistically significant modifications were noted in several kinematic variables. The maximal angular velocity of the knee during flexion was the only variable that was both modified over the course of the tlim test and an influence on exercise duration. A multiple correlation analysis showed that tlim was predicted by the modifications of four variables (R = 0.995, P < 0.01). These variables are directly or indirectly related to the energy cost of running. It was concluded that runners who demonstrated stable running styles were able to run longer during the MAS test because of optimal motor efficiency.
How superluminal motion can lead to backward time travel
Nemiroff, Robert J
2015-01-01
It is commonly asserted that superluminal particle motion can enable backward time travel, but little has been written providing details. It is shown here that the simplest example of a "closed loop" event -- a twin paradox scenario where a single spaceship both travels out and returns superluminally -- does not result in that ship straightforwardly returning to its starting point before it left. However, a more complicated scenario -- one where the superluminal ship first arrives at an intermediate destination moving subluminally -- can result in backward time travel. This intermediate step might seem physically inconsequential but is shown to break Lorentz invariance and to be oddly tied to the sudden creation of a pair of spacecraft, one of which remains and one of which annihilates with the original spacecraft.
Determination of Preventive Maintenance Lead Time Using Hybrid Analysis
SUN Yong; MA Lin; Joseph Mathew; ZHANG Sheng
2005-01-01
The time for conducting preventive maintenance (PM) on an asset is often determined using a predefined alarm limit based on trends of a hazard function. In this paper, the authors propose using both hazard and reliability functions to improve the accuracy of the prediction, particularly when the failure characteristics over the asset's whole life are modelled using different failure distributions for the different stages of the asset's life. The proposed method is validated using simulations and case studies.
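The combined alarm rule can be sketched for a single Weibull-distributed failure stage; the distribution, its parameters, and both alarm limits below are hypothetical, chosen only to show how a reliability alarm can trip earlier than a hazard alarm.

```python
import math

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

def weibull_reliability(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

def pm_lead_time(beta, eta, h_limit, r_limit, dt=1.0, t_max=10000.0):
    """First time at which either the hazard alarm (h >= h_limit)
    or the reliability alarm (R <= r_limit) trips."""
    t = dt
    while t < t_max:
        if (weibull_hazard(t, beta, eta) >= h_limit
                or weibull_reliability(t, beta, eta) <= r_limit):
            return t
        t += dt
    return t_max

t_pm = pm_lead_time(beta=2.5, eta=1000.0, h_limit=0.005, r_limit=0.9)
```

With these values the reliability alarm trips around t ≈ 407 (where R drops to 0.9), long before the hazard alone would reach its limit, which is the paper's motivation for using both functions.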
Leus, G.; Petré, F.; Moonen, M.
2004-01-01
In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input
Continuous review inventory models under time value of money and crashable lead time consideration
Hung Kuo-Chen
2011-01-01
A stock is an asset if it can react to economic and seasonal influences in the management of current assets. The financial manager must plan the allocation of funds to stock intelligently, and the amount of money cycled through stocks, taking future time factors into account. The purpose of this paper is to propose an inventory model considering crash cost and present value. The sensitivity analysis of each parameter in this research differs from the traditional approach. We use rigorous mathematical deduction to develop several lemmas and one theorem for locating optimal solutions. The study first finds the optimal order quantity at all lengths of lead time with components crashed to their minimum duration. Second, a simple method to locate the optimal solution, unlike traditional sensitivity analysis, is developed. Finally, numerical examples are given to illustrate the lemmas and the theorem in the solution algorithm.
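Lead-time crashing of the kind used in this model family is commonly formalized as a set of components, each reducible from a normal to a minimum duration at a linear unit crash cost, crashed cheapest-first. A sketch under that assumption (the three-component data are invented):

```python
def crash_cost(components, target):
    """components: list of (normal, minimum, unit_crash_cost) tuples.
    Crash the cheapest components first until the total lead time
    reaches the target; return the total crashing cost."""
    total = sum(normal for normal, _, _ in components)
    if target > total:
        return 0.0                      # no crashing needed
    cost = 0.0
    for normal, minimum, c in sorted(components, key=lambda x: x[2]):
        reducible = normal - minimum
        take = min(reducible, total - target)
        cost += take * c
        total -= take
        if total <= target:
            break
    return cost

# hypothetical three-component lead time: (normal days, minimum days, $/day)
comps = [(20, 6, 0.4), (20, 6, 1.2), (16, 9, 5.0)]
cost_to_28 = crash_cost(comps, 28)      # crash from 56 days down to 28
```

This piecewise-linear crash-cost curve is what the model's lemmas optimize jointly with the order quantity and the discounting of costs over time.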
2017-01-13
Quality Improvement, Inventory Management, Lead Time Reduction and Production Scheduling in High-mix Manufacturing Environments, by Sean Daigle, B.S. Mechanical Engineering. The work addresses material shortage reduction and lead time reduction of system sub-assemblies; manufacturing quality was found to be impacted by material shortages.
Do lean practices lead to more time at the bedside?
Brackett, Tiffany; Comer, Linda; Whichello, Ramona
2013-01-01
The aim of this review is to evaluate the application of value-added processes in healthcare, with an emphasis on their effects on bedside nursing. Literature relevant to Lean methodology and inpatient care was reviewed, excluding all research related to other service lines (i.e., surgical services, emergency services, laboratory, radiology, etc.). Increased value is also an important tenet of transforming care at the bedside (TCAB), an initiative launched by the Institute for Healthcare Improvement (IHI) and the Robert Wood Johnson Foundation (RWJF). Therefore, articles concerning TCAB were also included in this review. A systematic study of the literature revealed varied applications of Lean principles in practice, ranging from the implementation of a single tool, to full organizational restructuring. All articles reviewed reported positive results, although the majority lacked strong supporting evidence for claims of improvement. Even though there is some indication that the application of Lean principles to nursing processes is successful in improving specific outcomes, the authors cannot conclude that the implementation of Lean methodology or TCAB greatly influences direct patient care, or increases time spent at the bedside. © 2011 National Association for Healthcare Quality.
Roussel-Dupre, R.; Symbalisty, E.; Fox, C.; and Vanderlinde, O.
2009-08-01
The location of a radiating source can be determined by time-tagging the arrival of the radiated signal at a network of spatially distributed sensors. The accuracy of this approach depends strongly on the particular time-tagging algorithm employed at each of the sensors. If different techniques are used across the network, then the time tags must be referenced to a common fiducial for maximum location accuracy. In this report we derive the time corrections needed to temporally align leading-edge, time-tagging techniques with peak-picking algorithms. We focus on broadband radio frequency (RF) sources, an ionospheric propagation channel, and narrowband receivers, but the final results can be generalized to apply to any source, propagation environment, and sensor. Our analytic results are checked against numerical simulations for a number of representative cases and agree with the specific leading-edge algorithm studied independently by Kim and Eng (1995) and Pongratz (2005 and 2007).
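The flavor of such a correction can be shown on a synthetic pulse: tag the arrival once with a leading-edge threshold and once by peak-picking, and take the difference. The Gaussian pulse shape, the 50%-of-peak threshold, and the timing grid are assumptions for illustration, not the report's specific algorithms.

```python
import math

def gaussian_pulse(t, t0, width):
    return math.exp(-((t - t0) / width) ** 2)

dt = 0.001                                # sample spacing (arbitrary units)
ts = [i * dt for i in range(20000)]
sig = [gaussian_pulse(t, t0=10.0, width=1.5) for t in ts]

# peak-picking tag: time of the signal maximum
peak_tag = ts[sig.index(max(sig))]

# leading-edge tag: first crossing of 50% of the peak amplitude
threshold = 0.5 * max(sig)
leading_edge_tag = next(t for t, s in zip(ts, sig) if s >= threshold)

# correction that temporally aligns the two time-tagging techniques
correction = peak_tag - leading_edge_tag
```

For this pulse the offset is width·sqrt(ln 2) ≈ 1.249 time units; a network mixing the two techniques would need exactly this kind of per-sensor correction referenced to a common fiducial.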
Santos W. N. dos
2003-01-01
The hot wire technique is considered to be an effective and accurate means of determining the thermal conductivity of ceramic materials. However, specifically for materials of high thermal diffusivity, the appropriate time interval to be considered in calculations is a decisive factor for obtaining accurate and consistent results. In this work, a numerical simulation model is proposed with the aim of determining the minimum and maximum measuring times for the hot wire parallel technique. The temperature profile generated by this model is in excellent agreement with the one obtained experimentally with this technique, where thermal conductivity, thermal diffusivity and specific heat are simultaneously determined from the same experimental temperature transient. Eighteen different specimens of refractory materials and polymers, with thermal diffusivities ranging from 1x10-7 to 70x10-7 m²/s, in the shape of rectangular parallelepipeds and with different dimensions, were employed in the experimental programme. An empirical equation relating the minimum and maximum measuring times to the thermal diffusivity of the sample is also obtained.
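The hot-wire reduction within a valid time window can be sketched as follows: in the long-time line-source approximation the temperature rise is linear in ln t, so the conductivity follows from the slope of that line. The heating power, material properties, radius, and time window below are invented for illustration.

```python
import math

q = 50.0       # W/m, assumed linear heating power of the wire
k_true = 1.2   # W/(m K), conductivity used to synthesize the transient
alpha = 5e-7   # m^2/s, assumed thermal diffusivity
r = 1e-4       # m, assumed radial distance of the temperature probe

def temp_rise(t):
    # long-time line-source solution:
    # dT = q/(4 pi k) * [ln(4 alpha t / r^2) - gamma]
    gamma = 0.5772156649  # Euler-Mascheroni constant
    return q / (4 * math.pi * k_true) * (math.log(4 * alpha * t / r ** 2) - gamma)

times = [5.0 + i for i in range(60)]      # assumed measuring window, s
temps = [temp_rise(t) for t in times]

# least-squares slope of dT versus ln(t) recovers q/(4 pi k)
xs = [math.log(t) for t in times]
xbar = sum(xs) / len(xs)
ybar = sum(temps) / len(temps)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, temps))
         / sum((x - xbar) ** 2 for x in xs))
k_est = q / (4 * math.pi * slope)
```

Choosing the minimum and maximum measuring times so the data actually lie in this linear-in-ln(t) regime is precisely what the paper's simulation model and empirical equation address.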
Almog, Assaf
2014-01-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of time series of activity of their fundamental elements (such as stocks or neurons respectively). While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relationships between binary and non-binary properties of financial time series. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to replicate the observed binary/non-binary relations very well, and to mathematically...
Chiuh Cheng Chyu
2012-06-01
This paper studies the unrelated parallel machine scheduling problem with three minimization objectives: makespan, maximum earliness, and maximum tardiness (MET-UPMSP). The last two objectives combined relate to the just-in-time (JIT) performance of a solution. Three hybrid algorithms are presented to solve the MET-UPMSP: reactive GRASP with path relinking, a dual-archived memetic algorithm (DAMA), and SPEA2. In order to improve solution quality, min-max matching is included in the decoding scheme of each algorithm. An experiment is conducted to evaluate the performance of the three algorithms, using 100 (jobs) × 3 (machines) and 200 × 5 problem instances with three combinations of two due-date factors, tightness and range. The numerical results indicate that DAMA performs best and GRASP second best for most problem instances on three performance metrics: HVR, GD, and Spread. The experimental results also show that incorporating min-max matching into the decoding scheme significantly improves solution quality for the two population-based algorithms. It is worth noting that the solutions produced by DAMA with matching decoding can be used as benchmarks to evaluate the performance of other algorithms.
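For very small instances, min-max (bottleneck) matching can even be done exhaustively, which makes the idea easy to see; the 3 × 3 job-machine cost matrix below is invented and the brute-force search is only a sketch, not the decoding scheme the paper uses.

```python
from itertools import permutations

def min_max_matching(cost):
    """Exhaustive bottleneck assignment: match each job to one machine
    so that the largest single assignment cost is minimized.
    Feasible only for small n (n! permutations)."""
    n = len(cost)
    best_val, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        val = max(cost[i][perm[i]] for i in range(n))
        if val < best_val:
            best_val, best_perm = val, perm
    return best_val, best_perm

# rows: jobs, columns: machines (hypothetical processing costs)
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
val, assignment = min_max_matching(cost)   # minimize the worst assignment
```

Minimizing the worst single assignment, rather than the sum, is what ties this matching step to the maximum-earliness and maximum-tardiness objectives.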
Zhaoyong Mao
2016-01-01
This paper addresses the power generation control system of a new drag-type vertical axis turbine with several retractable blades. The returning blades can be entirely hidden in the drum, and negative torques can then be considerably reduced as the drum shields the blades; thus, the power efficiency increases. Regarding the control, a Linear Quadratic Tracking (LQT) optimal control algorithm for Maximum Power Point Tracking (MPPT) is proposed to ensure that the wave energy conversion system can operate highly effectively under fluctuating conditions and that the tracking process accelerates over time. Two-dimensional Computational Fluid Dynamics (CFD) simulations are performed to obtain the maximum power points of the turbine's output. To plot the tip speed ratio curve, the least squares method is employed. The efficacy of the steady and dynamic performance of the control strategy was verified using Matlab/Simulink software. The validation results show that the proposed system can compensate for power fluctuations and is effective in terms of power regulation.
Comparison of Inventory Systems with Service, Positive Lead-Time, Loss, and Retrial of Customers
A. Krishnamoorthy
2007-01-01
We analyze and compare three (s, S) inventory systems with positive service time and retrial of customers. In all of these systems, arrivals of customers form a Poisson process and service times are exponentially distributed. When the inventory level depletes to s due to services, an order of replenishment is placed. The lead time follows an exponential distribution. In model I, an arriving customer, finding the inventory dry or the server busy, proceeds to an orbit with probability γ and is lost forever with probability (1 − γ). A retrial customer in the orbit, finding the inventory dry or the server busy, returns to the orbit with probability δ and is lost forever with probability (1 − δ). In addition to the description in model I, we provide a buffer of varying (finite) capacity equal to the current inventory level for model II, and another buffer having capacity equal to the maximum inventory level S for model III. In models II and III, an arriving customer, finding the buffer full, proceeds to an orbit with probability γ and is lost forever with probability (1 − γ); a retrial customer in the orbit, finding the buffer full, returns to the orbit with probability δ and is lost forever with probability (1 − δ). In all these models, the inter-retrial times are exponentially distributed with linear rate. Using the matrix-analytic method, we study these inventory models. Some measures of system performance in the steady state are derived. A suitable cost function is defined for all three cases and analyzed using graphical illustrations.
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
2010-10-01
49 CFR § 375.703 — What is the maximum collect-on-delivery amount I may demand at the time of delivery? (a) On a binding estimate, the maximum amount is the exact...
Inventory Model (Q, R) With Period of Grace, Quadratic Backorder Cost and Continuous Lead Time
Dr. Martin Osawaru Omorodion
2015-01-01
The paper considers the simple economic order model where a period of grace operates, the lead time is continuous, and the backorder cost is quadratic. The lead time follows a gamma distribution. The expected backorder cost per cycle is derived and averaged over all states of the lead time L. Next we obtain the expected on-hand inventory. The lead time is then taken as a normal variate; the expected backorder costs are derived, after which the expected on-hand inventory is derived. The...
'Just-in-Time' Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life
DeVault, Robert C [ORNL
2009-01-01
Conventional methods of vehicle operation for plug-in hybrid vehicles first discharge the battery to a minimum state of charge (SOC) before switching to charge-sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also to eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery, so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These just-in-time methods provide maximum effective battery life while drawing virtually the same electricity from the grid.
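A minimal sketch of the just-in-time idea: choose a constant battery share of the propulsion energy so that SOC lands exactly on the minimum at the plug-in destination, rather than draining the battery early. The function and all vehicle numbers below are invented, not the paper's control law.

```python
def jit_discharge_plan(distance_km, soc_start, soc_min, battery_kwh,
                       ev_kwh_per_km):
    """Blend battery and engine so SOC reaches soc_min exactly at the
    destination. Returns the battery's share of propulsion energy and
    the resulting battery draw per km."""
    usable_kwh = (soc_start - soc_min) * battery_kwh
    full_ev_kwh = distance_km * ev_kwh_per_km   # energy if all-electric
    battery_share = min(1.0, usable_kwh / full_ev_kwh)
    return battery_share, battery_share * ev_kwh_per_km

# hypothetical trip: 80 km, 16 kWh pack, 0.2 kWh/km electric consumption
share, rate = jit_discharge_plan(distance_km=80.0, soc_start=0.9,
                                 soc_min=0.3, battery_kwh=16.0,
                                 ev_kwh_per_km=0.2)
```

Here the battery supplies 60% of propulsion energy each kilometre, so SOC falls linearly from 0.9 to exactly 0.3 over the 80 km, with no charge-sustaining tail.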
Efficient Inventory Optimization of Multi Product, Multiple Suppliers with Lead Time using PSO
Narmadha, S; Sathish, G
2010-01-01
With the information revolution and increased globalization and competition, supply chains have become longer and more complicated than ever before. These developments bring supply chain management to the forefront of management's attention. Inventories are very important in a supply chain. The total investment in inventories is enormous, and the management of inventory is crucial to avoid shortages or delivery delays for customers and a serious drain on a company's financial resources. Supply chain cost increases because of the influence of lead times for supplying the stocks as well as the raw materials. In practice, lead times will not be the same throughout all periods. Maintaining abundant stocks in order to avoid the impact of a high lead time increases the holding cost; similarly, maintaining fewer stocks because of a ballpark lead time may lead to shortage of stocks. The same applies to the lead time involved in supplying raw materials. A better optimization methodology that utilizes the Part...
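A bare-bones particle swarm optimizer, applied here to a classic one-dimensional ordering-plus-holding cost curve rather than the paper's multi-product, multi-supplier formulation; the swarm parameters and cost constants are all invented for illustration.

```python
import random

def pso(cost, lo, hi, n_particles=20, iters=200, seed=1):
    """Minimal PSO for a 1-D cost function on [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pval = xs[:], [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            v = cost(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

# hypothetical total-cost curve: ordering cost K*D/Q + holding cost h*Q/2
K, D, h = 100.0, 1000.0, 5.0
total_cost = lambda Q: K * D / Q + h * Q / 2
q_star, c_star = pso(total_cost, 1.0, 1000.0)
```

For this smooth curve the true optimum is Q = 200 with cost 1000 (the EOQ), so the swarm's best cost should land very close to that; the appeal of PSO in the paper's setting is that the same loop works when the cost function is a multi-echelon simulation with stochastic lead times.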
Ivy-Ochs, Susan; Braakhekke, Jochem; Monegato, Giovanni; Gianotti, Franco; Forno, Gabriella; Hippe, Kristina; Christl, Marcus; Akçar, Naki; Schluechter, Christian
2017-04-01
The Last Glacial Maximum (LGM) in the Alps saw much of the mountains inundated by ice. Several main accumulation areas comprising local ice caps and plateau icefields fit into a picture of transection glaciers flowing into huge valley glaciers. In the north the valley glaciers covered long distances (hundreds of kilometers) to reach the forelands where they spread out in fan-shaped piedmont lobes tens of kilometers across, e.g. the Rhine glacier. In the south travel distances to the mountain front were often shorter, the pathway steeper. Nevertheless, not all glaciers even reached beyond the front, as the temperatures were notably warmer in the south. For example at Orta the glacier snout remained within the mountains. Where glaciers reached the forelands they stopped abruptly and the moraine amphitheaters were constructed, e.g. at Ivrea and Rivoli-Avigliana. Sets of stacked moraines built up as glacier advances were directly confined by the older moraines. We may temporally and spatially identify the culmination of the last glacial cycle by pinpointing the outermost moraines that date to the LGM (generally about 26-24 ka). On the other hand, the timing of abandonment of foreland positions is given by ages of the innermost, often lake-bounding, moraines (about 19-18 ka). Between the two, glacier fluctuations left the stadial moraines. In the Linth-Rhine system three stadials have been recognized: Killwangen, Schlieren and Zurich. Nevertheless, already in the Swiss sector correlation of the LGM stadials among the several foreland lobes is not unambiguous. Across the Alps, not only north to south but also west to east, how do the timing and extent of glaciers during the LGM vary? Recent glacier modelling by Seguinot et al. (2017) informs and suggests the possibility of differences in timing for reaching the maximum extent and in the number of oscillations of individual lobes during the LGM. At present few sites in the Alps have detailed enough geomorphological
Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex
2015-01-01
In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive in closed-form expressions the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cramér–Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
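The DA step can be sketched for the degenerate case of a constant channel over the pilot block (i.e., a zeroth-order polynomial-in-time expansion) with BPSK pilots on a single antenna in complex AWGN; all signal parameters below are invented, and this is a simplification of the paper's time-varying SIMO setting.

```python
import random

random.seed(7)
K = 4000                                 # number of known pilot symbols
true_h = 0.9 + 0.4j                      # assumed complex channel gain
true_sigma2 = 0.25                       # noise variance per complex sample
snr_true = abs(true_h) ** 2 / true_sigma2

# known unit-energy BPSK pilots and received samples r_k = h*s_k + n_k
s = [random.choice((-1.0, 1.0)) for _ in range(K)]
n = [complex(random.gauss(0, (true_sigma2 / 2) ** 0.5),
             random.gauss(0, (true_sigma2 / 2) ** 0.5)) for _ in range(K)]
r = [true_h * sk + nk for sk, nk in zip(s, n)]

# data-aided ML estimates: correlate against the known pilots
h_hat = sum(rk * sk for rk, sk in zip(r, s)) / K   # |s_k|^2 = 1, s real
sigma2_hat = sum(abs(rk - h_hat * sk) ** 2 for rk, sk in zip(r, s)) / K
snr_hat = abs(h_hat) ** 2 / sigma2_hat
```

The paper's contribution is what happens beyond this toy case: tracking h(t) locally with a polynomial expansion, de-biasing the estimator, and handling the NDA case via EM when the symbols are unknown.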
Prashant Jindal
2016-01-01
In the current critical global economic scenario, inflation plays a vital role in deciding optimal pricing of goods in any business entity. This article presents two single-vendor single-buyer integrated supply chain inventory models with inflation and time value of money. Shortage is allowed during the lead time and is partially backlogged. Lead time is controllable and can be reduced using crashing cost. In the first model, we consider that lead-time demand follows a normal distribution; in the second model, it is taken to be distribution-free. For both cases, our objective is to minimize the integrated system cost by simultaneously optimizing the order quantity, safety factor, lead time, and number of lots. The discounted cash flow and classical optimization techniques are used to derive the optimal solution for both cases. Numerical examples, including a sensitivity analysis of system parameters, are provided to validate the results of the supply chain models.
Hala A. Fergany
2005-01-01
This study treats a probabilistic safety-stock n-item inventory system having varying order cost and zero lead time, subject to two linear constraints. The expected total cost is composed of three components: the average purchase cost, the expected order cost, and the expected holding cost. The policy variables in this model are the number of periods Nr*, the optimal maximum inventory level Qmr*, and the minimum expected total cost. The optimal values of these policy variables are obtained using the geometric programming approach. A special case is deduced and an illustrative numerical example is added.
Vuillemin, Aurele; Ariztegui, Daniel; Leavitt, Peter R.; Bunting, Lynda
2014-05-01
Laguna Potrok Aike is a closed basin located in the southern hemisphere's mid-latitudes (52°S) where paleoenvironmental conditions were recorded as temporal sedimentary sequences resulting from variations in the regional hydrological regime and geology of the catchment. The interpretation of the limnogeological multiproxy record developed during the ICDP-PASADO project allowed the identification of contrasting time windows associated with the fluctuations of the Southern Westerly Winds. In the framework of this project, a 100-m-long core was also dedicated to a detailed geomicrobiological study which aimed at a thorough investigation of the lacustrine subsurface biosphere. Indeed, aquatic sediments not only record past climatic conditions, but also provide a wide range of ecological niches for microbes. In this context, the influence of environmental features upon microbial development and survival had remained unexplored for the deep lacustrine realm. Therefore, we investigated living microbes throughout the sedimentary sequence using in situ ATP assays and DAPI cell counts. These results, compiled with pore water analysis, SEM microscopy of authigenic concretions and methane and fatty acid biogeochemistry, provided evidence for sustained microbial activity in deep sediments and pinpointed the substantial role of microbial processes in modifying initial organic and mineral fractions. Finally, because the genetic material associated with microorganisms can be preserved in sediments over millennia, we extracted environmental DNA from Laguna Potrok Aike sediments and established 16S rRNA bacterial and archaeal clone libraries to better define the use of DNA-based techniques in reconstructing past environments. We focused on two sedimentary horizons, both displaying in situ microbial activity, corresponding respectively to the Holocene and Last Glacial Maximum periods. Sequences recovered from the productive Holocene record revealed a microbial community adapted to
A theory of lead-time in probabilistic excitation of L/H transition
Toda, Shinichiro; Itoh, Sanae-I.; Yagi, Masatoshi [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan); Fukuyama, Atsushi [Kyoto Univ. (Japan). Dept. of Nuclear Engineering
2000-07-01
A quantity, the lead time t_lead, is newly introduced to examine the probabilistic occurrence of the L/H transition. The lead time is the period during which a transition is likely to occur. We show that the lead time has a statistical distribution as a function of the distance from the critical parameter, e.g. |n_c - n_c0| when the density is the key parameter for the transition. It has a dependence of the form t_lead ∝ |n_c - n_c0|^2 if the background noise distribution is given as P(n_c) ∝ |n_c - n_c0|^(-2). (author)
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-06-16
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
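The delay-compensation step can be illustrated with a simpler stand-in for the paper's joint-ML criterion: align each trial to a template by maximizing the cross-correlation over candidate integer lags. A random template stands in for an ERP waveform here; all names and parameters are illustrative.

```python
# Simplified delay estimation by cross-correlation maximization.
# A stand-in illustration, not the joint-ML TDE scheme of the paper.
import random

def estimate_delay(ref, trial, max_lag):
    """Return the integer lag (in samples) that best aligns trial to ref."""
    best_lag, best_score = 0, float("-inf")
    n = len(ref)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(ref[i] * trial[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

random.seed(1)
ref = [random.gauss(0, 1) for _ in range(200)]   # random template (ERP stand-in)
true_delay = 7
# a trial = template shifted by the (unknown) delay, plus measurement noise
trial = [ref[(i - true_delay) % 200] + random.gauss(0, 0.3) for i in range(200)]
est = estimate_delay(ref, trial, max_lag=20)
print(est)  # recovers the 7-sample delay
```

In practice each trial would be shifted by its estimated delay before averaging, which is precisely the sensitivity the abstract describes: uncorrected delays smear the averaged ERP.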
Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.
2015-06-01
In this paper, a proportional-integral-derivative controller in series with a lead-lag filter is designed for control of the open-loop unstable processes with time delay based on direct synthesis method. Study of the performance of the designed controllers has been carried out on various unstable processes. Set-point weighting is considered to reduce the undesirable overshoot. The proposed scheme consists of only one tuning parameter, and systematic guidelines are provided for selection of the tuning parameter based on the peak value of the sensitivity function (Ms). Robustness analysis has been carried out based on sensitivity and complementary sensitivity functions. Nominal and robust control performances are achieved with the proposed method and improved closed-loop performances are obtained when compared to the recently reported methods in the literature.
A comparison of alternative variants of the lead and lag time TTO.
Devlin, Nancy; Buckingham, Ken; Shah, Koonal; Tsuchiya, Aki; Tilling, Carl; Wilkinson, Grahame; van Hout, Ben
2013-05-01
'Lead Time' TTO improves upon conventional TTO by providing a uniform method for eliciting positive and negative values. This research investigates (i) the values generated from different combinations of time in poor health and in full health; and the order in which these appear (lead vs. lag); (ii) whether values concur with participants' views about states; (iii) methods for handling extreme preferences. n = 208 participants valued five EQ-5D states, using two of four variants. Combinations of lead time and health state duration were: 10 years and 20 years; 5 years and 1 year; 5 years and 10 years; and a health state duration of 5 years with a lag time of 10 years. Longer lead times capture more preferences, but may involve a framing effect. Lag time results in less non-trading for mild states, and less time being traded for severe states. Negative values broadly agree with participants' stated opinion that the state is worse than dead. The values are sensitive to the ratio of lead time to duration of poor health, and the order in which these appear (lead vs. lag). It is feasible to handle extreme preferences though challenges remain.
LU Ri-Yu; LI Chao-Fan; Se-Hwan YANG; Buwen DONG
2012-01-01
Lead time length is an important issue for seasonal forecasting. In this study, a comparison of the interannual predictability of the western North Pacific (WNP) summer monsoon between different lead months was performed by using one-, four-, and seven-month lead retrospective forecasts (hindcasts) of four coupled models from Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) for the period 1960-2005. It is found that the WNP summer anomalies, including lower-tropospheric circulation and precipitation anomalies, can be well predicted for all these lead months. The accuracy of the four-month lead prediction is only slightly weaker than that of the one-month lead prediction, although skill decreases as the lead time increases.
Lagging/Leading Coupled Continuous Time Random Walks, Renewal Times and their Joint Limits
Straka, Peter
2010-01-01
Subordinating a random walk to a renewal process yields a continuous time random walk (CTRW) model for diffusion, including the possibility of anomalous diffusion. Transition densities of scaling limits of power law CTRWs have been shown to solve fractional Fokker-Planck equations. We consider limits of sequences of CTRWs which arise when both waiting times and jumps are taken from an infinitesimal triangular array. We identify two different limit processes $X_t$ and $Y_t$ when waiting times precede or follow jumps, respectively. In the limiting procedure, we keep track of the renewal times of the CTRWs and hence find two more limit processes. Finally, we calculate the joint law of all four limit processes evaluated at a fixed time $t$.
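The construction described above (a walk subordinated to a renewal process with heavy-tailed waiting times) is easy to simulate. The sketch below, with illustrative parameters, draws Pareto waiting times with infinite mean and shows the resulting subdiffusion: the mean-squared displacement grows like t^alpha rather than t.

```python
# Simulate a continuous time random walk (CTRW): each unit jump is preceded
# by a power-law (Pareto, infinite-mean) waiting time, the regime whose
# scaling limits solve fractional Fokker-Planck equations.
import random

def ctrw_position(t_final, alpha=0.7, rng=random):
    """Position at t_final when waiting times W satisfy P(W > w) = w**(-alpha)."""
    t, x = 0.0, 0
    while True:
        # inverse-CDF sample of a Pareto(alpha) waiting time, w >= 1
        w = (1.0 - rng.random()) ** (-1.0 / alpha)
        if t + w > t_final:       # the current wait straddles t_final:
            return x              # "waiting times precede jumps" (the X_t limit)
        t += w
        x += rng.choice((-1, 1))  # unit jump after the wait

random.seed(42)
positions = [ctrw_position(10000.0) for _ in range(500)]
msd = sum(p * p for p in positions) / len(positions)
print(round(msd, 1))  # far below the ~10000 MSD of a simple random walk
```

Returning `x` before adding the straddling jump corresponds to waiting times preceding jumps; returning `x + jump` instead would give the other limit process the abstract distinguishes.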
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying the DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for the resistant P. argentipes, estimating susceptibility by exposing sand flies to insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time to which DDT-resistant P. argentipes can endure the effect of DDT for their survival. The mortality rate of laboratory-reared DDT-resistant strain P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of tolerance limits with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
Improved higher lead time river flow forecasts using sequential neural network with error updating
Prakash Om
2014-03-01
This paper presents a novel framework to use artificial neural networks (ANN) for accurate forecasting of river flows at higher lead times. The proposed model, termed sequential ANN (SANN), is based on the heuristic that a mechanism providing an accurate representation of the physical condition of the basin at the time of forecast, in terms of input information to ANNs at higher lead times, helps improve the forecast accuracy. In SANN, a series of ANNs are connected sequentially to extend the lead time of forecast, each of them taking a forecast value from an immediately preceding network as input. The output of each network is modified by adding an expected value of error so that the residual variance of the forecast series is minimized. The applicability of SANN in hydrological forecasting is illustrated through three case examples: a hypothetical time series, daily river flow forecasting of the Kentucky River, USA, and hourly river flow forecasting of the Kolar River, India. The results demonstrate that SANN is capable of providing accurate forecasts up to 8 steps ahead. A very close fit (>94% efficiency) was obtained between computed and observed flows up to 1 hour in advance for all the cases, and the deterioration in fit was not significant as the forecast lead time increased (92% at 8 steps ahead). The results show that SANN performs much better than traditional ANN models in extending the forecast lead time, suggesting that it can be effectively employed in developing flood management measures.
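The chaining-plus-error-update idea can be sketched with a much simpler one-step predictor than an ANN: fit a one-step model, feed each forecast back in as the input for the next step, and add the mean of the in-sample one-step errors as a crude error update. Everything below (the AR(1) stand-in, the synthetic recession curve) is illustrative, not the paper's model.

```python
# Sequential multi-step forecasting with a simple error update.
# An AR(1) predictor stands in for the paper's chained ANNs.
def fit_ar1(series):
    """Least-squares fit of x[t+1] ~ a*x[t] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def sequential_forecast(series, steps):
    a, b = fit_ar1(series)
    residuals = [y - (a * x + b) for x, y in zip(series[:-1], series[1:])]
    bias = sum(residuals) / len(residuals)   # expected error, added back in
    x = series[-1]
    out = []
    for _ in range(steps):                   # each step consumes the previous forecast
        x = a * x + b + bias
        out.append(x)
    return out

# synthetic recession curve decaying toward a baseflow of 10
flow = [10 + 5 * 0.8 ** t for t in range(50)]
fc = sequential_forecast(flow, steps=8)
print([round(v, 3) for v in fc])  # forecasts hug the baseflow of 10
```

The paper's contribution is that each stage is a separately trained network with its own error model, so accuracy degrades only slowly with lead time; the sketch only shows the data flow.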
Infinitesimal dividing modeling method for dual suppliers inventory model with random lead times
Ji Pengcheng; Song Shiji; Wu Cheng
2009-01-01
As one of the basic inventory cost models, the (Q, r) inventory cost model for dual suppliers with random procurement lead time is mostly formulated using the concepts of "effective lead time" and "lead time demand", which may lead to an imprecise inventory cost. Through real-time statistics of the inventory quantities, this paper considers a precise (Q, r) inventory cost model for dual-supplier procurement using an infinitesimal dividing method. The traditional modeling method of the inventory cost for dual-supplier procurement involves complex procedures. To reduce this complexity effectively, the presented method investigates the real-time statistical properties of the inventory quantities by applying the infinitesimal dividing method. It is proved that the optimal holding and shortage costs of dual-supplier procurement are less than those of single-supplier procurement, respectively. Under the assumption that both suppliers have the same lead-time distribution, the convexity of the cost function per unit time is proved, so the optimal solution can be easily obtained by applying classical convex optimization methods. Numerical examples are given to verify the main conclusions.
Lead isotopic studies of lunar soils - Their bearing on the time scale of agglutinate formation
Church, S. E.; Tilton, G. R.; Chen, J. H.
1976-01-01
Fines (smaller than 75 microns) and bulk soil were studied to analyze loss of volatile lead; losses of the order of 10% to 30% radiogenic lead during the production of agglutinates are assessed. Lead isotope data from fine-agglutinate pairs are analyzed for information on the time scale of micrometeorite bombardment, from the chords generated by the data in concordia diagrams. Resulting mean lead loss ages were compared to spallogenic gas exposure ages for all samples. Labile parentless radiogenic Pb residing preferentially on or in the fines is viewed as possibly responsible for aberrant lead loss ages. Bulk soils plot above the concordia curve (in a field of excess radiogenic Pb) for all samples with anomalous ages.
Optimising seasonal streamflow forecast lead time for operational decision making in Australia
Schepen, Andrew; Zhao, Tongtiegang; Wang, Q. J.; Zhou, Senlin; Feikema, Paul
2016-10-01
Statistical seasonal forecasts of 3-month streamflow totals are released in Australia by the Bureau of Meteorology and updated on a monthly basis. The forecasts are often released in the second week of the forecast period, due to the onerous forecast production process. The current service relies on models built using data for complete calendar months, meaning the forecast production process cannot begin until the first day of the forecast period. Somehow, the bureau needs to transition to a service that provides forecasts before the beginning of the forecast period; timelier forecast release will become critical as sub-seasonal (monthly) forecasts are developed. Increasing the forecast lead time to one month ahead is not considered a viable option for Australian catchments that typically lack any predictability associated with snowmelt. The bureau's forecasts are built around Bayesian joint probability models that have antecedent streamflow, rainfall and climate indices as predictors. In this study, we adapt the modelling approach so that forecasts have any number of days of lead time. Daily streamflow and sea surface temperatures are used to develop predictors based on 28-day sliding windows. Forecasts are produced for 23 forecast locations with 0-14- and 21-day lead time. The forecasts are assessed in terms of continuous ranked probability score (CRPS) skill score and reliability metrics. CRPS skill scores, on average, reduce monotonically with increase in days of lead time, although both positive and negative differences are observed. Considering only skilful forecast locations, CRPS skill scores at 7-day lead time are reduced on average by 4 percentage points, with differences largely contained within +5 to -15 percentage points. A flexible forecasting system that allows for any number of days of lead time could benefit Australian seasonal streamflow forecast users by allowing more time for forecasts to be disseminated, comprehended and made use of prior to
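The verification metric used above can be made concrete. For an ensemble forecast, the CRPS has the standard identity CRPS = E|X - y| - 0.5*E|X - X'|, and the skill score compares it to a reference forecast such as climatology. The sketch below uses made-up numbers purely for illustration.

```python
# Continuous ranked probability score (CRPS) for an ensemble forecast,
# via the identity CRPS = E|X - y| - 0.5 * E|X - X'|, plus a skill score
# relative to a reference (e.g. climatological) ensemble.
def crps(ensemble, obs):
    n = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / n
    term2 = sum(abs(a - b) for a in ensemble for b in ensemble) / (2 * n * n)
    return term1 - term2

def crps_skill_score(fc_crps, ref_crps):
    return 1.0 - fc_crps / ref_crps   # 1 = perfect, 0 = no better than reference

sharp = [9.5, 10.0, 10.5, 10.2, 9.8]   # sharp forecast ensemble (illustrative units)
climo = [2.0, 6.0, 10.0, 14.0, 18.0]   # broad climatology reference
obs = 10.1
score_fc = crps(sharp, obs)
score_ref = crps(climo, obs)
skill = crps_skill_score(score_fc, score_ref)
print(round(score_fc, 3), round(score_ref, 3), round(skill, 3))
```

A lower CRPS is better; the skill score in percentage points is what the study reports dropping by about 4 points at 7-day lead time.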
ANALYSIS AND IMPROVEMENT OF LEAD TIME FOR JOB SHOP UNDER MIXED PRODUCTION SYSTEM
CHE Jianguo; HE Zhen; EDWARD M. Knod
2006-01-01
Firstly an overview of the potential impact on work-in-process (WIP) and lead time is provided when transfer lot sizes are undifferentiated from processing lot sizes. Simple performance examples are compared to those from a shop with one-piece transfer lots. Next, a mathematical programming model for minimizing lead time in the mixed-model job shop is presented, in which one-piece transfer lots are used. Key factors affecting lead time are found by analyzing the sum of the longest setup time of individual items among the shared processes (SLST) and the longest processing time of individual items among processes (LPT). And lead time can be minimized by cutting down the SLST and LPT. Reduction of the SLST is described as a traveling salesman problem (TSP), and the minimum of the SLST is solved through job shop scheduling. Removing the bottleneck and leveling the production line optimize the LPT. If the number of items produced is small, the routings are relatively short, and items and facilities are changed infrequently, the optimal schedule will remain valid. Finally a brief example serves to illustrate the method.
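The reduction of the setup-time sum (SLST) to a traveling salesman problem can be sketched directly: with sequence-dependent setup times, choosing the processing order of items is a small TSP, solvable by brute force at this scale. The setup matrix below is hypothetical.

```python
# Minimizing the sum of sequence-dependent setup times as a tiny TSP,
# solved by brute force over all processing orders. Data are illustrative.
from itertools import permutations

setup = {                       # setup[a][b]: setup time when switching a -> b
    'A': {'B': 4, 'C': 1},
    'B': {'A': 2, 'C': 3},
    'C': {'A': 5, 'B': 2},
}

def best_sequence(items):
    best = min(permutations(items),
               key=lambda seq: sum(setup[a][b] for a, b in zip(seq, seq[1:])))
    cost = sum(setup[a][b] for a, b in zip(best, best[1:]))
    return best, cost

seq, cost = best_sequence(['A', 'B', 'C'])
print(seq, cost)  # total setup time 3, e.g. order A -> C -> B
```

For realistically sized job shops, brute force is replaced by TSP heuristics or the job-shop scheduling the paper describes; the objective is the same.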
The impact of product configurators on lead times in engineering-oriented companies
Haug, Anders; Hvam, Lars; Mortensen, Niels Henrik
2011-01-01
vague when reporting the effects of configurator projects. Only six cases were identified which provide estimates of the actual size of lead time reduction achieved from product configurators. To broaden this knowledge, this paper presents the results of a study of 14 companies concerning the impact of product configurators on business processes related to the creation of quotes and detailed product specifications. The study documents impressive results of the application of configurator technology. For example, in the data retrieved, the use of configurators was estimated to have implied up to a 99.9% reduction of the quotation lead time, with an average estimated reduction of 85.5%.
Overestimated lead times in cancer screening has led to substantial underestimation of overdiagnosis
Zahl, P-H; Juhl Jørgensen, Karsten; Gøtzsche, P C
2013-01-01
Published lead time estimates in breast cancer screening vary from 1 to 7 years, and the percentages of overdiagnosis vary from 0 to 75%. The differences are usually explained as random variations. We study how much can be explained by using different definitions and methods.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
A System for Continuous Hydrological Ensemble Forecasting (SCHEF) to lead times of 9 days
Bennett, James C.; Robertson, David E.; Shrestha, Durga Lal; Wang, Q. J.; Enever, David; Hapuarachchi, Prasantha; Tuteja, Narendra K.
2014-11-01
This study describes a System for Continuous Hydrological Ensemble Forecasting (SCHEF) designed to forecast streamflows to lead times of 9 days. SCHEF is intended to support optimal management of water resources for consumptive and environmental purposes and ultimately to support the management of impending floods. Deterministic rainfall forecasts from the ACCESS-G numerical weather prediction (NWP) model are post-processed using a Bayesian joint probability model to correct biases and quantify uncertainty. Realistic temporal and spatial characteristics are instilled in the rainfall forecast ensemble with the Schaake shuffle. The ensemble rainfall forecasts are then used as inputs to the GR4H hydrological model to produce streamflow forecasts. A hydrological error correction is applied to ensure forecasts transition smoothly from recent streamflow observations. SCHEF forecasts streamflows skilfully for a range of hydrological and climate conditions. Skill is particularly evident in forecasts of streamflows at lead times of 1-6 days. Forecasts perform best in temperate perennially flowing rivers, while forecasts are poorest in intermittently flowing rivers. The poor streamflow forecasts in intermittent rivers are primarily the result of poor rainfall forecasts, rather than an inadequate representation of hydrological processes. Forecast uncertainty becomes more reliably quantified at longer lead times; however, there is considerable scope for improving the reliability of streamflow forecasts at all lead times. Additionally, we show that the choice of forecast time-step can influence forecast accuracy: forecasts generated at a 1-h time-step tend to be more accurate than at longer time-steps (e.g. 1-day). This is largely because at shorter time-steps the hydrological error correction can correct streamflow forecasts with more recent information, rather than because GR4H simulates hydrological processes better at shorter time-steps. SCHEF will form the
Naglaa H. El-Sodany
2011-01-01
Problem statement: This study treats a probabilistic safety stock n-item inventory system with varying holding cost and zero lead time, subject to a linear constraint. Approach: The expected total cost is composed of three components: the average purchase cost, the expected order cost and the expected holding cost. Results: The policy variables for this model are the number of periods N*r, the optimal maximum inventory level Q*mr and the minimum expected total cost. Conclusion/Recommendations: The optimal values of these policy variables can be obtained using the geometric programming approach. A special case is deduced and an illustrative numerical example is added.
The effects of lead time and visual aids in TTO valuation: a study of the EQ-VT framework
N. Luo (Nan); M. Li (Minghui); E.A. Stolk (Elly); N. Devlin (Nancy)
2013-01-01
Background: The effect of lead time in time trade-off (TTO) valuation is not well understood. The purpose of this study was to investigate the effects on health-state valuation of the length of lead time and the way the lead-time TTO task is displayed visually.
Reconstructing the life-time lead exposure in children using dentine in deciduous teeth
Shepherd, Thomas J., E-mail: shepherdtj@aol.com [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Dirks, Wendy [Centre for Oral Health Research, School of Dental Sciences, Newcastle University, Newcastle upon Tyne NE2 4BW (United Kingdom); Manmee, Charuwan; Hodgson, Susan [Institute of Health and Society, Newcastle University, Newcastle upon Tyne NE2 4AX (United Kingdom); Banks, David A. [School of Earth and Environment, University of Leeds, Leeds LS2 9JT (United Kingdom); Averley, Paul [Centre for Oral Health Research, School of Dental Sciences, Newcastle University, Newcastle upon Tyne NE2 4BW (United Kingdom); Queensway Dental Practice, 170 Queensway, Billingham, Teesside TS23 2NT (United Kingdom); Pless-Mulloli, Tanja [Institute of Health and Society, Newcastle University, Newcastle upon Tyne NE2 4AX (United Kingdom); Newcastle Institute for Research on Sustainability, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)
2012-05-15
Data are presented to demonstrate that the circumpulpal dentine of deciduous teeth can be used to reconstruct a detailed record of childhood exposure to lead. By combining high spatial resolution laser ablation ICP-MS with dental histology, information was acquired on the concentration of lead in dentine from in utero to several years after birth, using a true time template of dentine growth. Time corrected lead analyses for pairs of deciduous molars confirmed that between-tooth variation for the same child was negligible and that meaningful exposure histories can be obtained from a single, multi-point ablation transect on longitudinal sections of individual teeth. For a laser beam of 100 μm diameter, the lead signal for each ablation point represented a time span of 42 days. Simultaneous analyses for Sr, Zn and Mg suggest that the incorporation of Pb into dentine (carbonated apatite) is most likely controlled by nanocrystal growth mechanisms. The study also highlights the importance of discriminating between primary and secondary dentine and the dangers of translating lead analyses into blood lead estimates without determining the age or duration of dentine sampled. Further work is in progress to validate deciduous teeth as blood lead biomarkers. Highlights: Reconstruction of childhood exposure history to Pb using deciduous tooth dentine. Pb analyses acquired for dentine growth increments of 42 days. Highly correlated Pb concentration profiles for pairs of deciduous molars. Data for Sr, Zn and Mg provide a model for the incorporation of Pb into dentine.
Real-Time Observation of Organic Cation Reorientation in Methylammonium Lead Iodide Perovskites
Bakulin, Artem A.; Selig, Oleg; Bakker, Huib J.; Rezus, Yves L. A.; Mueller, Christian; Glaser, Tobias; Lovrincic, Robert; Sun, Zhenhua; Chen, Zhuoying; Walsh, Aron; Frost, Jarvist M.; Jansen, Thomas L. C.
2015-01-01
The introduction of a mobile and polarized organic moiety as a cation in 3D lead-iodide perovskites brings fascinating optoelectronic properties to these materials. The extent and the time scales of the orientational mobility of the organic cation and the molecular mechanism behind its motion remain
Inventory control in multi-echelon divergent systems with random lead times
van der Heijden, Matthijs C.; Diks, Erik; de Kok, Ton
1999-01-01
This paper deals with integral inventory control in multi-echelon divergent systems with stochastic lead times. The policy considered is an echelon stock, periodic review, order-up-to (R, S) policy. A computational method is derived to obtain the order-up-to level and the allocation fractions
Use of Six Sigma Methodology to Reduce Appointment Lead-Time in Obstetrics Outpatient Department.
Ortiz Barrios, Miguel A; Felizzola Jiménez, Heriberto
2016-10-01
This paper focuses on the issue of long appointment lead times in the obstetrics outpatient department of a maternal-child hospital in Colombia. Because of extended appointment lead times, women with high-risk pregnancies could develop severe complications in their health status and put their babies at risk. This problem was detected through a project selection process explained in this article, and Six Sigma methodology has been used to solve it. First, the process was defined through a SIPOC diagram to identify its input and output variables. Second, Six Sigma performance indicators were calculated to establish the process baseline. Then, a fishbone diagram was used to determine the possible causes of the problem. These causes were validated with the aid of correlation analysis and other statistical tools. Later, improvement strategies were designed to reduce appointment lead time in this department. Project results showed that the average appointment lead time was reduced from 6.89 days to 4.08 days and the standard deviation dropped from 1.57 days to 1.24 days. In this way, the hospital will serve pregnant women faster, which represents a risk reduction of perinatal and maternal mortality.
S. S. Mishra
2008-01-01
A probabilistic inventory model with a conditional credit period, exponential demand, non-zero lead time and multiple storage facilities has been developed. The behaviour of the total expected cost (TEC) has been examined, and the use and application of the model are demonstrated with the help of a numerical example.
Integration of capacity, pricing, and lead-time decisions in a decentralized supply chain
Zhu, Stuart X.
2015-01-01
We consider a decentralized supply chain consisting of a supplier and a retailer facing price- and lead-time-sensitive demand. The decision process is modelled by a Stackelberg game where the supplier, as a leader, determines the capacity and the wholesale price, and the retailer, as a follower, det
Rossi, R.; Tarim, S.A.; Hnich, B.; Prestwich, S.
2010-01-01
In this paper we address the general multi-period production/inventory problem with non-stationary stochastic demand and supplier lead-time under service level constraints. A replenishment cycle policy (Rn,Sn) is modeled, where Rn is the nth replenishment cycle length and Sn is the respective order-
Ming-Feng Yang
2016-01-01
Nowadays, keeping inventory at an adequate level and enhancing customer service are two critical practices for decision makers seeking advantages in supply chain management. Uncertain lead time and defective products strongly affect both inventory and service level. This study therefore develops a multiechelon integrated just-in-time inventory model with uncertain lead time and imperfect quality to enhance the benefits of the logistics model. In addition, an Ant Colony Algorithm (ACA) is established to determine the optimal solutions. Based on the proposed model and analysis, the ACA is more efficient than Particle Swarm Optimization (PSO) and Lingo for the SMEIJI model. An example is provided to illustrate how production run and defective rate affect system costs. Finally, the results provide managerial insights that support decision makers in real-world operations.
Inventory Model (M,T) With Quadratic Backorder Costs And Continuous Lead Time Series 1
Dr. Martin Osawaru Omorodion
2014-01-01
We assume in this paper that demand follows a normal distribution, lead times follow a gamma distribution, and backorder costs are quadratic. In the (M, T) model, an order is placed at review time to bring the inventory position up to M. The model is derived from the (nQ, R, T) model, in which an integral multiple of Q is ordered at review time. After deriving the inventory costs for the (nQ, R, T) model, we let Q → 0 to obtain the (M, T) inventory costs, making use of differentiation of the (nQ, R, T) model. The (M, T) inventory cos...
Djeison Cesar Batista
2011-09-01
Thermal rectification of wood was developed in the 1940s and has been widely studied and applied in Europe. In Brazil, research on this technique is still scarce, but it has gained attention recently. The aim of this study was to evaluate the influence of rectification time and temperature on the reduction of maximum swelling of Eucalyptus grandis wood. According to the results, reductions of about 50% in the maximum volumetric swelling of Eucalyptus grandis wood can be achieved. Better results were obtained at 230°C than at 200°C. Temperature was more significant than time, since there was no significant difference among the times used (1, 2, and 3 hours). There was no significant interaction between the factors time and temperature.
Johansen, Søren Glud; Thorstenson, Anders
2008-01-01
We extend well-known formulae for the optimal base stock of the inventory system with continuous review and constant lead time to the case with periodic review and stochastic, sequential lead times. Our extension uses the notion of the 'extended lead time'. The derived performance measures...
The ρ-meson time-like form factors in sub-leading pQCD
de Melo, J. P. B. C.; Ji, Chueng-Ryong; Frederico, T.
2016-12-01
The annihilation/production process e⁺e⁻ → ρ⁺ρ⁻ is studied with respect to the universal perturbative QCD (pQCD) predictions. Sub-leading contributions are considered together with the universal leading pQCD amplitudes such that the matrix elements of the ρ-meson electromagnetic current satisfy the constraint from the light-front angular condition. The data from the BaBar collaboration for the time-like ρ-meson form factors at √s = 10.58 GeV put a stringent test to the onset of asymptotic pQCD behavior. The e⁺e⁻ → ρ⁺ρ⁻ cross-section for s between 60 GeV² and 160 GeV² is predicted, where the sub-leading contributions are still considerable.
Stochastic integrated vendor–buyer model with unstable lead time and setup cost
Chandra K. Jaggi
2011-01-01
This paper presents a new vendor-buyer system with different objectives for each side, which distinguishes it from previously published work. The vendor's emphasis is on crashing the setup cost, which not only helps him compete in the market but also provides better service to his customers; the buyer's aim is to reduce the lead time, which both helps the buyer fulfill customers' demand on time and earns him a good reputation in the market. In light of these facts, an integrated vendor-buyer stochastic inventory model is developed. The proposed model considers two cases for demand during lead time: Case (i) complete demand information; Case (ii) partial demand information. The model jointly optimizes the buyer's order quantity and lead time along with the vendor's setup cost and the number of shipments. The results are demonstrated with the help of numerical examples.
Faicel HNAIEN; Alexandre DOLGUI; Mohamed-Aly OULD LOULY
2008-01-01
This paper deals with the problem of planned lead time calculation in a Material Requirement Planning (MRP) environment under stochastic lead times. The objective is to minimize the sum of holding and backlogging costs. The proposed approach is based on discrete time inventory control where the decision variables are integer. Two types of systems are considered: multi-level serial-production and assembly systems. For the serial production systems (one type of component at each level), a mathematical model is suggested. Then, it is proven that this model is equivalent to the well known discrete Newsboy Model. This directly provides the optimal values for the planned lead times. For multilevel assembly systems, a dedicated model is proposed and some properties of the decision variables and objective function are proven. These properties are used to calculate lower and upper limits on the decision variables and lower and upper bounds on the objective function. The obtained limits and bounds open the possibility to develop an efficient optimization algorithm using, for example, a Branch and Bound approach. The paper presents the proposed models in detail with corresponding proofs and several numerical examples. Some advantages of the suggested models and perspectives of this research are discussed.
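The reduction of the serial-system case to the discrete newsboy model lends itself to a short computation. A minimal sketch, with a purely illustrative lead-time distribution and cost values (not the paper's data): choose the smallest planned lead time whose cumulative probability reaches the critical fractile b/(b+h).

```python
# Discrete newsboy: choose the smallest planned lead time L whose
# cumulative lead-time probability reaches the critical fractile
# b / (b + h), where b is the backlogging cost and h the holding cost.
# The pmf below is purely illustrative.

def planned_lead_time(pmf, h, b):
    """pmf: dict {lead_time_periods: probability}; returns optimal L."""
    fractile = b / (b + h)
    cum = 0.0
    for L in sorted(pmf):
        cum += pmf[L]
        if cum >= fractile:
            return L
    return max(pmf)  # fall back to the largest support point

# Example: lead time of 1..4 periods, backlogging cost 9, holding cost 1
pmf = {1: 0.2, 2: 0.4, 3: 0.3, 4: 0.1}
print(planned_lead_time(pmf, h=1, b=9))  # critical fractile 0.9 -> L = 3
```

Raising the backlogging cost relative to the holding cost pushes the critical fractile toward 1 and lengthens the planned lead time, which is the qualitative behavior the model's bounds exploit.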
Verification of short lead time forecast models: applied to Kp and Dst forecasting
Wintoft, Peter; Wik, Magnus
2016-04-01
In the ongoing EU/H2020 project PROGRESS, models that predict Kp, Dst, and AE from L1 solar wind data will be used as inputs to radiation belt models. The possible lead times from L1 measurements (tens of minutes to hours) are shorter than the typical duration of the physical phenomena to be forecast. Under these circumstances several metrics fail to single out trivial cases, such as persistence. In this work we explore metrics and approaches for short-lead-time forecasts and apply them to current Kp and Dst forecast models. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637302.
Chen Poyu
2013-01-01
Products made overseas but sold in Taiwan are very common. For cross-border or interregional production and marketing of goods, inventory decision-makers often must determine the amount purchased per cycle, the number of transport vehicles, the working hours of each vehicle, and whether delivery to sales offices is by ground or air, so as to minimize the total inventory cost per unit time. This model assumes that the amount purchased in each order cycle allows all rented vehicles to be fully loaded and the transport times to reach the upper limit within the time period. The main findings of this study include the optimal solution of the model's integer program and the results of a sensitivity analysis.
Prashant Jindal
2016-06-01
For the past four decades the integrated vendor-buyer supply chain inventory model has been an interesting topic, but quality improvement of defective items in the integrated inventory model with backorder price discount and controllable lead time has rarely been discussed. The aim of this paper is to minimize the total related cost in the continuous review model by considering the order quantity, reorder point, lead time, process quality, backorder price discount, and number of shipments as decision variables. Moreover, we assume that an investment function is used to improve the process quality, and the lead time demand follows a normal distribution. In addition, the buyer offers a backorder price discount to motivate customers toward possible backorders. Since the arriving lot contains some defective items, their treatment is also taken into account. We develop an iterative procedure for finding the optimal values of the decision variables, present a numerical example to illustrate the solution procedure, and carry out a sensitivity analysis with respect to the major parameters.
Strategic Inventory Positioning in BOM with Multiple Parents Using ASR Lead Time
Jingjing Jiang
2016-01-01
In order to meet the lead times that customers require, work-in-process inventory (WIPI) is necessary at almost every station in most make-to-order manufacturing. Depending on the station network configuration and the lead time at each station, some WIPI does not contribute to reducing the manufacturing lead time of the final product at all. It is therefore important to identify the optimal set of stations to hold WIPI such that total inventory holding cost is minimized while the required due date for the final product is met. The authors previously presented a model to determine the optimal position and quantity of WIPI for a simple bill of material (S-BOM), in which any part in the BOM has only one immediate parent node. In this paper, we extend that study to the general BOM (G-BOM), in which parts can have more than one immediate parent, and present a new solution procedure using a genetic algorithm.
Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.
2017-02-01
Calibration of instrumentation equipment in the pharmaceutical industry is an important activity to determine the true value of a measurement. Preliminary studies indicated that long calibration lead times disrupted production and laboratory activities. This study aimed to analyze the causes of the calibration lead time. Several methods were used: Six Sigma, to determine the process capability of equipment calibration; brainstorming, Pareto diagrams, and fishbone diagrams, to identify and analyze the problems; and the Analytic Hierarchy Process (AHP), to create a hierarchical structure and prioritize problems. The results showed a DPMO of about 40769.23, equivalent to a sigma level of approximately 3.24σ in equipment calibration, indicating the need for improvements in the calibration process. Problem-solving strategies for the calibration lead time include shortening the preventive maintenance schedule, increasing the number of calibrator instruments, and training personnel. Consistency tests on the pairwise comparison matrices of the hierarchy showed CR values below 0.1.
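The DPMO-to-sigma conversion quoted above can be checked directly. A minimal sketch using the conventional 1.5σ shift (the standard Six Sigma convention, assumed here rather than stated in the abstract):

```python
# Convert long-term DPMO to a short-term sigma level using the
# conventional 1.5-sigma shift; reproduces the ~3.24 sigma quoted
# for DPMO = 40769.23.
from statistics import NormalDist

def sigma_level(dpmo):
    yield_frac = 1 - dpmo / 1_000_000        # fraction of defect-free opportunities
    return NormalDist().inv_cdf(yield_frac) + 1.5  # 1.5-sigma shift convention

print(round(sigma_level(40769.23), 2))  # -> 3.24
```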
Enhancing Nursing Staffing Forecasting With Safety Stock Over Lead Time Modeling.
McNair, Douglas S
2015-01-01
In balancing competing priorities, it is essential that nursing staffing provide enough nurses to safely and effectively care for the patients. Mathematical models to predict optimal "safety stocks" have been routine in supply chain management for many years but have up to now not been applied in nursing workforce management. There are various aspects that exhibit similarities between the 2 disciplines, such as an evolving demand forecast according to acuity and the fact that provisioning "stock" to meet demand in a future period has nonzero variable lead time. Under assumptions about the forecasts (eg, the demand process is well fit as an autoregressive process) and about the labor supply process (≥1 shifts' lead time), we show that safety stock over lead time for such systems is effectively equivalent to the corresponding well-studied problem for systems with stationary demand bounds and base stock policies. Hence, we can apply existing models from supply chain analytics to find the optimal safety levels of nurse staffing. We use a case study with real data to demonstrate that there are significant benefits from the inclusion of the forecast process when determining the optimal safety stocks.
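The safety-stock logic borrowed from supply chain analytics can be sketched with the textbook formula for safety stock under uncertain demand and uncertain lead time; all numbers below are illustrative, not the paper's case-study data.

```python
# Textbook safety stock with uncertain demand and uncertain lead time:
# SS = z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2), where d is mean demand
# per period, L is mean lead time in periods, and z is the service-level
# z-score. Numbers below are illustrative only.
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, d, sigma_d, L, sigma_L):
    z = NormalDist().inv_cdf(service_level)          # service-level z-score
    return z * sqrt(L * sigma_d**2 + d**2 * sigma_L**2)

# e.g. 95% service level, mean demand 20 nurses/shift (sd 4),
# lead time 2 shifts (sd 0.5)
print(round(safety_stock(0.95, 20, 4, 2, 0.5), 1))  # -> 18.9
```

Note how the lead-time variance term d²·σ_L² dominates here; that is the effect the paper's "safety stock over lead time" framing captures.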
Leading effective virtual teams overcoming time and distance to achieve exceptional results
Settle-Murphy, Nancy M
2013-01-01
A proliferation of new technologies has lulled many into thinking that we actually have to think less about how we communicate. In fact, communicating and collaborating across time, distance, and cultures has never been more complex or difficult. Written as a series of bulleted tips drawn from client experiences and best practices, Leading Effective Virtual Teams: Overcoming Time and Distance to Achieve Exceptional Results presents practical tips to help leaders engage and motivate their geographically dispersed project team members. If you're a leader of any type of virtual team and want to he
The M/M/1 queue with inventory, lost sale and general lead times
Saffari, Mohammad; Asmussen, Søren; Haji, Rasoul
We consider an M/M/1 queueing system with inventory under the (r,Q) policy and with lost sales, in which demands occur according to a Poisson process and service times are exponentially distributed. All customers arriving during a stockout are lost. We derive the stationary distributions of the joint queue length (number of customers in the system) and on-hand inventory when lead times are random variables that can take various distributions. The derived stationary distributions are used to formulate long-run average performance measures and cost functions in some numerical examples.
Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W
2016-04-01
We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing-data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring, and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.
Analog filtering methods improve leading edge timing performance of multiplexed SiPMs
Bieniosek, M. F.; Cates, J. W.; Grant, A. M.; Levin, C. S.
2016-08-01
Multiplexing many SiPMs to a single readout channel is an attractive option to reduce the readout complexity of high performance time-of-flight (TOF) PET systems. However, the additional dark counts and shaping from each SiPM cause significant baseline fluctuations in the output waveform, degrading timing measurements using a leading edge threshold. This work proposes the use of a simple analog filtering network to reduce the baseline fluctuations in highly multiplexed SiPM readouts. With 16 SiPMs multiplexed, the FWHM coincident timing resolution for single 3 mm × 3 mm × 20 mm LYSO crystals was improved from 401 ± 4 ps without filtering to 248 ± 5 ps with filtering. With 4 SiPMs multiplexed, using an array of 3 mm × 3 mm × 20 mm LFS crystals, the mean time resolution was improved from 436 ± 6 ps to 249 ± 2 ps. Position information was acquired with a novel binary positioning network. All experiments were performed at room temperature with no active temperature regulation. These results show a promising technique for the construction of high performance multiplexed TOF PET readout systems using analog leading edge timing pickoff.
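The leading-edge timing pickoff itself is simple to illustrate. A minimal sketch with a synthetic sampled pulse (the analog filtering network and SiPM specifics above are not modeled): find the first threshold crossing and interpolate linearly between samples.

```python
# Leading-edge timing pickoff: find where a sampled pulse first crosses
# a fixed threshold, with linear interpolation between samples.
# The triangular test pulse is synthetic.

def leading_edge_time(samples, dt, threshold):
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            # linear interpolation within the sample interval
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None  # no crossing found

pulse = [0.0, 0.0, 0.2, 0.6, 1.0, 0.7, 0.3, 0.0]        # arbitrary units
print(leading_edge_time(pulse, dt=0.1, threshold=0.5))   # crossing between samples 2 and 3
```

Baseline fluctuations shift the early samples up and down, which moves the interpolated crossing time; that is the degradation the paper's filtering network suppresses.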
Katrin Vogt
Animals need to associate different environmental stimuli with each other regardless of whether they temporally overlap. Drosophila melanogaster displays olfactory trace conditioning, where an odor is followed by electric shock reinforcement after a temporal gap, leading to conditioned odor avoidance. Reversing the stimulus timing in olfactory conditioning reverses the memory valence, such that an odor that follows shock is later approached (i.e., relief conditioning). Here, we explored the effects of stimulus timing on memory in another sensory modality, using a visual conditioning paradigm. We found that flies form visual memories of opposite valence depending on stimulus timing and can associate a visual stimulus with reinforcement despite a temporal gap. These results suggest that associative memories with non-overlapping stimuli, and the effect of stimulus timing on memory valence, are shared across sensory modalities.
Structural and time resolved emission spectra of Er 3+: Silver lead borate glass
Coelho, João; Hungerford, Graham; Hussain, N. Sooraj
2011-08-01
The structural properties of Er³⁺: silver lead borate glass are assessed by means of SEM, X-ray mapping, EDS, and Raman analysis. To verify the time dependency of the emission spectra, steady-state luminescence spectroscopy (SSLS) and time-resolved emission spectroscopy (TRES) studies were performed. The stimulated emission cross-sections for the NIR emission transition ⁴I₁₃/₂ → ⁴I₁₅/₂ (1535 nm) under 970 nm excitation are reported. The decay times were obtained by fitting one (τm = 0.301 ms) and two (τm1 = 0.141 ms, τm2 = 0.368 ms) distributions for the NIR transition. Furthermore, TRES measurements yielded decay-associated spectra, allowing the time dependency of the different emission bands to be elucidated.
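Extracting a decay time such as the τm = 0.301 ms quoted above is, for a single-component decay, a log-linear fit. A minimal sketch on a noiseless synthetic trace (illustrative sampling grid, not the measured data):

```python
# Recover a luminescence decay time by least-squares on the log-linear
# model ln I(t) = ln I0 - t/tau; the synthetic trace uses the
# single-component tau = 0.301 ms reported above.
from math import exp, log

tau_true = 0.301                          # ms
ts = [0.02 * k for k in range(50)]        # 0 .. 0.98 ms sampling grid
ys = [exp(-t / tau_true) for t in ts]     # noiseless synthetic decay

# least-squares slope of ln(y) versus t
n = len(ts)
xbar = sum(ts) / n
ybar = sum(log(y) for y in ys) / n
slope = sum((t - xbar) * (log(y) - ybar) for t, y in zip(ts, ys)) / \
        sum((t - xbar) ** 2 for t in ts)
tau_fit = -1.0 / slope
print(round(tau_fit, 3))  # -> 0.301
```

A two-component decay like the (τm1, τm2) pair above needs a nonlinear fit instead, since the log of a sum of exponentials is not linear in t.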
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and the heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio, and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...
FENG Guolin; DONG Wenjie; GAO Hongxing
2005-01-01
The time-dependent solution of a reduced air-sea coupled stochastic-dynamic model is obtained accurately by using the Fokker-Planck equation and quantum mechanical methods. Analysis of the time-dependent solution suggests that when the climate system is in the ground state, its behavior appears to be Brownian motion, supporting the foundation of Hasselmann's stochastic climate model; when the system is in the first excited state, its motion exhibits a form of time decay, or under certain conditions a periodic oscillation with a main period of 2.3 yr. Finally, the results are used to discuss the impact of a doubling of carbon dioxide on climate.
2013-08-01
[Fragmented table: medication maximum half-life and dosing-interval data used in CAMI return-to-duty calculations; recoverable rows include Codeine (4.0, 24, 6.0, 8.0, 2.0, 15, 3.6) and Morphine (7.0, 24...)] ... return-to-duty time, even for individuals on the extreme metabolic margins of the general population. The variation in t½ (calculated by the CAMI
Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
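The maximum-entropy step behind the exponential forms can be made explicit. A standard derivation in generic notation (not the paper's symbols): maximizing the Shannon entropy on the positive half-line subject to a fixed mean yields the exponential distribution.

```latex
% Maximize the Shannon entropy subject to normalization and a fixed mean:
\max_{p}\; -\int_0^{\infty} p(u)\ln p(u)\,du
\quad\text{s.t.}\quad
\int_0^{\infty} p(u)\,du = 1,
\qquad
\int_0^{\infty} u\,p(u)\,du = \bar{u}.
% Stationarity of the Lagrangian gives
-\ln p(u) - 1 - \lambda_0 - \lambda_1 u = 0
\;\Longrightarrow\;
p(u) = e^{-1-\lambda_0}\, e^{-\lambda_1 u},
% and the two constraints fix the multipliers, leaving
p(u) = \frac{1}{\bar{u}}\, e^{-u/\bar{u}}.
```

With an additional constraint on the mean absolute value of a variable on the whole real line, the same argument produces the zero-mean Laplace form cited for accelerations.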
Shao, Liyang; Zhang, Lianjun; Zhen, Zhen
2017-01-01
Children's blood lead concentrations have been closely monitored over the last two decades in the United States. The bio-monitoring surveillance data collected by local agencies reflect the local temporal trends of children's blood lead levels (BLLs). However, analysis and modeling of long-term time series of BLLs have rarely been reported. We attempted to quantify the long-term trends of children's BLLs in the city of Syracuse, New York and evaluate the impacts of local lead poisoning prevention programs and the Lead Hazard Control Program on reducing children's BLLs. We applied interrupted time series analysis to the monthly time series of BLL surveillance data and used ARMA (autoregressive and moving average) models to measure the average blood lead level shift and detect changes in the seasonal pattern. Our results showed that there were three intervention stages over the past 20 years to reduce children's BLLs in Syracuse, NY. The average of children's BLLs decreased significantly after the interventions, declining from 8.77 μg/dL to 3.94 μg/dL from 1992 to 2011. The seasonal variation diminished over the past decade, though more short-term influences appeared in the variation. The lead hazard control treatment intervention proved effective in reducing children's blood lead levels in Syracuse, NY, and the reduction of the seasonal variation of children's BLLs reflected the impacts of the local lead-based paint mitigation program. Window and door replacement was the major cost of house lead abatement; however, soil lead was not considered a major source of lead hazard in our analysis.
Shaolin Ji
2012-01-01
We study the optimal control problem of a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition of the stochastic optimal control, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.
Liu, Weidong
2009-01-01
In this paper, Cramér-type moderate deviations for the maximum of the periodogram and its studentized version are derived. The results are then applied to a simultaneous testing problem in gene expression time series. It is shown that the level of the simultaneous tests is accurate provided that the number of genes $G$ and the sample size $n$ satisfy $G=\exp(o(n^{1/3}))$.
D. RAMA REDDY
2012-07-01
This paper describes the stability regions of PID (Proportional + Integral + Derivative) controllers and a new PID with series leading correction (SLC) for networked control systems with time delay. The new PID controller has a tuning parameter β, and the relation between β, KP, KI, and KD is derived. The effect of plant parameters on the stability regions of PID and SLC-PID controllers in first-order and second-order systems with time delay is also studied. Finally, an open-loop zero is inserted into the unstable second-order plant with time delay so that the stability regions of the PID and SLC-PID controllers are effectively enlarged. The total system is implemented using MATLAB/Simulink.
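As background to the stability-region discussion, a discrete positional PID controller is only a few lines. A minimal sketch; the gains, sampling period, and first-order plant below are illustrative, not the paper's SLC-PID tuning or its delayed plants.

```python
# Minimal discrete positional PID controller with sampling period ts.
# Gains and plant are illustrative only (no SLC term, no time delay).

class PID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.ts                  # accumulate integral term
        deriv = (err - self.prev_err) / self.ts         # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a first-order plant x' = -x + u toward setpoint 1 (Euler steps).
pid, x, ts = PID(2.0, 1.0, 0.1, 0.01), 0.0, 0.01
for _ in range(2000):
    u = pid.step(1.0 - x)
    x += ts * (-x + u)
print(round(x, 2))  # settles near the setpoint 1.0
```

Shrinking the delay margin (e.g. inserting a dead time into the plant) is what carves out the bounded stability regions in the (KP, KI, KD) space that the paper maps.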
Kodner Robin B
2010-10-01
Background: Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It can also inform the user of the positional uncertainty for query sequences by calculating the expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent the number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
PRODUCTION INVENTORY SYSTEM WITH RANDOM SUPPLY INTERRUPTIONS STATUE AND RANDOM LEAD TIMES
Hou Yumei; Liu Wenyuan; Zhang Qiang; Wu Fengqing
2011-01-01
This article analyzes a continuous-review inventory system with random supply interruptions and random lead times, where a lead time may be interrupted by a random number of the supplier's OFF periods. The inventory, with a constant demand rate, is managed by an (r; q1, q2,..., qm) policy and supplied by an unreliable sole supplier. Using renewal theory and the matrix-geometric method, the long-run average cost function is obtained and some important properties of the function are proved. Furthermore, performance measures of the inventory system are derived.
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
New real-time heartbeat detection method using the angle of a single-lead electrocardiogram.
Song, Mi-Hye; Cho, Sung-Pil; Kim, Wonky; Lee, Kyoung-Joung
2015-04-01
This study presents a new real-time heartbeat detection algorithm using the geometric angle between two consecutive samples of single-lead electrocardiogram (ECG) signals. The angle is adopted as a new index representing the slope of the ECG signal. The method consists of three steps: elimination of high-frequency noise, calculation of the angle of the ECG signal, and detection of R-waves using a simple adaptive thresholding technique. The MIT-BIH arrhythmia database, QT database, European ST-T database, T-wave alternans database, and synthesized ECG signals were used to evaluate the performance of the proposed algorithm and compare it with the results of other methods suggested in the literature. The proposed method shows a high detection rate on the four databases: 99.95% sensitivity, 99.95% positive predictivity, and a 0.10% failed-detection rate. The results show that the proposed method can yield performance better than or comparable to that of other methods in the literature despite its relatively simple process. The proposed algorithm needs only a single-lead ECG and involves a simple and quick calculation. Moreover, it does not require post-processing to enhance the detection. Thus, it can be effectively applied to various real-time healthcare and medical devices.
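The angle index at the heart of the method can be sketched as follows. The synthetic signal, fixed threshold, and refractory period below are illustrative assumptions; the published algorithm additionally uses noise elimination and adaptive thresholding.

```python
# Sketch of the angle idea: treat each pair of consecutive samples as a
# vector (dt, dy) and use its angle atan2(dy, dt) as a slope index; flag
# R-wave candidates where the angle exceeds a threshold, with a
# refractory period. Signal, threshold, and refractory period are
# illustrative, not the published method's tuned values.
from math import atan2, exp, pi, sin

fs = 250.0                                   # sampling rate, Hz
t = [k / fs for k in range(int(2 * fs))]     # 2 s of synthetic signal
# slow baseline wander plus two sharp "R waves" at 0.5 s and 1.5 s
sig = [0.1 * sin(2 * pi * 0.5 * tk)
       + exp(-((tk - 0.5) / 0.008) ** 2)
       + exp(-((tk - 1.5) / 0.008) ** 2) for tk in t]

# angle of each sample-to-sample step
angles = [atan2(sig[i] - sig[i - 1], 1 / fs) for i in range(1, len(sig))]

beats, last = [], -1.0
for i, a in enumerate(angles):
    if a > 1.4 and t[i] - last > 0.3:        # angle threshold + 300 ms refractory
        beats.append(t[i])
        last = t[i]
print(len(beats))  # -> 2
```

Because the angle saturates near ±π/2 for steep slopes, it separates sharp R-wave upstrokes from slow baseline drift more cleanly than the raw amplitude difference does.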
Ronzhin, A., E-mail: ronzhin@fnal.gov [Fermilab, Batavia, Il 60510 (United States); Los, S.; Ramberg, E. [Fermilab, Batavia, Il 60510 (United States); Apresyan, A.; Xie, S.; Spiropulu, M. [California Institute of Technology, Pasadena, CA 91126 (United States); Kim, H. [University of Chicago, Chicago, Il 60637 (United States)
2015-09-21
We continue the study of the micro-channel plate photomultiplier (MCP-PMT) as the active element of a shower maximum (SM) detector. We present test beam results obtained with Photek 240 and Photonis XP85011 MCP-PMT devices. For proton beams, we obtained a time resolution of 9.6 ps, a significant improvement over past results using the same time-of-flight system. For electron beams, the time resolution obtained for this new type of SM detector is measured to be at the level of 13 ps when the Photek 240 is used as the active element of the SM. Using the Photonis XP85011 MCP-PMT as the active element of the SM, we performed time resolution measurements with pixel readout and achieved a time resolution better than 30 ps. The pixel readout was observed to improve the time resolution compared to the case where the individual channels were summed.
Luciana Torres Correia de Mello; Hugo Carlos Mansano Dornfeld; Givaldo Guilherme dos Santos; Débora Passos; Rafael Ribeiro; Moacir Godinho Filho
2016-01-01
.... The aim of this study was to analyze the lead time of logistics processes of a flower retailer network with headquarters in Campinas/SP using the tool Manufacturing Critical-path Time of the Quick...
Morten B. S. Svendsen
2016-10-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1, but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1), although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
Svendsen, Morten B. S.; Domenici, Paolo; Marras, Stefano; Krause, Jens; Boswell, Kevin M.; Rodriguez-Pinto, Ivan; Wilson, Alexander D. M.; Kurvers, Ralf H. J. M.; Viblanc, Paul E.; Finger, Jean S.; Steffensen, John F.
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues. PMID: 27543056
Kotb A.E.H.M. Kotb
2011-01-01
Problem statement: In this study, we provide a simple method to determine the inventory policy of a probabilistic single-item Economic Order Quantity (EOQ) model with varying order cost and zero lead time. The model is constrained by the expected holding cost and the expected available limited storage space. Approach: The annual expected total cost is composed of three components: expected purchase cost, expected ordering cost and expected holding cost. The problem is then solved using a modified Geometric Programming (GP) method. Results: The annual expected total cost is used to determine the optimal solutions: the number of periods, the maximum inventory level and the minimum expected total cost per period. A classical model is derived and a numerical example is solved to confirm the model. Conclusion/Recommendations: The results indicated that the total cost decreased with changes in the optimal solutions. Possible future extensions of this model include a continuously decreasing ordering cost as a function of the number of periods and introducing the expected annual demand rate as a decision variable.
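For reference, the deterministic special case that the paper's probabilistic model generalizes is the classical EOQ formula. A minimal sketch, with hypothetical parameter values:

```python
import math

def classical_eoq(demand_rate, order_cost, holding_cost):
    """Classical EOQ: Q* = sqrt(2*D*K / h).

    D: annual demand rate, K: fixed cost per order, h: holding cost per
    unit per year. The paper's probabilistic, storage-constrained model
    reduces to this deterministic case; the GP solution itself is not
    reproduced here.
    """
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)
```

For example, with a hypothetical demand of 1000 units/year, $50 per order, and $2/unit/year holding cost, Q* ≈ 224 units.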
Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio
2015-12-21
A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The developed algorithm allows the analysis of complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays both of multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy greater than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies that are lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of the flavin adenine dinucleotide (FAD).
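The zero-time-shift recovery can be illustrated as a one-dimensional minimization of the fit residual over the shift. The sketch below substitutes a golden-section search for the Brent method used in the paper, and fits a hypothetical single-exponential decay rather than a MEM-reconstructed lifetime distribution:

```python
import math

def golden_section_min(f, a, b, tol=1e-7):
    """1-D unimodal minimizer on [a, b]; stands in for Brent's method."""
    gr = (5 ** 0.5 - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

def residual(shift, times, data, tau=2.0):
    """Squared residual of data against a decay exp(-(t - shift)/tau).
    tau (lifetime, in the same units as t) is an illustrative value."""
    return sum((y - math.exp(-(t - shift) / tau)) ** 2
               for t, y in zip(times, data))
```

Minimizing `residual` over the shift recovers the zero-time offset of a synthetic decay.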
Random Lead Time of the acute ghrelin response to a psychological stress
Geetha.T
2014-12-01
Ghrelin is a growth hormone and cortisol secretagogue that plays an important role in appetite and weight regulation. It is not known whether ghrelin is involved in the eating response to stress in humans. In the present study we examined the effects of psychologically induced stress on plasma ghrelin levels in patients with binge-eating disorder (BED) and in healthy subjects of normal or increased body mass index (BMI). Volunteers were subjected to the standardized Trier Social Stress Test (TSST). Basal ghrelin levels in patients were at an intermediate level between those of thin and healthy obese subjects, but this difference did not attain statistical significance. There were no differences in ghrelin levels throughout the test among the groups after correction for BMI, age and gender. A significant difference in the time trend of ghrelin was revealed when the three groups were analyzed according to their cortisol response to stress. Ghrelin levels increased in cortisol responders, whereas no change or a decrease in ghrelin levels occurred in cortisol non-responders. We also found the optimal time T*, minimal repair δ and random lead time g to minimize the ghrelin level.
邵懿; 王君; 吴永宁
2014-01-01
Objective: To explore the extent to which China's limits for lead in food coincide with international standards, and to provide evidence and reference for improving the Maximum Levels (MLs) of Contaminants in Foods standard. Methods: Food categories and concentration limits for lead in China were compared with those of the Codex Alimentarius Commission (CAC), the European Union, and Australia and New Zealand. Results: Considering the international risk assessment results for lead, China has set MLs for almost all foods that contribute to dietary lead exposure, so the food categories covered for lead in China are more numerous than those in the CAC, European Union, and Australia and New Zealand standards. However, some MLs for lead in China are still looser than those in the CAC or other national standards. Conclusion: Measures to control the major contributing sources of lead in food should be taken, and a comprehensive national survey of lead contamination in food conducted, in order to lay the foundation for further improvement of China's food contaminant standards.
Long lead-time flood forecasting using data-driven modeling approaches
Bhatia, N.; He, J.; Srivastav, R. K.
2014-12-01
In spite of the numerous structural measures taken against floods, accurate flood forecasting is essential to considerably reduce damages in hazardous areas. The need to produce more accurate flow forecasts motivates researchers to develop advanced, innovative methods. In this study, we propose a hybrid neural network model that exploits the strengths of artificial neural networks (ANNs). The proposed model has two components: (i) a dual-ANN model developed using river flows; and (ii) a Multiple Linear Regression (MLR) model trained on meteorological data (rainfall and snow on ground). Potential model inputs that best represent the river basin processes were selected in a stepwise manner by identifying the input-output relationship using a linear approach, Partial Correlation Input Selection (PCIS), combined with the Akaike Information Criterion (AIC). The presented hybrid model was compared with three conventional methods: (i) a feed-forward artificial neural network (FF-ANN) using daily river flows; (ii) an FF-ANN applied to decomposed river flows (low flow, rising limb and falling limb of the hydrograph); and (iii) a recursive method for daily river flows with a lead time of 7 days. The applicability of the presented model is illustrated with daily river flow data of the Bow River, Canada. Data from 1912 to 1976 were used to train the models, while data from 1977 to 2006 were used to validate them. The results of the study indicate that the proposed model is robust enough to capture the non-linear nature of the hydrograph and is highly promising for forecasting peak flows (extreme values) well in advance (at higher lead times).
Johansen, Søren Glud; Thorstenson, Anders
We show that well-known textbook formulae for determining the optimal base stock of the inventory system with continuous review and constant lead time can easily be extended to the case with periodic review and stochastic, sequential lead times. The provided performance measures and conditions...
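One of the textbook rules this line of work builds on, the critical-fractile base-stock formula, can be sketched for Poisson lead-time demand. The Poisson assumption and the cost values below are illustrative, not taken from the paper:

```python
import math

def poisson_cdf(k, mu):
    """P(D <= k) for Poisson lead-time demand with mean mu."""
    return sum(math.exp(-mu) * mu ** i / math.factorial(i)
               for i in range(k + 1))

def base_stock(mu, h, p):
    """Smallest S with F(S) >= p/(p+h): the textbook critical-fractile
    base-stock level, where h is the unit holding cost and p the unit
    penalty (shortage) cost per period."""
    target = p / (p + h)
    s = 0
    while poisson_cdf(s, mu) < target:
        s += 1
    return s
```

For example, with mean lead-time demand 5, holding cost 1, and penalty cost 9, the critical fractile is 0.9 and the optimal base stock is 8 units.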
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Beretta, Gian P.
2008-09-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
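A toy discretization of steepest entropy ascent, with normalization as the only constraint (the paper treats general linear constraints), relaxes any strictly positive distribution toward the uniform maximum-entropy distribution:

```python
import math

def entropy_ascent_step(p, eta=0.05):
    """One Euler step along the entropy gradient, projected so that
    sum(p) is preserved. A toy sketch: only the normalization
    constraint is enforced, and eta is an arbitrary step size."""
    g = [-math.log(pi) - 1.0 for pi in p]   # dS/dp_i for S = -sum p ln p
    lam = sum(g) / len(g)                   # Lagrange term: keep sum(dp) = 0
    return [pi + eta * (gi - lam) for pi, gi in zip(p, g)]
```

Iterating from any positive starting distribution drives the probabilities toward equality, the maximum-entropy state under normalization alone.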
Tiena Gustina Amran
2012-06-01
The purpose of this research is to suggest optimal raw-material inventory system alternatives matched to the stock-out characteristics and conditions that can occur, using backorder raw-material inventory control, while also reducing lead time and raw-material ordering cost. The inventory models extend the (Q, R) inventory model under lead time and ordering cost reduction, treating lead time and ordering cost as reducible. After the calculation, the optimal solutions of the inventory models are obtained under the backorder condition, which yields the company's annual total inventory cost.
Yamamoto Y
2013-10-01
Background: The objective of this study is to provide data on the clinical outcomes, and their predictors, of traditional maximum androgen blockade (MAB) in prostate cancer with bone metastasis. Methods: Subjects were patients with prostate adenocarcinoma with bone metastasis whose treatment with MAB was initiated as a primary treatment, without any local therapy, at our hospital between January 2003 and December 2010. Time to prostate specific antigen (PSA) progression, overall survival (OS) time, and the association of clinical factors with outcomes were retrospectively evaluated. Results: A total of 57 patients were evaluable. The median age was 70 years. The median primary PSA was 203 ng/ml. Luteinizing hormone-releasing hormone agonists had been administered in 96.5% of the patients. Bicalutamide had been chosen in 89.4% of the patients as the initial antiandrogen. The median time to PSA progression with MAB was 11.3 months (95% confidence interval [CI], 10.4 to 13.0). The median OS was 47.3 months (95% CI, 30.7 to 81.0). Gleason score 9 or greater, decline of PSA level equal to or higher than 1.0 ng/ml with MAB, and time to PSA nadir equal to or shorter than six months after initiation of MAB were independent risk factors for time to PSA progression (P=0.010, P=0.005, and P=0.001, respectively). Time to PSA nadir longer than six months was the only independent predictor of longer OS (HR, 0.255 [95% CI, 0.109 to 0.597]; P=0.002). Conclusions: Initial time to PSA nadir should be emphasized in clinical outcome analyses in future studies on prostate cancer with bone metastasis.
Bech, Joan; Berenguer, Marc
2014-05-01
Operational quantitative precipitation forecasts (QPF) are provided routinely by weather services or hydrological authorities, particularly those responsible for densely populated regions of small catchments, such as those typically found in Mediterranean areas prone to flash floods. Specific rainfall values are used as thresholds for issuing warning levels considering different time frameworks (mid-range, short-range, 24 h, 1 h, etc.), for example 100 mm in 24 h or 60 mm in 1 h. There is a clear need to determine how feasible a specific rainfall value is for a given lead time, in particular for very short range forecasts or nowcasts typically obtained from weather radar observations (Pierce et al. 2012). In this study we assess which specific nowcast lead times can be provided for a number of heavy precipitation events (HPE) that affected Catalonia (NE Spain). The nowcasting system we employed generates QPFs through the extrapolation of rainfall fields observed with weather radar, following a Lagrangian approach developed and tested successfully in previous studies (Berenguer et al. 2005, 2011). Then QPFs up to 3 h are compared with two quality-controlled observational data sets: weather radar quantitative precipitation estimates (QPE) and raingauge data. Several high-impact weather HPE were selected, including the 7 September 2005 Llobregat Delta river tornado outbreak (Bech et al. 2007) and the 2 November 2008 supercell tornadic thunderstorms (Bech et al. 2011), both producing, among other effects, local flash floods. In these two events there were torrential rainfall rates (30-min amounts exceeding 38.2 and 12.3 mm, respectively) and 24 h accumulation values above 100 mm. A number of verification scores are used to characterize the evolution of precipitation forecast quality with time, which typically presents a decreasing trend but shows a strong dependence on the selected rainfall threshold and integration period. For example considering correlation factors, 30
Hua-Ming Song
2011-01-01
This paper investigates the ordering decisions and coordination mechanism for a distributed short-life-cycle supply chain. The objective is to maximize the whole supply chain's expected profit while making the supply chain participants achieve a Pareto improvement. We treat lead time as a controllable variable, so the demand forecast depends on lead time: the shorter the lead time, the better the forecast. Moreover, optimal decision-making models for lead time and order quantity are formulated and compared in the decentralized and centralized cases. Besides, a three-parameter contract is proposed to coordinate the supply chain and alleviate double marginalization in the decentralized scenario. In addition, based on the analysis of the models, we develop an algorithmic procedure to find the optimal ordering decisions. Finally, a numerical example is presented to illustrate the results.
Kalra, Ajay; Ahmad, Sajjad; Nayak, Anurag
2013-03-01
This study focuses on improving the spring-summer streamflow forecast lead time using large scale climate patterns. An artificial intelligence type data-driven model, Support Vector Machine (SVM), was developed incorporating oceanic-atmospheric oscillations to increase the forecast lead time. The application of SVM model is tested on three unimpaired gages in the North Platte River Basin. Seasonal averages of oceanic-atmospheric indices for the period of 1940-2007 are used to generate spring-summer streamflow volumes with 3-, 6- and 9-month lead times. The results reveal a strong association between coupled indices compared to their individual effects. The best streamflow estimates are obtained at 6-month compared to 3-month and 9-month lead times. The proposed modeling technique is expected to provide useful information to water managers and help in better managing the water resources and the operation of water systems.
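The lead-time framing, pairing seasonal climate-index values with streamflow observed several steps later, can be sketched generically. The SVM itself and the paper's actual predictor choices are not reproduced here; `make_lagged_dataset` is an illustrative helper, not from the study:

```python
def make_lagged_dataset(index_series, flow_series, lead):
    """Pair each climate-index observation with the streamflow value
    `lead` time steps ahead, producing (X, y) training pairs for a
    lead-time forecast model such as an SVM."""
    X, y = [], []
    for t in range(len(index_series) - lead):
        X.append(index_series[t])
        y.append(flow_series[t + lead])
    return X, y
```

Increasing `lead` shortens the usable record but lengthens the forecast horizon, which is the trade-off the 3-, 6- and 9-month experiments explore.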
[Lead exposure in the ceramic tile industry: time trends and current exposure levels].
Candela, S; Ferri, F; Olmi, M
1998-01-01
There is a high density of industries for the production of ceramic tiles in the District of Scandiano (province of Reggio Emilia, Emilia Romagna region). In this area, since the beginning of 1970s, the time trend of Pb exposure in ceramic tile plants has been evaluated by means of biological monitoring (BM) data collected at the Service of Prevention and Safety in the Work Environment and its associated Toxicology Laboratory. From these data, a clear decreasing time trend of exposure levels is documented, the reduction being more evident during the seventies and in 1985-88. During the seventies BM was introduced systematically in all ceramic tile plants with the determination of delta-aminolevulinic acid in urine (ALA-U). As a consequence of the BM programme, hygienic measures for the abatement of pollution inside the plants were implemented, and a reduction, from 20.6% to 2%, of ALA-U values exceeding 10 mg/l, was observed. In 1985, the determination of lead in blood (PbB) replaced that of ALA-U in the BM programmes and highlighted the persistence of high level of exposure to Pb, which could not be outlined by means of ALA-U because of its lower sensitivity. PbB levels were 36.1 micrograms/100 ml and 25.7 micrograms/100 ml in male and female workers, respectively. These results required the implementation, within the plants, of additional hygienic measures and a significant reduction of PbB was obtained in the following three years. In 1988 PbB levels were 26.0 +/- 10.7 and 21.6 +/- 10.3 micrograms/100 ml in male and female workers, respectively. In 1993-95 Pb levels were obtained from 1328 male and 771 female workers of 56 plants, accounting for about 40% of the total number of workers in the ceramic industry, in the zones of Sassuolo and Scandiano. Exposure levels are not different from those observed in the preceding years, with PbB levels of 25.3 +/- 11.1 and 19.1 +/- 9.2 micrograms/100 ml in male and female workers, respectively.
Han Shu-Xian
2015-01-01
Lead time compression is at the core of time-based supply chain management and a powerful source of supply chain competitive advantage. As an emerging technology, the Internet of Things is a vast network that combines various information-sensing devices (such as RFID, infrared sensors, global positioning systems, and communication devices) with the internet, so it can improve information sharing, cut logistics operation times, and reduce lead time. Based on the assumption that market demand forecast accuracy varies with lead time, this paper establishes an inventory model of the remanufacturing/manufacturing system and gives optimization algorithms for this model. Finally, the conclusion is validated through a numerical example, demonstrating the practicability of the model in practice.
Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY12 Status Report
Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Siciliano, Edward R.; Warren, Glen A.
2012-09-28
Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory methods. This document is a progress report for FY2012 PNNL analysis and algorithm development. Progress made by PNNL in FY2012 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel assemblies. PNNL further refined the semi-empirical model developed in FY2011, based on singular value decomposition (SVD), to numerically account for the effects of self-shielding. The average uncertainty in the Pu mass across the NGSI-64 fuel assemblies was shown to be less than 3% using only six calibration assemblies with a 2% uncertainty in the isotopic masses. When calibrated against the six NGSI-64 fuel assemblies, the algorithm was able to determine the total Pu mass to within <2% uncertainty for the 27 diversion cases also developed under NGSI. Two purely empirical algorithms were developed that do not require the use of Pu isotopic fission chambers. The semi-empirical and purely empirical algorithms were successfully tested using MCNPX simulations as well as applied to experimental data measured by RPI using their LSDS. The algorithms were able to describe the 235U masses of the RPI measurements with an average uncertainty of 2.3%. Analyses were conducted that provided valuable insight with regard to design requirements (e
Stephen Okyere
2015-07-01
Customers are increasingly attracted to quality service delivery and become impatient and dissatisfied when they are delayed or have to wait longer before being served. Hence, quality service delivery is of utmost importance to every service organisation, especially in the financial industry. Most financial institutions focus attention on product innovation at the expense of lead time management, which is a major factor in ensuring service quality and customer satisfaction. Consequently, this research evaluates the effect of lead time on quality service delivery in the banking industry in the Kumasi Metropolis of Ghana. The study relied on primary data collected through questionnaires, observation, and interviews administered to staff and customers of selected branches of a commercial bank in the study area. The data were analysed qualitatively. The researchers found that, despite the immense importance of lead time to quality service delivery, little attention is given to the concept. It was revealed that customers were dissatisfied with the commercial bank's services as a result of unnecessary delays and queuing at the bank premises. The long lead time was found to be attributable to plant/system failure, skill gaps among employees, and ATM underutilisation and frequent breakdowns, among others. This has resulted in long lead times, waiting, queuing, and unnecessary delays in the banking hall. It is recommended that tellers be provided with electronic card readers for verification of customers' data so that processing is faster.
Chiba Shigeru
2007-09-01
Background: Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods: An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness using a subjective score and the physiological index ρmax, defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. Results: The results showed adaptation to visually induced motion sickness over the repeated presentations of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion: The physiological index ρmax is a good index for assessing the adaptation process to visually induced motion sickness and may be useful for checking the safety of rehabilitation systems using new image technologies.
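The index ρmax, a maximum cross-correlation coefficient over a window of lags, can be sketched as below. The lag window and the test series are illustrative; in the study the two series are heart rate and pulse wave transmission time:

```python
def rho_max(x, y, max_lag=10):
    """Maximum Pearson cross-correlation between two equal-length
    series over lags in [-max_lag, max_lag] (a sketch of the rho-max
    index; windowing details in the paper may differ)."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        va = sum((ai - ma) ** 2 for ai in a) ** 0.5
        vb = sum((bi - mb) ** 2 for bi in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:lag], y[-lag:]
        if len(a) > 2:
            best = max(best, corr(a, b))
    return best
```

Two copies of the same periodic signal, offset by a lag inside the window, yield a ρmax near 1.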
Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s(-1) but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish...
Time dependence of corrosion in steels for use in lead-alloy cooled reactors
Machut, McLean [Department of Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)], E-mail: mtmachut@wisc.edu; Sridharan, Kumar [Department of Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Li Ning [Materials Physics and Application Division, AFCI, Los Alamos National Laboratory, NM (United States); Ukai, Shigeharu [Division of Materials Science and Engineering, Hokkaido University (Japan); Allen, Todd [Department of Nuclear Engineering and Engineering Physics, University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)
2007-09-15
Stability of the protective oxide layer is critical for the long-term performance of cladding and structural components in lead-alloy cooled nuclear systems. Measurements have shown that removal of the outer magnetite layer is a significant effect at higher temperatures in flowing lead-bismuth. Developing a predictive capability for oxide thickness and material removal is therefore needed. A model for the corrosion of steels in liquid lead-alloys has been employed to assist in materials development for application in the Generation IV Lead-cooled Fast Reactor (LFR). Data from corrosion tests of steels in Los Alamos National Laboratory's DELTA Loop are used to benchmark the model and to obtain predictions of long-term material corrosion performance. The model is based on modifications of Wagner's diffusion-based oxidation theory and Tedmon's equation for high-temperature oxidation with scale removal. Theoretically and experimentally obtained values for the parabolic oxide growth rate, mass transfer corrosion rate, and long-term material thinning rates are presented and compared to the literature.
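The Tedmon-type balance of parabolic oxide growth against scale removal can be sketched numerically. The rate constants below are hypothetical illustrations, not the DELTA Loop benchmark values:

```python
def oxide_thickness(kp, ks, x0, dt, steps):
    """Euler integration of a Tedmon-type rate law:

        dx/dt = kp/(2x) - ks

    i.e. parabolic oxide growth (rate constant kp) limited by a constant
    scale-removal rate ks. The steady-state thickness is x* = kp/(2*ks).
    All parameter values here are illustrative, not from the paper.
    """
    x = x0
    for _ in range(steps):
        x += dt * (kp / (2.0 * x) - ks)
    return x
```

Whatever the starting thickness, the layer relaxes toward the steady state where growth and removal balance, which is the regime relevant to long-term thinning predictions.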
Taylor, Peter James; Gooding, Patricia A.; Wood, Alex M.; Johnson, Judith; Tarrier, Nicholas
2011-01-01
Theoretical perspectives into suicidality have suggested that heightened perceptions of defeat and entrapment lead to suicidality. However, all previous empirical work has been cross-sectional. We provide the first longitudinal test of the theoretical predictions, in a sample of 79 students who reported suicidality. Participants completed…
Jensen, Henry; Vedsted, Peter
2017-01-01
BACKGROUND: Implementation of standardised cancer patient pathways (CPPs) has provided faster diagnosis of cancer. Cancer survival has improved during the same time period. Concern has been raised that the faster diagnosis may have introduced lead-time bias by elongating the period from diagnosis...
Simulation of blast-induced, early-time intracranial wave physics leading to traumatic brain injury.
Taylor, Paul Allen; Ford, Corey C. (University of New Mexico, Albuquerque, NM)
2008-04-01
U.S. soldiers are surviving blasts and impacts due to effective body armor, trauma evacuation and care. Blast injuries are the leading cause of traumatic brain injury (TBI) in military personnel returning from combat. An understanding of primary blast injury is needed to develop better blast mitigation strategies. The objective of this paper is to investigate the effects of blast direction and strength on the resulting mechanical stress and wave energy distributions generated in the brain.
Time-varying interaction leads to amplitude death in coupled nonlinear oscillators
Awadhesh Prasad
2013-09-01
A new form of time-varying interaction in coupled oscillators is introduced. In this interaction, each individual oscillator has always time-independent self-feedback while its interaction with other oscillators are modulated with time-varying function. This interaction gives rise to a phenomenon called amplitude death even in diffusively coupled identical oscillators. The nonlinear variation of the locus of bifurcation point is shown. Results are illustrated with Landau–Stuart (LS) and Rössler oscillators.
A Comparison of the Laplace Distribution with an Empirical Model of D062 Demand in Lead Time.
1981-09-01
D062) are based on formulas originally developed by Presutti and Trepp (1970). These authors consider the problem of determining order quantities and ... Presutti and Trepp ... assume that demand in a lead time is normally distributed. However, they then utilize the Laplace distribution ... i.e., k denotes the number of standard deviations by which the demand value x exceeds the expected demand in a lead time. Given (1), Presutti and Trepp
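The practical difference between the normal and Laplace demand assumptions shows up in the safety factor k. The sketch below is an illustration of that contrast only, not a reproduction of the D062 or Presutti-Trepp formulas; it uses the standard facts that a Laplace variable with scale b has standard deviation b·sqrt(2) and upper-tail quantile mu - b·ln(2·(1 - p)).

```python
from math import log, sqrt
from statistics import NormalDist

def k_normal(service_level):
    """Safety factor k (std devs above mean lead-time demand) for a target
    service level, assuming normally distributed lead-time demand."""
    return NormalDist().inv_cdf(service_level)

def k_laplace(service_level):
    """Same safety factor assuming Laplace lead-time demand.
    For Laplace(mu, b): std dev = b*sqrt(2), upper quantile = mu - b*ln(2*(1-p)),
    hence k = -ln(2*(1-p)) / sqrt(2) for p >= 0.5."""
    return -log(2 * (1 - service_level)) / sqrt(2)

# The Laplace's heavier tails require a larger k at very high service levels,
# but a smaller k at moderate ones:
for p in (0.90, 0.95, 0.99):
    print(p, round(k_normal(p), 3), round(k_laplace(p), 3))
```

The crossover near p ≈ 0.95 is one reason the choice of demand distribution matters for the safety-stock levels the study compares.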
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
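The abstract's point, that a single long series (T>N) suffices to recover ARMA parameters, can be illustrated for the AR(1) special case with a simple moment estimator. This is a hedged sketch, not the authors' SEM-based raw maximum likelihood procedure: the lag-1 autocorrelation is used as an estimator of the autoregressive coefficient phi.

```python
import random

def simulate_ar1(phi, T, seed=0):
    """Simulate T observations of x[t] = phi*x[t-1] + e[t], e ~ N(0,1)."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1)]
    for _ in range(T - 1):
        x.append(phi * x[-1] + rng.gauss(0, 1))
    return x

def estimate_phi(x):
    """Moment estimator of phi: the lag-1 sample autocorrelation."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

series = simulate_ar1(phi=0.6, T=5000)
phi_hat = estimate_phi(series)
```

With T = 5000 observations of a single simulated subject, the estimate lands close to the true phi = 0.6, consistent with the paper's message that long single-subject series are informative.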
Real-Time Observation of Organic Cation Reorientation in Methylammonium Lead Iodide Perovskites
2015-01-01
This document is the Accepted Manuscript version of a Published Work that appeared in final form in The Journal of Physical Chemistry Letters, copyright © American Chemical Society after peer review and technical editing by the publisher. To access the final edited and published work see http://pubs.acs.org/doi/abs/10.1021/acs.jpclett.5b01555 The introduction of a mobile and polarized organic moiety as a cation in 3D lead-iodide perovskites brings fascinating optoelectronic properties to t...
Novák, Martin; Emmanuel, Simon; Vile, Melanie A; Erel, Yigal; Véron, Alain; Paces, Tomás; Wieder, R Kelman; Vanecek, Mirko; Stepánová, Markéta; Brízová, Eva; Hovorka, Jan
2003-02-01
Lead originating from coal burning, gasoline burning, and ore smelting was identified in 210Pb-dated profiles through eight peat bogs distributed over an area of 60,000 km2. The Sphagnum-dominated bogs were located mainly in mountainous regions of the Czech Republic bordering Germany, Austria, and Poland. Basal peat 14C-dated at 11,000 years BP had a relatively high 206Pb/207Pb ratio (1.193). Peat deposited around 1800 AD had a lower 206Pb/207Pb ratio of 1.168-1.178, indicating that environmental lead in Central Europe had been largely affected by human activity (smelting) even before the beginning of the Industrial Revolution. Five of the sites exhibited a nearly constant 206Pb/207Pb ratio (1.175) throughout the 19th century, resembling the "anthropogenic baseline" described in Northern Europe (1.17). At all sites, the 206Pb/207Pb ratio of peat decreased at least until 1980; at four sites, a reversal to more radiogenic values (higher 206Pb/207Pb), typical of easing pollution, was observed in the following decade (1980-1990). A time series of annual outputs for 14 different mining districts dispersing lead into the environment has been constructed for the past 200 years. The production of Ag-Pb, coal, and leaded gasoline peaked in 1900, 1980, and 1980, respectively. In contrast to other European countries, no peak in annual Pb accumulation rates was found in 1900, the year of maximum ore smelting. The highest annual Pb accumulation rates in peat were consistent with the highest Pb emission rates from coal-fired power plants and traffic (1980). Although maximum coal and gasoline production coincided in time, their isotope ratios were unique. The mean measured 206Pb/207Pb ratios of local coal, ores, and gasoline were 1.19, 1.16, and 1.11, respectively. A considerable proportion of coal emissions, relative to gasoline emissions, was responsible for the higher 206Pb/207Pb ratios in the recent atmosphere (1.15) compared to Western Europe (1.10). As in West European
Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.
2012-01-01
This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident to the USDA Forest Service (USFS) ForWarn System. ForWarn is an Early Warning System (EWS) tool for detection and tracking of regionally evident forest change, which includes the U.S. Forest Change Assessment Viewer (FCAV) (a publicly available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change). NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI, derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed using three different historical baselines: 1) the previous year; 2) the previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum-value NDVI composited across a 24-day interval and refreshed every 8 days, so that the resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands (231.66 meters), which equates to approximately 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise, interpolate data voids, and re-aggregate the historical NDVI into 24-day composites; custom MATLAB scripts are then used to temporally process the eMODIS NDVIs so that they are in sync with the historical NDVI products. Prior to posting, an in-house snow mask classification product
Time-lapse imaging of neural development: zebrafish lead the way into the fourth dimension.
Rieger, Sandra; Wang, Fang; Sagasti, Alvaro
2011-07-01
Time-lapse imaging is often the only way to appreciate fully the many dynamic cell movements critical to neural development. Zebrafish possess many advantages that make them the best vertebrate model organism for live imaging of dynamic development events. This review will discuss technical considerations of time-lapse imaging experiments in zebrafish, describe selected examples of imaging studies in zebrafish that revealed new features or principles of neural development, and consider the promise and challenges of future time-lapse studies of neural development in zebrafish embryos and adults.
Voelkl, Bernhard; Portugal, Steven J; Unsöld, Markus; Usherwood, James R; Wilson, Alan M; Fritz, Johannes
2015-02-17
One conspicuous feature of several larger bird species is their annual migration in V-shaped or echelon formation. When birds are flying in these formations, energy savings can be achieved by using the aerodynamic up-wash produced by the preceding bird. As the leading bird in a formation cannot profit from this up-wash, a social dilemma arises around the question of who is going to fly in front? To investigate how this dilemma is solved, we studied the flight behavior of a flock of juvenile Northern bald ibis (Geronticus eremita) during a human-guided autumn migration. We could show that the amount of time a bird is leading a formation is strongly correlated with the time it can itself profit from flying in the wake of another bird. On the dyadic level, birds match the time they spend in the wake of each other by frequent pairwise switches of the leading position. Taken together, these results suggest that bald ibis cooperate by directly taking turns in leading a formation. On the proximate level, we propose that it is mainly the high number of iterations and the immediacy of reciprocation opportunities that favor direct reciprocation. Finally, we found evidence that the animals' propensity to reciprocate in leading has a substantial influence on the size and cohesion of the flight formations.
Droste, Stephanie; Governale, Michele
2016-04-01
We study the finite-time full counting statistics for subgap transport through a single-level quantum dot tunnel-coupled to one normal and one superconducting lead. In particular, we determine the factorial and the ordinary cumulants both for finite times and in the long-time limit. We find that the factorial cumulants violate the sign criterion, indicating a non-binomial distribution, even in absence of Coulomb repulsion due to the presence of superconducting correlations. At short times the cumulants exhibit oscillations which are a signature of the coherent transfer of Cooper pairs between the dot and the superconductor.
Low flow forecasting with a lead time of 14 days for navigation and energy supply in the Rhine River
Demirel, M.C.; Booij, Martijn J.
2011-01-01
Low flow forecasting, days or even months in advance, is particularly important to the efficient operation of power plants and freight shipment. This study presents a low flow forecasting model with a lead time of 14 days for the Rhine River. The forecasts inherit uncertainty sources mainly because
Kong, D. F.; Qu, Z. N. [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011 (China); Guo, Q. L., E-mail: kdf@ynao.ac.cn [College of Mathematics Physics and Information Engineering, Jiaxing University, Jiaxing 314001 (China)
2014-05-01
Cross-correlation analysis and wavelet transform methods are used to investigate whether high-latitude solar activity leads low-latitude solar activity in time phase or not, using the data of the Carte Synoptique solar filaments archive from 1919 March to 1989 December. From the cross-correlation analysis, high-latitude solar filaments have a time lead of 12 Carrington solar rotations with respect to low-latitude ones. Both the cross-wavelet transform and wavelet coherence indicate that high-latitude solar filaments lead low-latitude ones in time phase. Furthermore, low-latitude solar activity is better correlated with high-latitude solar activity of the previous cycle than with that of the following cycle, which is statistically significant. Thus, the present study confirms that high-latitude solar activity in the polar regions is indeed better correlated with the low-latitude solar activity of the following cycle than with that of the previous cycle, namely, leading in time phase.
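The lead of one activity series over another, measured here in Carrington rotations, is found by locating the lag that maximizes the cross-correlation of the two series. A minimal stdlib-only sketch of that step, with a synthetic series standing in for the filament data:

```python
import math
from statistics import fmean

def best_lag(a, b, max_lag):
    """Return the lag (in samples) at which series `a` best leads series `b`,
    i.e. the lag maximizing the normalized cross-correlation of a[t] with b[t+lag]."""
    def corr(x, y):
        mx, my = fmean(x), fmean(y)
        num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        den = math.sqrt(sum((xi - mx) ** 2 for xi in x)
                        * sum((yi - my) ** 2 for yi in y))
        return num / den
    # positive lag: drop the tail of `a` and the head of `b` so a[t] aligns with b[t+lag]
    scores = {lag: corr(a[:len(a) - lag], b[lag:]) for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

# Synthetic demonstration: `a` is `b` advanced by 5 samples, so `a` leads by 5.
s = [math.sin(0.1 * t) for t in range(200)]
a, b = s[5:], s[:-5]
```

The same maximization over lags, applied to high- and low-latitude filament counts, yields the 12-rotation lead reported above.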
Observing expertise-related actions leads to perfect time flow estimations.
Yin-Hua Chen
Full Text Available The estimation of the time of exposure of a picture portraying an action increases as a function of the amount of movement implied in the action represented. This effect suggests that the perceiver creates an internal embodiment of the action observed, as if internally simulating the entire movement sequence. Little is known, however, about the timing accuracy of these internal action simulations, specifically whether they are affected by the level of familiarity and experience that the observer has of the action. In this study we asked professional pianists to reproduce different durations of exposure (shorter or longer than one second) of visual displays both specific (a hand in piano-playing action) and non-specific to their domain of expertise (a hand in finger-thumb opposition, and scrambled pixels), and compared their performance with that of non-pianists. Pianists outperformed non-pianists independently of the time of exposure of the stimuli; remarkably, the group difference was particularly magnified by the pianists' enhanced accuracy and stability only when observing the hand in the act of playing the piano. These results provide evidence for the first time that, through musical training, pianists create a selective and self-determined dynamic internal representation of an observed movement that allows them to estimate its temporal duration precisely.
Optimal provisioning strategies for slow moving spare parts with small lead times
Teunter, R.H.; Klein Haneveld, W.K.
1997-01-01
When an expensive piece of equipment is bought, spare parts can often be bought at a reduced price. A decision must be made about the initial provisioning of spare parts. Furthermore, if at a certain time the stock drops to zero, because a number of failures have occurred, a decision must be made ab
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Fan, Gong-duan; Chen, Li-ru; Lin, Ru-jing; Lin, Qian; Su, Zhao-yue; Lin, Xiu-yong
2016-02-15
Titanate nanomaterials (TNs) were synthesized via a simple hydrothermal method using TiO2 (ST-01) and NaOH as the raw materials, and presented different morphologies depending on the reaction time. The physico-chemical properties of the as-prepared TNs, such as morphology, structure, surface area, and chemical composition, were characterized by XRD, SEM and BET. The adsorption capacity and behavior of Pb(II) in aqueous solutions were tested in a static system. The results showed that the TNs prepared with 12-72 h reaction time were pure monoclinic-phase titanate and their specific surface areas ranged from 243.05 m2 x g(-1) to 286.20 m2 x g(-1). TNs with reaction times between 12-36 h mainly showed a sheet structure, and those with reaction times of 48 h or more showed a linear structure. The adsorption capacity of Pb(II) by TNs-12, TNs-24, TNs-36, TNs-48, TNs-60 and TNs-72 was 479.40, 504.12, 482.00, 388.10, 364.60 and 399.00 mg x g(-1), respectively. The sheet TNs had a better adsorption capacity than the linear TNs, with TNs-24 having the highest. The adsorption kinetics of Pb(II) by TNs-24 followed the pseudo-second-order model, and the equilibrium data were best fitted by the Langmuir isotherm model. The equilibrium adsorption time of TNs-24 was 120 min, and the adsorption was an exothermic process, with a high adsorption capacity at low or room temperature; the optimal adsorption pH was 5.0. When the pH was 1.0, the desorption rate of TNs-24 could reach 99.00%, and the removal efficiency of Pb(II) by regenerated TNs was still more than 97% after six adsorption-regeneration cycles. Therefore, TNs can efficiently remove Pb(II) from aqueous solutions, and the optimal reaction time should be controlled to 12-24 h. When Cd(II) or Ni(II) was present in the solution, the equilibrium adsorption capacity and removal rate of TNs-24 decreased. The adsorption mechanism was mainly ion exchange between Pb(II) and H+/Na+ in the TNs.
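The pseudo-second-order model referenced above, q(t) = qe²·k·t / (1 + qe·k·t), is conventionally fitted through its linearized form t/q = 1/(k·qe²) + t/qe. A sketch with synthetic data; the rate constant k below is an assumed illustrative value, not one reported in the study:

```python
def fit_pseudo_second_order(times, q):
    """Fit the linearized pseudo-second-order model t/q = 1/(k*qe^2) + t/qe
    by ordinary least squares; returns (qe, k)."""
    y = [t / qi for t, qi in zip(times, q)]
    n = len(times)
    mt = sum(times) / n
    my = sum(y) / n
    slope = (sum((t - mt) * (yi - my) for t, yi in zip(times, y))
             / sum((t - mt) ** 2 for t in times))
    intercept = my - slope * mt
    qe = 1 / slope              # slope = 1/qe
    k = slope ** 2 / intercept  # intercept = 1/(k*qe^2)
    return qe, k

# Synthetic uptake curve with qe = 504 mg/g (the TNs-24 capacity above) and an
# assumed k = 1e-3 g/(mg*min); the linear fit recovers both parameters.
qe_true, k_true = 504.0, 1e-3
times = list(range(5, 125, 5))
q = [qe_true ** 2 * k_true * t / (1 + qe_true * k_true * t) for t in times]
qe_hat, k_hat = fit_pseudo_second_order(times, q)
```

Fitting real uptake data the same way yields the qe and k values behind the model comparison reported in the abstract.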
Pretegiani, Elena; Astefanoaei, Corina; Daye, Pierre M; FitzGibbon, Edmond J; Creanga, Dorina-Emilia; Rufa, Alessandra; Optican, Lance M
2015-01-28
We move our eyes to explore the world, but visual areas determining where to look next (action) are different from those determining what we are seeing (perception). Whether, or how, action and perception are temporally coordinated is not known. The preparation time course of an action (e.g., a saccade) has been widely studied with the gap/overlap paradigm with temporal asynchronies (TA) between peripheral target onset and fixation point offset (gap, synchronous, or overlap). However, whether the subjects perceive the gap or overlap, and when they perceive it, has not been studied. We adapted the gap/overlap paradigm to study the temporal coupling of action and perception. Human subjects made saccades to targets with different TAs with respect to fixation point offset and reported whether they perceived the stimuli as separated by a gap or overlapped in time. Both saccadic and perceptual report reaction times changed in the same way as a function of TA. The TA dependencies of the time change for action and perception were very similar, suggesting a common neural substrate. Unexpectedly, in the perceptual task, subjects misperceived lights overlapping by less than ∼100 ms as separated in time (overlap seen as gap). We present an attention-perception model with a map of prominence in the superior colliculus that modulates the stimulus signal's effectiveness in the action and perception pathways. This common source of modulation determines how competition between stimuli is resolved, causes the TA dependence of action and perception to be the same, and causes the misperception.
... including some imported jewelry. What are the health effects of lead? • More commonly, lower levels of lead in children over time may lead to reduced IQ, slow learning, Attention Deficit Hyperactivity Disorder (ADHD), or behavioral issues. • Lead also affects other ...
Detailed Maintenance Planning for Military Systems with Random Lead Times and Cannibalization
2014-12-01
with IID-featured PE failures are studied using queuing theory [3, 4, 10]. However, this method cannot be used for detailed operational decision making...constant failure rates and therefore they were able to apply queuing theory to maintenance and reliability studies [3-7]. In the CAF reality, however, the...in a time period depends on the number of functioning PEs at the beginning of the period. Using the theory of Markov decision processes (MDP), Zhang [8
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Sleep restriction may lead to disruption in physiological attention and reaction time
Arbind Kumar Choudhary
2016-07-01
Full Text Available Sleepiness is the condition in which a person for some reason fails to go into a sleep state and has difficulty remaining awake even while carrying out activities. Sleep restriction occurs when an individual fails to get enough sleep due to high work demands. The mechanism linking sleep restriction to the underlying brain physiology deficits is not well understood. The objective of the present study was to investigate mental attention (P300) and reaction time [visual (VRT) and auditory (ART)] among night watchmen on the first (1st), fourth (4th) and seventh (7th) day of a restricted sleep period. After applying exclusion and inclusion criteria, the study was performed among 50 watchmen (age=18-35 years) (n=50) who provided written informed consent, divided into two groups. Group I (normal sleep) (n=28): working in the daytime with normal sleep at night (≥8 h); Group II (restricted sleep) (n=22): working at night with reduced sleep at night (≤3 h). Statistical significance between the groups was determined by the independent Student's t test, with the significance level fixed at p≤0.05. We observed that among all normal and restricted sleep watchmen there was no significant variation in Karolinska Sleepiness Scale (KSS) score, VRT and ART, or in the latency and amplitude of P300, on the 1st day of restricted sleep. However, on the subsequent 4th and 7th days of restricted sleep, there was a significant increase in KSS score, prolongation of VRT and ART, and alteration in the latency and amplitude of the P300 wave in restricted sleep watchmen compared to normal sleep watchmen. The present findings conclude that loss of sleep has a major impact on dynamic changes in mental attention and reaction time among watchmen employed on the night shift. Professional regulations and work schedules should integrate sleep schedules before and during the work period as an essential dimension of a healthy life.
Sleep restriction may lead to disruption in physiological attention and reaction time.
Choudhary, Arbind Kumar; Kishanrao, Sadawarte Sahebrao; Dadarao Dhanvijay, Anup Kumar; Alam, Tanwir
2016-01-01
Sleepiness is the condition in which a person for some reason fails to go into a sleep state and has difficulty remaining awake even while carrying out activities. Sleep restriction occurs when an individual fails to get enough sleep due to high work demands. The mechanism linking sleep restriction to the underlying brain physiology deficits is not well understood. The objective of the present study was to investigate mental attention (P300) and reaction time [visual (VRT) and auditory (ART)] among night watchmen on the first (1st), fourth (4th) and seventh (7th) day of a restricted sleep period. After applying exclusion and inclusion criteria, the study was performed among 50 watchmen (age=18-35 years) (n=50) who provided written informed consent, divided into two groups. Group I (normal sleep) (n=28): working in the daytime with normal sleep at night (≥8 h); Group II (restricted sleep) (n=22): working at night with reduced sleep at night (≤3 h). Statistical significance between the groups was determined by the independent Student's t test, with the significance level fixed at p≤0.05. We observed that among all normal and restricted sleep watchmen there was no significant variation in Karolinska Sleepiness Scale (KSS) score, VRT and ART, or in the latency and amplitude of P300, on the 1st day of restricted sleep. However, on the subsequent 4th and 7th days of restricted sleep, there was a significant increase in KSS score, prolongation of VRT and ART, and alteration in the latency and amplitude of the P300 wave in restricted sleep watchmen compared to normal sleep watchmen. The present findings conclude that loss of sleep has a major impact on dynamic changes in mental attention and reaction time among watchmen employed on the night shift. Professional regulations and work schedules should integrate sleep schedules before and during the work period as an essential dimension of a healthy life.
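The group comparisons described above rest on the independent-samples Student's t test. A minimal sketch of the pooled-variance t statistic; the data below are illustrative only, not the study's measurements:

```python
from math import sqrt
from statistics import fmean, variance

def independent_t(a, b):
    """Pooled-variance independent-samples (Student's) t statistic
    for two groups a and b."""
    n1, n2 = len(a), len(b)
    # pooled sample variance with n1 + n2 - 2 degrees of freedom
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (fmean(a) - fmean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))
```

The resulting t is compared against the critical value for n1 + n2 - 2 degrees of freedom at the study's p ≤ 0.05 threshold.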
Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)
2017-06-15
A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on permutation entropy estimated values when these ties are symbolized, as it is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts. - Highlights: • Impact of repeated values in a signal when estimating permutation entropy is studied. • Numerical and experimental tests are included for characterizing this limitation. • Non-negligible temporal correlations can be spuriously concluded by repeated values. • Data digitized with low amplitude resolutions could be especially affected. • Analysis with shuffled realizations can help to overcome this limitation.
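The tie-handling effect studied above can be reproduced in a few lines: symbolizing equal values by their order of appearance makes a coarsely quantized random series look temporally structured (lower normalized permutation entropy) even though it is just as random as a finely resolved one. A minimal sketch:

```python
import random
from collections import Counter
from math import factorial, log2

def permutation_entropy(x, order=3):
    """Normalized permutation entropy; ties are symbolized by order of
    appearance (equal values keep their left-to-right order)."""
    counts = Counter()
    for i in range(len(x) - order + 1):
        w = x[i:i + order]
        # stable argsort: equal values ranked by position, as is commonly done
        counts[tuple(sorted(range(order), key=lambda j: (w[j], j)))] += 1
    total = sum(counts.values())
    h = -sum(c / total * log2(c / total) for c in counts.values())
    return h / log2(factorial(order))

rng = random.Random(1)
fine = [rng.random() for _ in range(5000)]    # high amplitude resolution
coarse = [round(v) for v in fine]             # quantized to 2 levels: many ties
# `coarse` shows spuriously low entropy despite being equally random.
```

A constant series collapses to a single ordinal pattern (entropy 0), and the quantized series sits well below the finely resolved one, mirroring the false temporal-correlation conclusion the abstract warns about.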
Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report
Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.
2011-09-30
Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also in FY2011, PNNL continued to develop an analytical model. These efforts included the addition of six more non-fissile absorbers in the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the
Boyle, Edward A.; Bergquist, Bridget A.; Kayser, Richard A.; Mahowald, Natalie
2005-02-01
water maxima, but there are some significant quantitative differences from other reported profiles. The ≤0.4 μm Mn concentration is highest near the surface, decreases sharply in the upper 500 m, then shows an intermediate water maximum at 800 m and then decreases in the deepest waters; these concentrations are higher than observed at a station 350 miles to the northeast that shows similar vertical variations. It appears that there is a significant Mn gradient (throughout the water column) from HOT towards the northeast. Compared to the first valid oceanic Pb data for samples collected in 1976, Pb at ALOHA in 1997-1999 shows decreases in surface waters and waters shallower than 200 m. Pb concentrations in central North Pacific surface waters have decreased by a factor of 2 during the past 25 yr (from ~65 to ~30 pmol kg⁻¹); surface water Pb concentrations in the central North Atlantic and central North Pacific are now comparable. We attribute the surface water Pb decrease to the elimination of leaded gasoline in Japan and, to some extent, in the U.S. and Canada. We attribute most of the remaining Pb in Pacific surface waters to Asian emissions, more likely due to high-temperature industrial activities such as coal burning rather than to leaded gasoline consumption. A 3-year mixed-layer time series from the nearby HALE-ALOHA mooring site (1997-1999) shows that there is an annual cycle in Pb with concentrations ~20% higher in winter months; this rise may be created by downward mixing of the winter mixed layer into the steep gradient of higher Pb in the upper thermocline (Pb concentrations double between the surface and 200 m). From 200 m to the bottom, Pb concentrations decrease to levels of 5-9 pmol kg⁻¹ near the bottom; for most of the water column, thermocline and deepwater Pb concentrations do not appear to have changed significantly during the 23-yr interval.
Muntlin Athlin, Asa; von Thiele Schwarz, Ulrica; Farrohknia, Nasim
2013-11-01
Long waiting times for emergency care are claimed to be caused by overcrowded emergency departments and ineffective working routines. Teamwork has been suggested as a promising solution to these issues. The aim of the present study was to investigate the effects of teamwork in a Swedish emergency department on lead times and patient flow. The study was set in an emergency department of a university hospital where teamwork, a multi-professional team responsible for the whole care process for a group of patients, was introduced. The study has a longitudinal non-randomized intervention design. Data were collected for five two-week periods over 1.5 years. The first part of the data collection used an ABAB design whereby standard procedure (A) alternated weekly with teamwork (B). Then, three follow-ups were conducted. At the last follow-up, teamwork was permanently implemented. The outcome measures were: number of patients handled within teamwork time, time to physician, total visit time, and number of patients handled within the 4-hour target. A total of 1,838 patient visits were studied. The effect on lead times was only evident at the last follow-up. Findings showed that the number of patients handled within teamwork time was almost equal between the different study periods. At the last follow-up, the median time to physician was significantly decreased by 11 minutes (p = 0.0005) compared to the control phase, and the total visit time was significantly shorter at the last follow-up compared to the control phase (p = …). Teamwork seems to contribute to the quality improvement of emergency care in terms of small but significant decreases in lead times. However, although efficient work processes such as teamwork are necessary to ensure safe patient care, they are likely not sufficient to bring about larger decreases in lead times or to meet the 4-hour target in the emergency department.
Bean, Rachel; Magueijo, Joao
2001-01-01
Quintessence scenarios provide a simple explanation for the observed acceleration of the Universe. Yet, explaining why acceleration did not start a long time ago remains a challenge. The idea that the transition from radiation to matter domination played a dynamical role in triggering acceleration has been put forward in various guises. We propose a simple dilaton-derived quintessence model in which temporary vacuum domination is naturally triggered by the radiation to matter transition. In this model Einstein's gravity is preserved; quintessence couples non-minimally to cold dark matter, but not to "visible" matter. Such couplings have been attributed to the dilaton in the low-energy limit of string theory beyond tree level. We also show how a cosmological constant in the string frame translates into a quintessence-type potential in the atomic frame.
Open source and healthcare in Europe - time to put leading edge ideas into practice.
Murray, Peter J; Wright, Graham; Karopka, Thomas; Betts, Helen; Orel, Andrej
2009-01-01
Free/Libre and Open Source Software (FLOSS) is a process of software development, a method of licensing and a philosophy. Although FLOSS plays a significant role in several market areas, the impact in the health care arena is still limited. FLOSS is promoted as one of the most effective means for overcoming fragmentation in the health care sector and providing a basis for more efficient, timely and cost effective health care provision. The 2008 European Federation for Medical Informatics (EFMI) Special Topic Conference (STC) explored a range of current and future issues related to FLOSS in healthcare (FLOSS-HC). In particular, there was a focus on health records, ubiquitous computing, knowledge sharing, and current and future applications. Discussions resulted in a list of main barriers and challenges for use of FLOSS-HC. Based on the outputs of this event, the 2004 Open Steps events and subsequent workshops at OSEHC2009 and Med-e-Tel 2009, a four-step strategy has been proposed for FLOSS-HC: 1) a FLOSS-HC inventory; 2) a FLOSS-HC collaboration platform, use case database and knowledge base; 3) a worldwide FLOSS-HC network; and 4) FLOSS-HC dissemination activities. The workshop will further refine this strategy and elaborate avenues for FLOSS-HC from scientific, business and end-user perspectives. To gain acceptance by different stakeholders in the health care industry, different activities have to be conducted in collaboration. The workshop will focus on the scientific challenges in developing methodologies and criteria to support FLOSS-HC in becoming a viable alternative to commercial and proprietary software development and deployment.
Karimi Movahed, Kamran; Zhang, Zhi-Hai
2015-09-01
Demand and lead time uncertainties have significant effects on supply chain behaviour. In this paper, we present a single-product three-level multi-period supply chain with uncertain demands and lead times by using robust techniques to study the managerial insights of the supply chain inventory system under uncertainty. We formulate this problem as a robust mixed-integer linear program with minimised expected cost and total cost variation to determine the optimal (s, S) values of the inventory parameters. Several numerical studies are performed to investigate the supply chain behaviour. Useful guidelines for the design of a robust supply chain are also provided. Results show that the order variance and the expected cost in a supply chain significantly increase when the manufacturer's review period is an integer ratio of the distributor's and the retailer's review periods.
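The (s, S) review logic this model optimizes can be illustrated with a toy simulation; the demand range, parameter values, and the zero lead time below are illustrative assumptions, not values from the paper:

```python
import random

def simulate_sS(s, S, periods=1000, seed=0):
    """Toy periodic-review (s, S) policy: when on-hand inventory falls
    to s or below, order up to S. Demand is random; lead time is zero
    here for simplicity (the paper treats uncertain lead times)."""
    rng = random.Random(seed)
    inventory, total_orders, stockouts = S, 0, 0
    for _ in range(periods):
        demand = rng.randint(0, 20)
        if demand > inventory:
            stockouts += 1
        inventory = max(inventory - demand, 0)
        if inventory <= s:          # review point: reorder up to S
            total_orders += 1
            inventory = S
    return total_orders, stockouts

orders, shortages = simulate_sS(s=15, S=60)
```

Sweeping s and S in such a simulation mimics, very roughly, the search for the optimal (s, S) values that the robust mixed-integer program performs exactly.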
Lin Hsien-Jen
2013-01-01
In this paper, we consider an integrated vendor-buyer inventory policy for a continuous review model with a random number of defective items and a screening process conducted at a fixed screening rate on the buyer's arriving order lot. We assume that shortages are allowed and partially backlogged on the buyer's side, and that the lead time demand distribution is unknown except for its first two moments. The objective is to apply the minmax distribution free approach to simultaneously determine the optimal order quantity, reorder point, lead time, and number of lots delivered in one production run, so that the expected total system cost is minimized. Numerical experiments along with sensitivity analysis were performed to illustrate the effects of the parameters on the decision and the total system cost.
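The minmax distribution free approach mentioned above relies on a worst-case bound on expected shortage that uses only the first two moments of lead-time demand. A minimal sketch of the standard Gallego-Moon bound (the numeric values are hypothetical):

```python
import math

def shortage_bound(mu, sigma, r):
    """Worst-case expected shortage E[(D - r)+] over all lead-time
    demand distributions with mean mu and standard deviation sigma
    (Gallego-Moon / Scarf-type bound)."""
    return 0.5 * (math.sqrt(sigma ** 2 + (r - mu) ** 2) - (r - mu))

# Hypothetical lead-time demand: mean 100, std 20, reorder point 120.
b = shortage_bound(mu=100.0, sigma=20.0, r=120.0)
```

Raising the reorder point r shrinks the bound, which is the trade-off the minmax model balances against holding cost.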
Vijayashree, M.; Uthayakumar, R.
2017-03-01
Lead time is one of the major limits that affect planning at every stage of the supply chain system. In this paper, we study a continuous review inventory model in which ordering cost reductions depend on lead time. The study addresses a two-echelon supply chain problem consisting of a single vendor and a single buyer. Its main contribution is that the integrated total cost of the vendor-buyer system is analyzed under two different (linear and logarithmic) forms of lead-time-dependent ordering cost reduction. For each case, we develop an effective solution procedure for finding the optimal solution, determining the order quantity, ordering cost, lead time, and the number of deliveries from the single vendor to the single buyer in one production run so that the integrated total cost is minimized. Ordering cost reduction is the main aspect of the proposed model. The mathematical model is solved analytically by minimizing the integrated total cost, and numerical examples, solved using Matlab, validate the model and illustrate the results. A sensitivity analysis with respect to the major parameters of the system is also included. Results reveal that the proposed integrated inventory model is well suited to supply chain manufacturing systems. Finally, graphical representations and a computer flowchart for each model illustrate the proposed approach.
Siwon Song
2012-09-01
The climatology of mean bias errors (relative to 1-day forecasts) was examined in a 20-year hindcast set from version 1 of the Climate Forecast System (CFS), for forecast lead times of 2, 4, 8, 16, ... 256 days, verifying in different seasons. Results mostly confirm the simple expectation that atmospheric model biases should be evident at short lead (2–4 days), while soil moisture errors develop over days to weeks and ocean errors emerge over months. A further simplification is also evident: surface temperature bias patterns have nearly fixed geographical structure, growing with different time scales over land and ocean. The geographical pattern has mostly warm and dry biases over land and cool biases over the oceans, with two main exceptions: (1) deficient stratocumulus clouds cause warm biases in eastern subtropical oceans, and (2) high-latitude land is too cold in boreal winter. Further study of the east Pacific cold tongue-Intertropical Convergence Zone (ITCZ) complex shows a possible interaction between a rapidly expressed atmospheric model bias (a poleward shift of deep convection beginning at day 2) and slow ocean dynamics (erroneously cold upwelling along the equator at leads > 1 month). Further study of the high-latitude land cold bias shows that it is a thermal wind balance aspect of the deep polar vortex, not just a near-surface temperature error under the wintertime inversion, suggesting that its development time scale of weeks to months may involve long-timescale processes in the atmosphere, not necessarily in the land model. Winter zonal wind errors are small in magnitude, but a refractive index map shows that this can cause modest errors in Rossby wave ducting. Finally, as a counterpoint to our initial expectations about error growth, a case of non-monotonic error growth is shown: velocity potential bias grows with lead on a time scale of weeks, then decays over months. It is hypothesized that compensations between land and ocean errors may
Maximum phonation time in pre-school children
Carla Aparecida Cielo
2008-08-01
Past studies of maximum phonation time (MPT) in children have reported differing results, showing that the measure can reflect the neuromuscular and aerodynamic control of voice production and can serve as an indicator for other forms of assessment, both qualitative and objective. AIM: To verify MPT measures in 23 pre-school children aged four years to six years and eight months. METHOD: The sampling process comprised a questionnaire sent to parents, auditory screening, and a perceptual-auditory voice assessment using the RASAT scale. Data collection consisted of the MPTs. STUDY DESIGN: Prospective cross-sectional. RESULTS: Mean MPTs for /a/, /s/ and /z/ were 7.42 s, 6.35 s and 7.19 s; MPT for /a/ at age six was significantly longer than at age four; all MPTs increased with age; and the s/z ratio was close to one at all ages. CONCLUSIONS: The MPT values were higher than those reported in Brazilian studies and lower than those reported in international studies. Moreover, the age groups analyzed appear to be in a period of neural and muscular maturation, with immaturity most evident at age four.
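The s/z ratio reported in this study is a simple quotient of two phonation times; a minimal sketch using the study's mean /s/ and /z/ values:

```python
def sz_ratio(mpt_s, mpt_z):
    """s/z ratio: maximum phonation time of /s/ divided by that of /z/.
    Values near 1.0 are expected in healthy voices, as found here."""
    return mpt_s / mpt_z

ratio = sz_ratio(6.35, 7.19)  # the study's mean /s/ and /z/ MPTs (seconds)
```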
Zhao, Tongtiegang; Liu, Pan; Zhang, Yongyong; Ruan, Chengqing
2017-09-01
Global climate model (GCM) forecasts are an integral part of long-range hydroclimatic forecasting. We propose to use clustering to explore anomaly correlation, which indicates the performance of raw GCM forecasts, in the three-dimensional space of latitude, longitude, and initialization time. Focusing on a certain period of the year, correlations for forecasts initialized at different preceding periods form a vector. The vectors of anomaly correlation across different GCM grid cells are clustered to reveal how GCM forecasts perform as time progresses. Through the case study of Climate Forecast System Version 2 (CFSv2) forecasts of summer precipitation in China, we observe that the correlation at a certain cell oscillates with lead time and can become negative. The use of clustering reveals two meaningful patterns that characterize the relationship between anomaly correlation and lead time. For some grid cells in Central and Southwest China, CFSv2 forecasts exhibit positive correlations with observations and they tend to improve as time progresses. This result suggests that CFSv2 forecasts tend to capture the summer precipitation induced by the East Asian monsoon and the South Asian monsoon. It also indicates that CFSv2 forecasts can potentially be applied to improving hydrological forecasts in these regions. For some other cells, the correlations are generally close to zero at different lead times. This outcome implies that CFSv2 forecasts still have plenty of room for further improvement. The robustness of the patterns has been tested using both hierarchical clustering and k-means clustering and examined with the Silhouette score.
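The clustering step can be sketched with a minimal k-means over per-cell vectors of anomaly correlation across lead times; the tiny implementation and the synthetic "grid cells" below are illustrative, not the paper's code (which also uses hierarchical clustering and the Silhouette score):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: each point is one grid cell's vector of
    anomaly correlations at successive lead times."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two synthetic regimes: correlations improving with lead vs. near zero.
cells = [(0.2, 0.4, 0.6), (0.25, 0.45, 0.65),
         (0.0, 0.05, -0.05), (0.02, -0.02, 0.0)]
centers, groups = kmeans(cells, k=2)
```

The recovered cluster centers correspond to the paper's two patterns: cells where skill grows as time progresses, and cells where correlations stay near zero at all leads.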
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Hardik Soni
2011-10-01
In today's global marketplace, individual firms do not compete as independent entities but rather as integral parts of a supply chain. Uncertainty is the main attribute in managing supply chains. Accordingly, we develop a (Q, R) inventory model with a service level constraint and variable lead time in a fuzzy-stochastic environment. In addition, triangular fuzzy numbers dependent on lead time are used to construct the fuzzy-stochastic lead-time demand. Using the credibility criterion, the expected shortages are calculated. Without loss of generality, we assume that all observed values of the fuzzy random variable representing demand are triangular fuzzy numbers. Consequently, the value of the total expected cost in the fuzzy sense is derived using the expected value criterion, or credibility criterion. To determine an optimal policy, a numerical technique is presented and the results are analyzed using scan and zoom for constrained optimization. Finally, to demonstrate the accuracy and effectiveness of the proposed model, a numerical example and sensitivity analysis are included.
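The credibility expected value of a triangular fuzzy number, the building block behind the expected cost above, has a simple closed form; a minimal sketch (the demand numbers are hypothetical):

```python
def credibility_expected(a, b, c):
    """Expected value of a triangular fuzzy number (a, b, c) under the
    credibility measure (Liu's definition): E = (a + 2b + c) / 4."""
    assert a <= b <= c
    return (a + 2 * b + c) / 4.0

# Hypothetical fuzzy lead-time demand "around 100 units":
e = credibility_expected(80, 100, 130)
```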
Ford, Corey C. (University of New Mexico, Albuquerque, NM); Taylor, Paul Allen
2008-02-01
The objective of this modeling and simulation study was to establish the role of stress wave interactions in the genesis of traumatic brain injury (TBI) from exposure to explosive blast. A high resolution (1 mm³ voxels), 5-material model of the human head was created by segmentation of color cryosections from the Visible Human Female dataset. Tissue material properties were assigned from literature values. The model was inserted into the shock physics wave code, CTH, and subjected to a simulated blast wave of 1.3 MPa (13 bars) peak pressure from anterior, posterior and lateral directions. Three-dimensional plots of maximum pressure, volumetric tension, and deviatoric (shear) stress demonstrated significant differences related to the incident blast geometry. In particular, the calculations revealed focal brain regions of elevated pressure and deviatoric (shear) stress within the first 2 milliseconds of blast exposure. Calculated maximum levels of 15 kPa deviatoric stress, 3.3 MPa pressure, and 0.8 MPa volumetric tension were observed before the onset of significant head accelerations. Over a 2 ms time course, the head model moved only 1 mm in response to the blast loading. Doubling the blast strength changed the resulting intracranial stress magnitudes but not their distribution. We conclude that stress localization, due to early time wave interactions, may contribute to the development of multifocal axonal injury underlying TBI. We propose that a contribution to traumatic brain injury from blast exposure, and most likely blunt impact, can occur on a time scale shorter than previous model predictions and before the onset of linear or rotational accelerations traditionally associated with the development of TBI.
McDonald, Catherine C; Seacrist, Thomas S; Lee, Yi-Ching; Loeb, Helen; Kandadai, Venk; Winston, Flaura K
2013-01-01
Driving simulators can be used to evaluate driving performance under controlled, safe conditions. Teen drivers are at particular risk for motor vehicle crashes, and simulated driving can provide important information on performance. We developed a new simulator protocol, the Simulated Driving Assessment (SDA), with the goal of providing a new tool for driver assessment and a common outcome measure for evaluation of training programs. As an initial effort to examine the validity of the SDA to differentiate performance according to experience, this analysis compared driving behaviors and crashes between novice teens (n=20) and experienced adults (n=17) on a high-fidelity simulator for one common crash scenario, a rear-end crash. We examined headway time and crashes during a lead truck with sudden braking event in our SDA. We found that 35% of the novice teens crashed and none of the experienced adults crashed in this lead truck braking event; 50% of the teens versus 25% of the adults had a headway time of less than 2 seconds, and among participants with a headway time of less than 2 seconds, 70% crashed. Among all participants with a headway time of 2-3 seconds, further investigation revealed descriptive differences in throttle position and brake pedal force when comparing teens who crashed, teens who did not crash, and adults (none of whom crashed). Even with a relatively small sample, we found statistically significant differences in headway time for adults and teens, providing preliminary construct validation for our new SDA.
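Headway time itself is simply the following gap divided by the follower's speed; a minimal sketch with hypothetical values (not data from the study):

```python
def headway_time(gap_m, speed_mps):
    """Headway time in seconds: bumper-to-bumper gap (m) divided by the
    following vehicle's speed (m/s)."""
    return gap_m / speed_mps

# Hypothetical: 40 m behind the lead truck at 20 m/s gives a 2 s headway.
t = headway_time(40.0, 20.0)
```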
Effect of Kiwi Shell and Incubation Time on Mobility of Lead and Cadmium in Contaminated Clay Soil
Bahareh Lorestani
2014-06-01
In this study, the effectiveness of kiwi shell in reducing the mobility of lead and cadmium in clay soil was investigated over different time intervals. For this purpose, a clay soil sample was contaminated with lead and cadmium in separate dishes at concentrations of 10 and 600 ppm respectively and mixed with 5% kiwi shell. Samples were placed in an incubator, and soil sampling was performed at intervals of 3 hours and 1, 3, 7, 14, 21 and 28 days. Heavy metal concentrations were determined in different soil fractions, including exchangeable, carbonate, Fe-Mn oxide, organic matter, and residual, using a sequential extraction procedure and atomic absorption spectrophotometry. The results showed that during incubation, lead concentration in treatments with kiwi shell, compared with the control soil, increased in the carbonate fraction from 19.48 to 26.18 percent and in organic matter from 9.06 to 18.66 percent, while the exchangeable, Fe-Mn oxide and residual fractions decreased from 11.48 to 6.69, 45.72 to 39.83 and 14.21 to 7.90 percent respectively. In samples with the absorbent, compared with the control soil, cadmium concentration in the carbonate and organic matter fractions increased from 28.20 to 38.40 and 18.76 to 24.72 percent, while the exchangeable, Fe-Mn oxide and residual fractions decreased from 16.66 to 13.69, 37.25 to 19.65 and 6.24 to 3.61 percent respectively. This study revealed that the effectiveness of kiwi shell in decreasing cadmium and lead mobility in the studied clay soil increased with incubation time, but cadmium, compared with lead, required additional time to transfer to constant and stable soil fractions such as organic matter and Fe-Mn oxides.
Al-Araidah, Omar; Momani, Amer; Khasawneh, Mohammad; Momani, Mohammed
2010-01-01
The healthcare arena, much like the manufacturing industry, benefits from many aspects of the Toyota lean principles. Lean thinking contributes to reducing or eliminating non-value-added time, money, and energy in healthcare. In this paper, we apply selected principles of lean management aiming at reducing the wasted time associated with drug dispensing at an inpatient pharmacy at a local hospital. Thorough investigation of the drug dispensing process revealed unnecessary complexities that contribute to delays in delivering medications to patients. We utilize DMAIC (Define, Measure, Analyze, Improve, Control) and 5S (Sort, Set-in-order, Shine, Standardize, Sustain) principles to identify and reduce wastes that contribute to increasing the lead-time in healthcare operations at the pharmacy under study. The results obtained from the study revealed potential savings of > 45% in the drug dispensing cycle time.
Schlegel, Todd T.; Kulecz, Walter B.; DePalma, Jude L.; Feiveson, Alan H.; Wilson, John S.; Rahman, M. Atiar; Bungo, Michael W.
2004-01-01
Several studies have shown that diminution of the high-frequency (HF; 150-250 Hz) components present within the central portion of the QRS complex of an electrocardiogram (ECG) is a more sensitive indicator for the presence of myocardial ischemia than are changes in the ST segments of the conventional low-frequency ECG. However, until now, no device has been capable of displaying, in real time on a beat-to-beat basis, changes in these HF QRS ECG components in a continuously monitored patient. Although several software programs have been designed to acquire the HF components over the entire QRS interval, such programs have involved laborious off-line calculations and postprocessing, limiting their clinical utility. We describe a personal computer-based ECG software program developed recently at the National Aeronautics and Space Administration (NASA) that acquires, analyzes, and displays HF QRS components in each of the 12 conventional ECG leads in real time. The system also updates these signals and their related derived parameters in real time on a beat-to-beat basis for any chosen monitoring period and simultaneously displays the diagnostic information from the conventional (low-frequency) 12-lead ECG. The real-time NASA HF QRS ECG software is being evaluated currently in multiple clinical settings in North America. We describe its potential usefulness in the diagnosis of myocardial ischemia and coronary artery disease.
Agustina Hidayat, Yosi; Ria Kasanah, Aprilia; Yudhistira, Titah
2016-02-01
PT. Dirgantara Indonesia, one of the state-owned enterprises engaged in the aerospace industry, targets control of 30% of the world market for light and medium-sized aircraft. One type of aircraft produced by PT. DI every year is the CN-235. Currently, the cost of material procurement reaches 50% of the total cost of production. Materials have a variety of characteristics, one of which is having a lifetime. The demand characteristic of materials with expiration for the CN-235 aircraft is deterministic. PT. DI does not have any scientific basis for its raw material procurement policy. In addition, there are two methods of transportation used for delivering materials, i.e. by land and by air, each with a different lead time. The inventory policies used in this research are deterministic and probabilistic. Both the deterministic and probabilistic single- and multi-item inventory policies have order quantity, time to order, reorder point, and lead time as decision variables. The performance indicator for this research is total inventory cost. An inventory policy using the single-item EOQ model and considering the expiration factor results in a reduction in total costs of up to 69.58%, and the multi-item model results in a decrease of 71.16%. The proposed inventory policy using the single-item model considering the expiration factor and lead time crashing cost results in a decrease in total costs of 71.5%, and the multi-item model results in a decrease of 71.62%. Subsequently, wasted expired materials have been successfully decreased by up to 95% with the proposed models.
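The deterministic single-item policy builds on the classic economic order quantity formula; this minimal sketch omits the expiration and lead-time crashing extensions the study adds, and all numbers are hypothetical:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2DK/h), where D is
    annual demand, K the fixed cost per order, h the holding cost per
    unit per year."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical material: 1200 units/year, $50 per order, $6/unit/year.
q = eoq(annual_demand=1200, order_cost=50, holding_cost=6)
```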
E. Dologlou
2011-06-01
The seismicity of the last 15 years in the Aegean Sea revealed that earthquakes (M_w > 5) with epicentres falling within the Sporades basin and the confined area north of Samos island were preceded by seismic electric signals (SES) with a remarkably long lead time. A possible explanation of this behaviour by means of the specific tectonics and geodynamics which characterise these two regions, such as a significantly small crustal thickness and a high heat flow rate, has been attempted. New data seem to strengthen the above hypothesis.
Swierczynska, Malgorzata; Mizinski, Bartlomiej; Niedzielski, Tomasz
2015-04-01
There exist several systems which produce sea level forecasts in real time, with lead times ranging from hours to two weeks into the future. One of the recently developed solutions is Prognocean, a system built and implemented at the University of Wroclaw, Poland. Its main feature is that it uses simple time series models to predict sea level anomaly maps, for lead times ranging from 1 to 14 days with daily updates. The empirical data-based models are fitted in real time both to individual grids (polynomial-harmonic model, polynomial-harmonic model combined with an autoregressive model, polynomial-harmonic model combined with a threshold autoregressive model) and to numerous grids forming a spatial latitude x longitude window of 3° x 5° (polynomial-harmonic model combined with a multivariate autoregressive model). Despite their simplicity, these approaches have already been shown to produce sea level anomaly predictions of reasonable accuracy. However, no analysis has yet offered a comparative study presenting the skills of the Prognocean system against the performance of other systems that use physically-based models. This study aims to fill this gap by comparing Prognocean-based predictions for one week into the future with the corresponding prognoses calculated by MyOcean. The reader is provided with an objectively-calculated set of statistics, presented as maps, which describes the prediction errors (mean absolute error, root mean square error, index of agreement) and prediction skills (prediction efficiency, coefficient of determination) of the two systems. The exercise enables a comparison of the skills of the two approaches, and the gridwise comparison allows one to identify areas of superior performance of each system.
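The error statistics named above (mean absolute error, root mean square error, index of agreement) can be computed per grid cell as follows; this is a generic sketch with toy data, not the Prognocean code:

```python
import math

def error_stats(pred, obs):
    """MAE, RMSE and Willmott's index of agreement d, computed for one
    grid cell's predicted vs. observed sea level anomalies."""
    n = len(obs)
    obar = sum(obs) / n
    mae = sum(abs(p - o) for p, o in zip(pred, obs)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    denom = sum((abs(p - obar) + abs(o - obar)) ** 2 for p, o in zip(pred, obs))
    d = 1 - sum((p - o) ** 2 for p, o in zip(pred, obs)) / denom
    return mae, rmse, d

mae, rmse, d = error_stats([1.1, 2.0, 2.9], [1.0, 2.0, 3.0])
```

An index of agreement near 1 indicates close tracking of observations; mapping these three statistics gridwise gives exactly the comparison maps described in the abstract.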
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W
2016-03-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
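For a pure exponential decay with no instrument response function, the ML estimate of the lifetime is simply the mean photon arrival time; the sketch below uses synthetic data to illustrate why ML stays stable at modest counts, and deliberately omits the IRF convolution that the paper's fits always include:

```python
import random

def ml_lifetime(arrival_times):
    """Maximum likelihood estimate of an exponential decay lifetime:
    for f(t) = (1/tau) exp(-t/tau), the MLE of tau is the sample mean.
    Real TCSPC fitting must also convolve with the IRF (omitted here)."""
    return sum(arrival_times) / len(arrival_times)

rng = random.Random(42)
true_tau = 0.530  # ns: rose bengal in methanol, 530 ps, as in the text
photons = [rng.expovariate(1 / true_tau) for _ in range(3000)]
tau_hat = ml_lifetime(photons)
```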
Li, Su
2016-01-01
The multiple fundamental frequency detection problem and the source separation problem from a single-channel signal containing multiple oscillatory components and a nonstationary noise are both challenging tasks. To extract the fetal electrocardiogram (ECG) from a single-lead maternal abdominal ECG, we face both challenges. In this paper, we propose a novel method to extract the fetal ECG signal from the single channel maternal abdominal ECG signal, without any additional measurement. The algorithm is composed of three main ingredients. First, the maternal and fetal heart rates are estimated by the de-shape short time Fourier transform, which is a recently proposed nonlinear time-frequency analysis technique; second, the beat tracking technique is applied to accurately obtain the maternal and fetal R peaks; third, the maternal and fetal ECG waveforms are established by the nonlocal median. The algorithm is evaluated on a simulated fetal ECG signal database (fecgsyn database), and tested on two real data...
Liu, Jin; Prezhdo, Oleg V
2015-11-19
Rapid development in lead halide perovskites has led to solution-processable thin film solar cells with power conversion efficiencies close to 20%. Nonradiative electron-hole recombination within perovskites has been identified as the main pathway of energy losses, competing with charge transport and limiting the efficiency. Using nonadiabatic (NA) molecular dynamics, combined with time-domain density functional theory, we show that nonradiative recombination happens faster than radiative recombination and long-range charge transfer to an acceptor material. Doping of lead iodide perovskites with chlorine atoms reduces charge recombination. On the one hand, chlorines decrease the NA coupling because they contribute little to the wave functions of the valence and conduction band edges. On the other hand, chlorines shorten coherence time because they are lighter than iodines and introduce high-frequency modes. Both factors favor longer excited-state lifetimes. The simulation shows good agreement with the available experimental data and contributes to the comprehensive understanding of electronic and vibrational dynamics in perovskites. The generated insights into design of higher-efficiency solar cells range from fundamental scientific principles, such as the role of electron-vibrational coupling and quantum coherence, to practical guidelines, such as specific suggestions for chemical doping.
Fernanda Veríssimo Soulé
2016-03-01
Quick Response Manufacturing (QRM) is a way manufacturing companies may increase their flexibility, and manufacturing flexibility is a key to differentiation and enhanced competitiveness. There is little empirical research on how small and medium-sized enterprises (SMEs) may benefit from QRM, which may limit the appropriation of this approach by these important actors of our economy. This article presents the results of a project which applied QRM to reduce the lead time of a small company located in the state of Sao Paulo. It was proposed to: (a) balance the throughputs of slow operations, reducing production batches by 50%; (b) implement cellular manufacturing and improvements in the management of work in process, using the POLCA system and visual management; and (c) implement integrated sales and operations planning (S&OP) and rules for prioritization of orders. It was identified that the proposal would generate a lead time reduction from 39 to 21.3 days and a decrease of at least 51% in raw material stock costs. During the research, the following conclusions could be drawn: (a) problems in management, investment capacity and supplier relationships are frequent in family-owned SMEs; (b) the QRM approach can be adapted to work within this environment; and (c) the knowledge developed in academia can be an important tool to help family-owned SMEs overcome these obstacles.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.
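The maximum entropy principle itself is easy to demonstrate: subject to a fixed mean, the least-biased distribution over a discrete set takes the Gibbs form p_i ∝ exp(-λ x_i). A minimal sketch solving for λ by bisection (a generic illustration, not any specific drug-discovery pipeline):

```python
import math

def maxent_mean(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over `values` with a fixed mean:
    p_i ∝ exp(-lam * x_i); lam is found by bisection, exploiting that
    the constrained mean decreases monotonically as lam grows."""
    def mean_at(lam):
        w = [math.exp(-lam * x) for x in values]
        return sum(x * wi for x, wi in zip(values, w)) / sum(w)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_at(mid) > target_mean:
            lo = mid   # mean still too large: need a larger lam
        else:
            hi = mid
    z = sum(math.exp(-lo * x) for x in values)
    return [math.exp(-lo * x) / z for x in values]

# With the mean pinned at the unconstrained value, maxent recovers
# the uniform distribution, as the principle requires.
p = maxent_mean([0, 1, 2, 3], target_mean=1.5)
```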
Fei Lin
2016-03-01
With its large capacity, total urban rail transit energy consumption is very high, so energy-saving operation is quite meaningful. Effective use of regenerative braking energy is the mainstream method for improving energy efficiency. This paper examines the optimization of train dwell time and builds a multiple-train operation model for energy conservation of the power supply system. By changing dwell times, braking energy can be absorbed and utilized by other traction trains as efficiently as possible. A genetic algorithm is proposed for the optimization, based on the current schedule. To validate the correctness and effectiveness of the optimization, a real case is studied: actual data from the Beijing subway Yizhuang Line are employed in the simulation, and the results indicate that the dwell-time optimization method is effective.
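The genetic algorithm mentioned above can be sketched generically. The code below is a minimal GA over per-station dwell-time offsets with a caller-supplied fitness function; the paper's power-supply energy model is not reproduced, so the fitness in the usage line is a purely illustrative stand-in.

```python
import random

def optimize_dwell(n, lo, hi, fitness, pop_size=30, gens=60, pmut=0.2):
    """Minimal genetic algorithm over n integer dwell-time offsets in [lo, hi].
    `fitness` scores a candidate (higher is better); the real energy model of
    the power supply system is problem-specific and not reproduced here."""
    rand_ind = lambda: [random.randint(lo, hi) for _ in range(n)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:         # bounded point mutation
                child[random.randrange(n)] = random.randint(lo, hi)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness: prefer all offsets near 5 s (illustrative only)
best = optimize_dwell(4, 0, 10, lambda d: -sum((x - 5) ** 2 for x in d))
```

In the real problem the fitness would simulate the timetable and score regenerative-braking overlap between trains.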
Hoeij, F.B. van; Stadhouders, P.H.G.M.; Weusten, B.L.A.M. [St Antonius Ziekenhuis, Department of Gastroenterology, Nieuwegein (Netherlands); Keijsers, R.G.M. [St Antonius Ziekenhuis, Department of Nuclear Medicine, Nieuwegein (Netherlands); Loffeld, B.C.A.J. [Zuwe Hofpoort Ziekenhuis, Department of Internal Medicine, Woerden (Netherlands); Dun, G. [Ziekenhuis Rivierenland, Department of Internal Medicine, Tiel (Netherlands)
2015-01-15
In patients undergoing {sup 18}F-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUV{sub max}) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from benign lesions, and thereby be helpful in determining the urgency of colonoscopy. The aim of our study was to assess the incidence and underlying pathology of incidental PET-positive colonic lesions in a large cohort of patients, and to determine the usefulness of the SUV{sub max} in differentiating benign from malignant pathology. The electronic records of all patients who underwent FDG PET/CT from January 2010 to March 2013 in our hospital were retrospectively reviewed. The main indications for PET/CT were: characterization of an indeterminate mass on radiological imaging, suspicion or staging of malignancy, and suspicion of inflammation. In patients with incidental focal FDG uptake in the large bowel, data regarding subsequent colonoscopy were retrieved, if performed within 120 days. The final diagnosis was defined using colonoscopy findings, combined with additional histopathological assessment of the lesion, if applicable. Of 7,318 patients analysed, 359 (5 %) had 404 foci of unexpected colonic FDG uptake. In 242 of these 404 lesions (60 %), colonoscopy follow-up data were available. Final diagnoses were: adenocarcinoma in 25 (10 %), adenoma in 90 (37 %), and benign in 127 (53 %). The median [IQR] SUV{sub max} was significantly higher in adenocarcinoma (16.6 [12 - 20.8]) than in benign lesions (8.2 [5.9 - 10.1]; p < 0.0001), non-advanced adenoma (8.3 [6.1 - 10.5]; p < 0.0001) and advanced adenoma (9.7 [7.2 - 12.6]; p < 0.001). The receiver operating characteristic curve of SUV{sub max} for malignant versus nonmalignant lesions had an area under the curve of 0.868 (SD ± 0.038), the optimal cut-off value being 11.4 (sensitivity 80 %, specificity 82 %).
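For readers unfamiliar with the cut-off statistics quoted above, the sketch below shows how sensitivity and specificity at an SUVmax cut-off are computed. The arrays in the usage line are hypothetical illustrative values, not the study's data.

```python
import numpy as np

def sensitivity_specificity(suv_malignant, suv_benign, cutoff):
    """Sensitivity/specificity of calling a lesion malignant when
    SUVmax >= cutoff."""
    m = np.asarray(suv_malignant, dtype=float)
    b = np.asarray(suv_benign, dtype=float)
    sens = float((m >= cutoff).mean())   # true positive rate
    spec = float((b < cutoff).mean())    # true negative rate
    return sens, spec

# Hypothetical SUVmax values, for illustration only (not study data)
sens, spec = sensitivity_specificity(
    [16.6, 12.0, 20.8, 9.0], [8.2, 5.9, 10.1, 12.6], cutoff=11.4)
```

Sweeping the cut-off over all observed values and plotting sensitivity against 1 - specificity yields the ROC curve whose area the study reports.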
Lafalce, Evan; Zhang, Chuang; Vardeny, Z. Valy; University of Utah Team
We studied the charge transport properties of methyl-ammonium lead trihalide perovskites using the photocurrent transient time-of-flight method. Various morphologies were investigated, including single crystals and thin films with different crystalline grain sizes and surface roughness. The photocurrent transients were recorded as a function of excitation wavelength, intensity, and applied electric field, as well as sample temperature. We found that surface recombination leads to a photocurrent response that is sharply peaked at the band edge. While the carrier mobility depends on sample preparation and temperature, typical values are on the order of 1 cm²/Vs, consistent with previous reports using similar methods. This value is high compared to other solution-processed semiconductors such as pi-conjugated polymers and quantum dots; however, it is relatively low compared to inorganic semiconductors. Determining the mobility-limiting factors in hybrid perovskite devices is therefore important for progress in their optoelectronic device performance. This work was funded by ONR Grant N00014-15-1-2524 at the University of Utah.
Bulmău C
2013-04-01
It is already known that heavy metal pollution causes important concern for human and ecosystem health: at the European level, heavy metals account for 37.3% of the main contaminants affecting soils (EEA, 2007). This paper illustrates results obtained in the framework of laboratory experiments evaluating the integrated time-temperature effect of pyrolysis applied to soil contaminated in two different ways: soil historically contaminated with heavy metals from one of the most polluted areas of Romania, and soil artificially contaminated with PCB-containing transformer oil. In particular, the authors focused on evaluating the efficiency of pyrolysis in removing lead (Pb) and cadmium (Cd) from the contaminated soil. The experimental study evaluated two important parameters of the remediation methodology: the thermal process temperature and the retention time of the contaminated soil in the reactor. The remediation treatments were performed in a rotary kiln reactor at three process temperatures (400°C, 600°C and 800°C) and two retention times (30 min and 60 min). The completed analyses focused on the pyrolysis solid and gas products; both the ash and the gas obtained after the pyrolysis process were subjected to chemical analyses.
E. Dologlou
2012-08-01
The application of new data to the power-law relation between the stress drop of an earthquake and the lead time of the precursory seismic electric signal led to an exponent that falls in the range of critical exponents for fracture and is in excellent agreement with a previous one found by Dologlou (2012). In addition, this exponent is very close to the one reported by Varotsos and Alexopoulos (1984a), which interconnects the amplitude of the precursory seismic electric signal (SES) and the magnitude of the impending earthquake. Hence, the hypothesis that underlying dynamic processes evolving to criticality prevail in the pre-focal area when the SES is emitted is significantly supported.
Rubio-Marcos, F., E-mail: frmarcos@icv.csic.es [Electroceramic Department, Instituto de Ceramica y Vidrio, CSIC, Kelsen 5, 28049 Madrid (Spain); Marchet, P.; Merle-Mejean, T. [SPCTS, UMR 6638 CNRS, Universite de Limoges, 123, Av. A. Thomas, 87060 Limoges (France); Fernandez, J.F. [Electroceramic Department, Instituto de Ceramica y Vidrio, CSIC, Kelsen 5, 28049 Madrid (Spain)
2010-09-01
Lead-free KNN-modified piezoceramics of the system (Li,Na,K)(Nb,Ta,Sb)O{sub 3} were prepared by conventional solid-state sintering. The X-ray diffraction patterns revealed a perovskite phase together with a minor secondary phase assigned to K{sub 3}LiNb{sub 6}O{sub 17}, a tetragonal tungsten bronze (TTB). A structural evolution toward a pure tetragonal structure with increasing sintering time was observed, associated with the decrease of the TTB phase. A correlation between higher tetragonality and higher piezoelectric response was clearly evidenced. Contrary to the case of LiTaO{sub 3}-modified KNN, very large abnormal grains with TTB structure were not detected. As a consequence, the simultaneous modification by tantalum and antimony seems to induce a sintering behaviour different from that of LiTaO{sub 3}-modified KNN.
Senatori, Ornella; Setini, Andrea; Scirocco, Annunziata; Nicotra, Antonietta
2009-06-01
The aim of this work was to verify, in two small freshwater teleosts, Danio rerio and Poecilia reticulata, the effects of short-time exposures (24 and 72 h) to a sublethal dose (500 microg/L) of nickel and lead on brain monoamine oxidase (MAO), an important neural enzyme. The 24-h treatment with both metals caused a strong reduction of MAO activity in D. rerio brain, while causing a slight stimulation of MAO activity in P. reticulata brain. In both species the same treatment did not affect brain MAO mRNA production, as shown by RT-PCR. Extending the treatment to 72 h partly (D. rerio) or completely (P. reticulata) reversed the metal effects on brain MAO activity, suggesting that mechanisms to neutralize the metals had been activated.
Mohammadali Pirayesh Neghab
2014-01-01
At Iran Khodro Khorasan, the inventory replenishment policy for some items is based on a simple model (the Wilson model). Sometimes, for various reasons, goods may not be received at the planned time. This state is called a critical situation because the production line is on the eve of stopping. The purpose of this paper is to present different inventory replenishment policies for critical situations. For each policy, a cost function that includes the cost of line stoppage, ordering cost, holding cost and transportation cost has been modeled, and through its minimization the vehicle type and optimal order quantity have been determined. The model has been applied numerically to two items of Iran Khodro Khorasan. Findings show that the proposed policy leads to cost savings compared with the company's current policy.
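The Wilson model named above is the classic economic order quantity (EOQ) formula. A minimal sketch, with illustrative numbers rather than Iran Khodro data:

```python
from math import sqrt

def wilson_eoq(demand_rate, order_cost, holding_cost):
    """Classic Wilson lot size Q* = sqrt(2DS/H): D demand per period,
    S fixed cost per order, H holding cost per unit per period."""
    return sqrt(2.0 * demand_rate * order_cost / holding_cost)

# Illustrative numbers only (not Iran Khodro Khorasan data)
q = wilson_eoq(demand_rate=1000, order_cost=50, holding_cost=4)  # ~158.1 units
```

Q* balances ordering cost against holding cost; the paper's critical-situation policies add line-stoppage and transportation costs on top of this baseline.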
Hardik N. Soni
2015-03-01
In this paper, an attempt has been made to develop a periodic review inventory model that treats lead time and the backorder rate as control variables in a fuzzy stochastic environment. Without loss of generality, we assume that all observed values of the fuzzy random variable representing demand are triangular fuzzy numbers. The variance of the fuzzy random demand is taken into consideration to give due attention to every fuzzy observation. The protection interval demand is also assumed to be fuzzy stochastic, and the expected shortages are calculated using a credibility criterion. For the proposed model, we provide a solution procedure incorporating a numerical technique, the scan-and-zoom method, to determine an optimal policy. A numerical example illustrates the solution procedure, and a sensitivity analysis of the optimal solution with respect to the key parameters of the system is carried out.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimination...
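The MAF transform described above can be sketched as a generalized eigenproblem between the covariance of the data and the covariance of its shift differences. The code below assumes a regularly ordered 1-D sequence for the shift; the paper's irregularly sampled 2-D setting requires a spatial shift instead, so this illustrates the algebra only.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X, shift=1):
    """Maximum autocorrelation factors of a multivariate series X (n x p):
    directions w minimizing Var(w'dX)/Var(w'X), i.e. maximizing lag-`shift`
    autocorrelation. Sketch for a regularly ordered 1-D sequence."""
    Xc = X - X.mean(axis=0)
    D = Xc[shift:] - Xc[:-shift]        # difference series
    S = np.cov(Xc, rowvar=False)        # data covariance
    Sd = np.cov(D, rowvar=False)        # difference covariance
    # Generalized eigenproblem Sd w = lam S w; small lam = smooth factor
    vals, vecs = eigh(Sd, S)
    return Xc @ vecs, vals              # factor scores, eigenvalues (ascending)
```

The first factor (smallest eigenvalue) is the most autocorrelated linear combination, which is what makes MAF images spatially coherent compared with non-spatial factors.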
Jansen, N.W.D.; Roosendaal, G.; Bijlsma, J.W.J.; Groot, J. de; Lafeber, F.P.J.G.
2007-01-01
Objective. Joint bleeding, or hemarthrosis, leads in time to severe joint damage. This study was carried out to test the in vitro thresholds of exposure time and concentration that lead to irreversible joint damage, to add to the discussion on the usefulness of aspiration of the joint after a hemorrhage.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Doyle, Chris
2014-01-01
The Vancouver 2010 Winter Olympics were held from 12 to 28 February 2010, and the Paralympic events followed 2 weeks later. During the Games, the weather posed a grave threat to the viability of one venue and created significant complications for the event schedule at others. Forecasts of weather with lead times ranging from minutes to days helped organizers minimize disruptions to sporting events and helped ensure all medal events were successfully completed. Of comparable importance, however, were the scenarios and forecasts of probable weather for the winter in advance of the Games. Forecasts of mild conditions at the time of the Games helped the Games' organizers mitigate what would have been very serious potential consequences for at least one venue. Snowmaking was one strategy employed well in advance of the Games to prepare for the expected conditions. This short study will focus on how operational decisions were made by the Games' organizers on the basis of both climatological and snowmaking forecasts during the pre-Games winter. An attempt will be made to quantify, economically, the value of some of the snowmaking forecasts made for the Games' operators. The results obtained indicate that although the economic value of the snowmaking forecast was difficult to determine, the Games' organizers valued the forecast information greatly. This suggests that further development of probabilistic forecasts for applications like pre-Games snowmaking would be worthwhile.
Rosangela Alves de Mendonça
2012-12-01
PURPOSE: to measure the limits of maximum phonation time before and after application of the Stemple and Gerdeman Vocal Function Exercises Program in elementary school teachers in Niterói, Brazil, with and without voice alterations. METHOD: 17 female teachers spontaneously agreed to participate. The exercise program consisted of the sustained vowel /i/, ascending and descending glides on the word /nol/, and the musical scale tones Do, Re, Mi, Fa, Sol, emitting /ol/ for the maximum phonation time. Maximum phonation time was measured before and after the program using the vowel [ε], after each participant had undergone videolaryngostroboscopy. RESULTS: there was an expressive gain in maximum phonation time from pre- to post-exercise, confirming the value of a program that prioritizes performing the exercises for the longest possible phonation time. CONCLUSION: the Stemple and Gerdeman Vocal Function Exercises Program favored an increase in intra-subject maximum phonation time, enabling better vocal health in professional and social performance.
Ma, Li; Li, Mei; Huang, Zhengxu; Li, Lei; Gao, Wei; Nian, Huiqing; Zou, Lilin; Fu, Zhong; Gao, Jian; Chai, Fahe; Zhou, Zhen
2016-07-01
Using a single particle aerosol mass spectrometer (SPAMS), the chemical composition and size distributions of lead (Pb)-containing particles with diameters from 0.1 μm to 2.0 μm in Beijing were analyzed in the spring of 2011 during clear, hazy, and dusty days. Based on the mass spectral features of the particles, cluster analysis was applied to the Pb-containing particles, and six major classes were obtained: K-rich, carbonaceous, Fe-rich, dust, Pb-rich, and Cl-rich particles. Pb-containing particles accounted for 4.2-5.3%, 21.8-22.7%, and 3.2% of the total particle number during clear, hazy and dusty days, respectively. K-rich particles are a major contributor to Pb-containing particles, varying from 30.8% to 82.1% of the total number of Pb-containing particles, lowest during dusty days and highest during hazy days. The results reflect that the chemical composition and abundance of Pb-containing particles were affected by meteorological conditions as well as by emissions from natural and anthropogenic sources. K-rich and carbonaceous particles could be mainly assigned to coal combustion emissions; other classes of Pb-containing particles may be associated with metallurgical processes, coal combustion, dust, waste incineration, etc. In addition, this is the first time Pb-containing particles during dusty days have been studied by SPAMS. The method could provide a powerful tool for real-time monitoring and control of Pb pollution.
Vintzileos, A.; Halpert, M.; Gottschalk, J.; Allgood, A.
2016-12-01
Heatwaves are among the most dangerous, yet invisible, of natural hazards. According to NOAA, the 30-year annual mean fatalities from natural hazards in the U.S. rank as follows: heat (130), floods (81), tornadoes (70), lightning (48) and hurricanes (46). Early warning of excessive heat events can be improved by using multi-scale prognostic systems. We designed and developed such a system for forecasting excessive heat events at lead times beyond Week-1. This Subseasonal Excessive Heat Outlook System (SEHOS) consists of (a) a monitoring/verification component and (b) a forecasting component which in its baseline version uses NOAA's Global Ensemble Forecast System (GEFS) predictions of temperature and humidity from Day-8 to Day-14. In this presentation, we discuss the definition of heat events and sources of predictability, and present the forecast skill of SEHOS for the GEFS reforecast period (1985-2014). We then use subseasonal reforecasts from several models from the S2S database and discuss the forecast value added by multi-model approaches in predicting excessive heat events.
Yao, Ping; Zheng, Botong; Dawood, Mina; Huo, Linsheng; Song, Gangbing
2017-03-01
This paper proposes a nondestructive method to evaluate the health status of resistance spot-welded (RSW) joint under service load using lead zirconate titanate (PZT) active sensing system, in which the PZT transducers were used as both actuator and sensor. The physical principle of the approach was validated through a numerical analysis showing that an opening between the faying faces at the welded joint occurred under tension load. The opening decreased the contact area hence reduced the amplitude of the stress wave received by the PZT sensor. Therefore, by comparing the energy index of the signals before and after the loading, the health condition of the joint can be evaluated. Five ST14 steel single lap joint specimens were tested under tension load while being monitored by the PZT sensing system and digital image correlation (DIC) system in real time. The data obtained from the DIC system validated the numerical results. By comparing the energy index of the signal obtained from the PZT sensing system before and after unloading, it was concluded that the RSW joint was intact after being loaded to the service load. The proposed method is promising in evaluating the health condition of RSW joint nondestructively.
Wahyu Adrianto
2016-04-01
Engine maintenance strives to keep improving its service excellence through a gate system intended to realize a lead time of 60 days. In practice, the gate system has not yet met this target: waste encountered during the engine maintenance or overhaul process prevents the target from being reached. Lean Manufacturing is an approach aimed at minimizing waste along the process flow. The process condition is captured in a Value Stream Mapping, from which value-added and non-value-added activities are identified. Using the seven-waste concept, the waste types are then weighted to identify the most dominant one. The Value Stream Mapping shows that gate 1 and gate 3 are the points with the most waste. Weighting and ranking the seven wastes present in the process activities yields an ordering of critical wastes, with the highest weight, 0.38, on waiting. Root Cause Analysis shows that the root causes of waiting are unmaintained data, insufficient attention to people development, bugs still found in the process support system, and miscommunication between departments within engine maintenance. Keywords: lean, waste, value added, process
Guse, Joanna A; Soufiani, Arman M; Jiang, Liangcong; Kim, Jincheol; Cheng, Yi-Bing; Schmidt, Timothy W; Ho-Baillie, Anita; McCamey, Dane R
2016-04-28
Elucidating the decay mechanisms of photoexcited charge carriers is key to improving the efficiency of solar cells based on organo-lead halide perovskites. Here we investigate the spectral dependence (via above-, inter- and sub-bandgap optical excitations) of direct and trap-mediated decay processes in CH₃NH₃PbI₃ using time resolved microwave conductivity (TRMC). We find that the total end-of-pulse mobility is excitation wavelength dependent: the mobility is maximized (172 cm² V⁻¹ s⁻¹) when charge carriers are excited by near-bandgap light (780 nm) in the low charge carrier density regime (10⁹ photons per cm²), and is lower for above- and sub-bandgap excitations. Direct recombination is found to occur on the 100-400 ns timescale across excitation wavelengths near and above the bandgap, whereas indirect recombination processes displayed distinct behaviour following above- and sub-bandgap excitations, suggesting the influence of different trap distributions on recombination dynamics.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum of the power equation using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as functions of the time of day.
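The maximize-P procedure described above can be sketched numerically. The single-diode panel model and all parameter values below are illustrative assumptions, not taken from the article, and a bounded numerical optimizer stands in for the analytic differentiation step.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def max_power_point(i_sc=5.0, i_0=1e-9, v_t=0.7):
    """Maximum power point of an idealized single-diode panel model:
    I(V) = Isc - I0*(exp(V/Vt) - 1), P(V) = V*I(V). All parameter values
    are illustrative assumptions, not measurements from the article."""
    current = lambda v: i_sc - i_0 * (np.exp(v / v_t) - 1.0)
    power = lambda v: v * current(v)
    v_oc = v_t * np.log(i_sc / i_0 + 1.0)          # open-circuit voltage
    # Maximize P(V) numerically in place of the analytic dP/dV = 0 step
    res = minimize_scalar(lambda v: -power(v), bounds=(0.0, v_oc), method='bounded')
    return res.x, current(res.x), power(res.x)

v_mp, i_mp, p_mp = max_power_point()
```

Repeating this for irradiance-dependent parameters at each time of day reproduces the kind of maximum-power-versus-time curve the project plots.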
Grova, Monica M; Yang, Anthony D; Humphries, Misty D; Galante, Joseph M; Salcedo, Edgardo S
2017-05-19
The surgical community commonly perceives a decline in surgical and patient care skills among residents who take dedicated time away from clinical activity to engage in research. We hypothesize that residents perceive a decline in their skills because of dedicated research time. UC Davis Medical Center, Sacramento, CA, an institutional tertiary care center. General surgery residents and graduates from UC Davis general surgery residency training program, who had completed at least 1 year of research during their training. A total of 35 people were asked to complete the survey, and 19 people submitted a completed survey. Participants were invited to complete an online survey. Factors associated with the decline in skills following their research years were examined. All statistical analyses were performed with IBM SPSS Statistics software. A total of 19 current or former general surgery residents responded to the survey (54% response rate). Overall, 42% described their research as "basic science." Thirteen residents (68%) dedicated 1 year to research, while the remainder spent 2 or more years. Basic science researchers were significantly more likely to report a decrease in clinical judgment (75% vs. 22%, p = 0.013) as well as a decrease in patient care skills (63% vs. 0%, p = 0.002). Residents who dedicated at least 2 years to research were more likely to perceive a decline in overall aptitude and surgical skills (100% vs. 46%, p = 0.02), and a decline in patient care skills (67% vs. 8%, p = 0.007). Most residents who dedicate time for research perceive a decline in their overall clinical aptitude and surgical skills. This can have a dramatic effect on the confidence of these residents in caring for patients and leading a care team once they re-enter clinical training. Residents who engaged in 2 or more years of research were significantly more likely to perceive these problems. Further research should determine how to keep residents who are interested in academics
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
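The Toeplitz-system core of the method above can be illustrated with a simple time-domain deconvolution. The sketch below uses plain diagonal damping rather than the maximum entropy extrapolation the paper describes, so it shows the shared algebra (a Toeplitz normal-equation system, solvable by Levinson recursion), not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def toeplitz_deconvolve(radial, vertical, n_lags, damp=1e-6):
    """Least-squares time-domain deconvolution of `vertical` from `radial`
    via the Toeplitz normal equations (the Levinson-solvable system at the
    core of receiver-function estimation), stabilized by simple diagonal
    damping instead of maximum entropy extrapolation."""
    mid = len(vertical) - 1              # zero-lag index in 'full' mode
    ac = np.correlate(vertical, vertical, 'full')[mid:mid + n_lags].astype(float)
    ac[0] *= 1.0 + damp                  # damp the zero-lag term
    cc = np.correlate(radial, vertical, 'full')[mid:mid + n_lags].astype(float)
    return solve_toeplitz(ac, cc)        # filter f with radial ~ vertical * f
```

`scipy.linalg.solve_toeplitz` uses a Levinson-type recursion internally, which is why this formulation is the standard fast route for such filters.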
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Liang, Ding; Ivanov, Kamen; Li, Huiqi; Ning, Yunkun; Zhang, Qi; Wang, Lei; Zhao, Guoru
2014-01-01
Research on falls in elderly people has great social significance because of the rapidly growing aging population. The pre-impact lead time of a fall (PLT) is an important part of human fall theory: it is the longest time a person who is going to fall has to take action to prevent the fall or reduce bodily injury from the impact. However, there has been no clear definition of PLT so far, nor any comparative study of active and passive falls. In this study, we propose a theoretical definition of the PLT based on a new method of dividing a fall event, and compare the PLT and the related body angles between active and passive falls. Eight healthy adult subjects performed three kinds of activities of daily living (sitting, walking and lying) and two kinds of fall activities (active and passive) in three directions (forward, backward and lateral). Nine inertial sensor modules measured the segmental kinematics of each subject during the experimental activities. A fall event is suggested to divide into three or four phases, with the critical phase further divided into three periods (pre-impact, impact, and post-impact). Two fall models were developed for active and passive falls using acceleration data. The average PLT for active falls is about 514 ± 112 ms, smaller than the value for passive falls, 731 ± 104 ms. The longest PLTs were measured on the chest or waist rather than at other locations such as the thigh and shank. The PLTs of the three fall directions were slightly different, but there was a significant difference between the two fall modes. The PLT was correlated with the body angle at the start of PLT, but uncorrelated with the angle at the end of PLT. The angles at the start of PLT showed slight variations in passive forward falls (max 16 degrees) due to self-control. The landing angles were significantly different in
Varas, M. I.; Orteu, E.; Laserna, J. A.
2014-07-01
This paper describes the process followed in preparing the flooding manual of Cofrentes NPP: identifying the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and determining the recommended isolation mode from the point of view of the break location, the location of the 1E equipment, and human factors. (Author)
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
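The extreme value construction mentioned above can be sketched under textbook assumptions. The function below assumes a Poisson process of earthquakes with an unbounded Gutenberg-Richter magnitude law; it is not the authors' exact probability distribution, only an illustration of how a distribution for the maximum magnitude in a future window arises.

```python
from math import exp

def prob_max_magnitude_below(m, rate, b, m_min, years):
    """P(maximum magnitude <= m) in a future window, assuming a Poisson
    process of events with an unbounded Gutenberg-Richter law (textbook
    construction, not the authors' exact distribution)."""
    if m < m_min:
        return 0.0
    # Expected number of events exceeding magnitude m in the window
    n_exceed = rate * years * 10.0 ** (-b * (m - m_min))
    return exp(-n_exceed)
```

The abstract's point is visible here: for rare, large m the distribution flattens, so observations over any testable window barely constrain the tail.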
Rubin, Stephen P.; Reisenbichler, Reginald R.; Slatton, Stacey L.; Wetzel, Lisa A.; Hayes, Michael C.
2012-01-01
The accuracy of a model that predicts time between fertilization and maximum alevin wet weight (MAWW) from incubation temperature was tested for steelhead Oncorhynchus mykiss from Dworshak National Fish Hatchery on the Clearwater River, Idaho. MAWW corresponds to the button-up fry stage of development. Embryos were incubated at warm (mean=11.6°C) or cold (mean=7.3°C) temperatures and time between fertilization and MAWW was measured for each temperature. Model predictions of time to MAWW were within 1% of measured time to MAWW. Mean egg weight ranged from 0.101-0.136 g among females (mean = 0.116). Time to MAWW was positively related to egg size for each temperature, but the increase in time to MAWW with increasing egg size was greater for embryos reared at the warm than at the cold temperature. We developed equations accounting for the effect of egg size on time to MAWW for each temperature, and also for the mean of those temperatures (9.3°C).
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Piao, Daqing; Holyoak, G Reed; Patel, Sanjay
2016-01-01
We demonstrate a laparoscopic applicator probe and a method thereof for real-time en-face topographic mapping of near-surface heterogeneity for potential use in intraoperative margin assessment during minimally invasive oncological procedures. The probe fits in a 12 mm port and houses at its maximum 128 copper-coated 750 μm fibers that form radially alternating illumination (70 fibers) and detection (58 fibers) channels. By simultaneously illuminating the 70 source channels of the probe that is in contact with a scattering medium and concurrently measuring the light diffusely propagated to the 58 detector channels, the presence of near-surface optical heterogeneities can be resolved in an en-face 9.5 mm field of view in real time. Visualization of a subsurface margin of strong attenuation contrast at a depth of up to 3 mm is demonstrated at one wavelength at a frame rate of 1.25 Hz.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of the training samples, because such loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
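As a rough illustration of why the MCC objective is robust (a sketch under our own assumptions, not the paper's implementation), the Gaussian-kernel correntropy saturates for large residuals, so an outlying label can contribute at most a bounded amount to the objective, unlike a squared loss:

```python
import numpy as np

def correntropy(y_pred, y_true, sigma=1.0):
    """Empirical correntropy: mean Gaussian-kernel similarity between
    predictions and labels. A zero residual contributes 1; a large
    (outlying) residual contributes nearly 0, capping its influence."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return float(np.mean(np.exp(-r**2 / (2.0 * sigma**2))))

def mcc_objective(w, X, y, lam=0.1, sigma=1.0):
    """Regularized MCC-style objective for a linear predictor (to be
    maximized): correntropy term minus an L2 penalty on the parameters.
    Names and the L2 choice here are illustrative assumptions."""
    return correntropy(X @ w, y, sigma) - lam * float(np.dot(w, w))
```

Maximizing `mcc_objective` (e.g., by the alternating scheme the abstract mentions, not shown here) therefore down-weights samples whose labels disagree badly with the prediction.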
刘圣波; 刘贺; 赵燕东
2013-01-01
Solar photovoltaic technology has been widely used in modern agriculture, but owing to the volatility of solar power it is hard to maximize the use of solar energy. Tracking the maximum power point is a valid way to keep the output of a solar panel at a high level, and ripple correlation control (RCC) is a real-time optimal method particularly suited to power converter control. To improve the conversion efficiency of photovoltaic panels and extend the traditional analog RCC technique to the digital domain, this paper proposes a discrete-time ripple correlation control algorithm: by discretizing and simplifying the mathematical model, maximum power point tracking is recast as a discrete sampling-and-control problem. Taking the panel output voltage as the state variable, the system is sampled at the ripple maxima and minima of that voltage, and the discrete-time control law then drives the operating point rapidly to the maximum power point; at the maximum power point, the power outputs sampled at the ripple maximum and minimum are nearly the same. The algorithm was simulated in Simulink. Under 1000 and 200 W/cm2 at 25°C it tracked the maximum power point quickly and accurately, with a tracking accuracy of up to 96%, and when the irradiance changed from 1000 to 200 W/cm2 the system locked onto the new maximum power point within 0.1 s.
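The "sample power at the ripple maximum and minimum" idea can be sketched with a toy power curve. Everything below is purely illustrative: `pv_power`, the ripple amplitude and the gain are invented, not the paper's Simulink model.

```python
def pv_power(v):
    """Toy PV power curve with a single maximum (illustrative only,
    not a physical module model)."""
    i_ph, a = 5.0, 0.15
    return v * max(i_ph - a * v**2, 0.0)

def rcc_step(v_op, ripple, gain):
    """One discrete RCC-style update: sample the power at the ripple
    maximum and minimum of the operating voltage and move the operating
    point in the direction of increasing power. At the maximum power
    point the two samples are nearly equal, so the update vanishes."""
    p_hi = pv_power(v_op + ripple)
    p_lo = pv_power(v_op - ripple)
    return v_op + gain * (p_hi - p_lo)

# Iterate the update from an arbitrary starting voltage
v = 2.0
for _ in range(200):
    v = rcc_step(v, ripple=0.05, gain=0.5)
```

For this toy curve the analytic maximum power point is at v = sqrt(5/0.45) ≈ 3.33, and the iteration settles there (up to a small ripple-induced bias).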
Time Trek: a 13.7 km long nature trail leading through the history of the Universe and the Earth
Lehto, Kirsi; Lehto, Harry J.; Brozinski, Ari; Gardner, Esko; Eklund, Olav; Rajala, Kirsi; Räsänen, Matti; Sääksjärvi, Ilari; Vainio, Laura; Vuorisalo, Timo
2013-01-01
With the aim to visualize the span of time since the formation of our Universe we have set up a nature and hiking trail called `Time Trek'. The 13.7 km length of the trail corresponds to the age of the Universe, and portrays its history including events important for Earth and life. One kilometre corresponds to a billion years, and one metre to a million years of time. The trek combines astronomical, physical, geological and biological time lines, and presents a holistic view of the history of time. It helps people to comprehend the causal and temporal connections of different phenomena. To the trekker, it offers a concrete experience of the lengths and proportions of different time periods, which otherwise are very difficult to understand.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector for mitigating the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Ohnaka, K.; Weigelt, G.; Hofmann, K.-H.
2017-01-01
Aims: Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at ~2 R⋆. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) as well as high-spectral resolution long-baseline interferometric observations with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). Methods: We observed W Hya with VLT/SPHERE-ZIMPOL at three wavelengths in the continuum (645, 748, and 820 nm), in the Hα line at 656.3 nm, and in the TiO band at 717 nm. The VLTI/AMBER observations were carried out in the wavelength region of the CO first overtone lines near 2.3 μm with a spectral resolution of 12 000. Results: The high-spatial resolution polarimetric images obtained with SPHERE-ZIMPOL have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 R⋆) to the star. We detected the formation of a new dust cloud as well as the disappearance of one of the dust clouds detected at the first epoch. The Hα and TiO emission extends to ~150 mas (~6 R⋆), and the Hα images obtained at two epochs reveal time variations. The degree of linear polarization measured at minimum light, which ranges from 13 to 18%, is higher than that observed at pre-maximum light. The power-law-type limb-darkened disk fit to the AMBER data in the continuum results in a limb-darkened disk diameter of 49.1 ± 1.5 mas and a limb-darkening parameter of 1.16 ± 0.49, indicating that the atmosphere is more extended with weaker limb-darkening compared to pre-maximum light. Our Monte Carlo radiative transfer modeling shows that the second-epoch SPHERE-ZIMPOL data can be explained by a shell of 0.1 μm grains of Al2O3, Mg2SiO4, and MgSiO3 with a 550 nm optical depth of 0.6 ± 0.2 and inner and outer radii of 1.3 R⋆ and 10 ± 2 R⋆, respectively. Our modeling suggests the predominance of small (0
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
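The classic MaxEnt construction that this work generalizes can be sketched numerically. The support and constraint value below follow the standard Brandeis-dice illustration, not this paper; with an uncertain constraint, one would sample `mean_target` from its density and push each draw through this same function.

```python
import numpy as np

def maxent_dist(support, mean_target):
    """Classic MaxEnt on a finite support with one mean constraint:
    p_i is proportional to exp(lam * x_i), with the multiplier lam
    solved by bisection so the distribution's mean matches the
    constraint value, which classic MaxEnt treats as exact."""
    x = np.asarray(support, dtype=float)

    def mean_of(lam):
        w = np.exp(lam * (x - x.max()))  # shifted for numerical stability
        p = w / w.sum()
        return float(p @ x)

    lo, hi = -50.0, 50.0  # mean_of is increasing in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_of(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (x - x.max()))
    return w / w.sum()

# Brandeis dice: faces 1..6 constrained to mean 4.5 instead of the fair 3.5
p = maxent_dist(range(1, 7), 4.5)
```

With the constraint at the fair mean 3.5, the same routine recovers the uniform distribution, as MaxEnt requires.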
Mary Hokazono
CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle-cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.
Elimination of Airborne Lead Contamination from Caliber .22 Ammunition.
1987-06-01
composition is comprised of the toxic heavy metals: lead and barium. The barium, however, is considered a minor contributor to the contamination... lead styphnate priming mixture and 2.9 grains of Winchester (WC371) propellant. This propellant quantity resulted in near the maximum recommended... standard lead styphnate primer mixture to four times the volume of the lead styphnate primer mixture. The amount of new primer mixture which produced
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
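The time-zero extrapolation underlying HSQC(0) can be sketched as a simple log-linear fit, under the idealized assumption that the peak volume attenuates geometrically with each repetition of the HSQC block; the function name and the volumes below are invented for illustration, and the paper's FMLR deconvolution step is not reproduced here.

```python
import math

def hsqc0_volume(volumes):
    """Extrapolate peak volumes from spectra acquired with 1, 2, 3, ...
    repetitions of the HSQC block back to 'time zero'. Assuming a
    geometric decay V_n = V_0 * f**n, log V is linear in the repetition
    index n, and the intercept of a least-squares line gives V_0,
    which is proportional to concentration."""
    n = list(range(1, len(volumes) + 1))
    y = [math.log(v) for v in volumes]
    N = len(n)
    nbar = sum(n) / N
    ybar = sum(y) / N
    slope = sum((ni - nbar) * (yi - ybar) for ni, yi in zip(n, y)) / \
        sum((ni - nbar) ** 2 for ni in n)
    intercept = ybar - slope * nbar  # intercept = log V_0
    return math.exp(intercept)

# Invented example: volumes that decay by 20% per repetition
v0 = hsqc0_volume([80.0, 64.0, 51.2])
```

Dividing the extrapolated volume of an analyte peak by that of a reference compound of known concentration (DSS or MES in the paper) then yields an absolute concentration.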
Pontiggia, Luca; Klar, Agnieszka; Böttcher-Haberzeth, Sophie; Biedermann, Thomas; Meuli, Martin; Reichmann, Ernst
2013-03-01
Autologous dermo-epidermal skin substitutes (DESS) generated in vitro represent a promising therapeutic means to treat full-thickness skin defects in clinical practice. A serious drawback with regard to acute patients is the relatively long production time of 3-4 weeks. With this experimental study we aimed to decrease the production time of DESS without compromising their quality. Two in vitro steps of DESS construction were varied: the pre-cultivation time of fibroblasts in hydrogels (1, 3, and 6 days), and the culture time of keratinocytes (3, 6, and 12 days) before transplantation of DESS on nude rats. Additionally, the impact of the air-liquid interface culture during 3 days before transplantation was investigated. 3 weeks after transplantation, the macroscopic appearance was evaluated and histological sections were produced to analyze structure and thickness of epidermis and dermis, the stratification of the epidermis, and the presence of a basal lamina. Optimal DESS formation was obtained with a fibroblast pre-cultivation time of 6 days. The minimal culture time of keratinocytes on hydrogels was also 6 days. The air-liquid interface culture did not improve graft quality. By optimizing our in vitro culture conditions, it was possible to very substantially reduce the production time for DESS from 21 to 12 days. However, pre-cultivation of fibroblasts in the dermal equivalent and proliferation of keratinocytes before transplantation remain crucial for an equilibrated maturation of the epidermis and cannot be completely skipped.
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CC used in the design of an SFCL can be determined.
程刘胜
2015-01-01
In mining UWB high-accuracy positioning systems, underground multipath, non-line-of-sight propagation and limited network time synchronization accuracy cause large deviations in the estimated time of arrival. On the basis of a rational layout of the underground wireless network base stations, this paper proposes a maximum likelihood TOA (Time of Arrival) estimation algorithm based on multi-carrier time-frequency iteration: the fractional delay is iterated to narrow the estimation error and to determine a suitable search step length, yielding an accurate TOA estimate of the signal. Simulation results show that the time-frequency-iteration maximum likelihood TOA estimation algorithm converges faster than the non-iterative algorithm, and that at low signal-to-noise ratios it effectively improves the estimation accuracy compared with classical TOA estimation algorithms.
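The abstract does not give enough detail to reproduce the multi-carrier time-frequency iteration, so the sketch below uses a common stand-in with the same coarse-to-fine structure: the cross-correlation peak gives the integer-sample delay (the ML estimate for white noise), and a parabolic fit over the peak refines it to a fraction of a sample. The parabolic refinement is our own illustrative choice, not the paper's method.

```python
import numpy as np

def toa_estimate(tx, rx, fs):
    """Coarse-to-fine TOA sketch: locate the cross-correlation peak for
    the integer-sample delay, then refine with a parabolic fit over the
    peak and its two neighbours to get a fractional-sample delay."""
    corr = np.correlate(rx, tx, mode="full")
    k = int(np.argmax(corr))
    # Parabolic interpolation around the peak (fractional refinement)
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    delay_samples = k - (len(tx) - 1) + frac
    return delay_samples / fs
```

For example, correlating a known pulse against a copy of itself delayed by 10 samples recovers a delay of 10/fs.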
Zhang, Huan; Song, Donghan; An, Lina
2016-05-01
To study the effect of a real-time tele-transmission system of 12-lead electrocardiogram on door-to-balloon time in athletes with ST-elevation myocardial infarction (STEMI), a total of 60 athletes with chest pain diagnosed as STEMI at our hospital were randomly divided into group A (n=35) and group B (n=25). The patients in group A transmitted their 12-lead electrocardiograms to the chest pain center via the real-time tele-transmission system before arriving at the hospital, whereas the patients in group B did not. The median door-to-balloon time was significantly shorter in group A than in group B (38 min vs 94 min, p < 0.05). The median length of stay was also significantly reduced in group A (5 days vs 7 days, p < 0.05). Real-time tele-transmission of the 12-lead electrocardiogram is beneficial to the pre-hospital diagnosis of STEMI.
Vedia, V., E-mail: mv.vedia@ucm.es [Grupo de Física Nuclear, Facultad de CC. Físicas, Universidad Complutense, CEI Moncloa, ES-28040 Madrid (Spain); Mach, H. [Grupo de Física Nuclear, Facultad de CC. Físicas, Universidad Complutense, CEI Moncloa, ES-28040 Madrid (Spain); National Centre for Nuclear Research, Division for Nuclear Physics, BP1, PL-00-681 Warsaw (Poland); Fraile, L.M.; Udías, J.M. [Grupo de Física Nuclear, Facultad de CC. Físicas, Universidad Complutense, CEI Moncloa, ES-28040 Madrid (Spain); Lalkovski, S. [Faculty of Physics, University of Sofia, St. Kliment Ohridski, BG-1164 Sofia (Bulgaria)
2015-09-21
We have characterized in depth the time response of three detectors equipped with cylindrical LaBr{sub 3}(Ce) crystals with dimensions of 1-in. in height and 1-in. in diameter, and having nominal Ce doping concentrations of 5%, 8% and 10%. Measurements were performed at {sup 60}Co and {sup 22}Na γ-ray energies against a fast BaF{sub 2} reference detector. The time resolution was optimized by the choice of the photomultiplier bias voltage and the fine tuning of the parameters of the constant fraction discriminator, namely the zero-crossing and the external delay. We report here on the optimal time resolution of the three crystals. It is observed that timing properties are influenced by the amount of Ce doping and the crystal homogeneity. For the crystal with 8% of Ce doping the use of the ORTEC 935 CFD at very short delays in addition to the Hamamatsu R9779 PMT has made it possible to improve the LaBr{sub 3}(Ce) time resolution from the best literature value at {sup 60}Co photon energies to below 100 ps.
Demirel, M.C.; Booij, Martijn J.; Hoekstra, Arjen Ysbert
2013-01-01
The aim of this paper is to assess the relative importance of low flow indicators for the River Rhine and to identify their appropriate temporal lag and resolution. This is done in the context of low flow forecasting with lead times of 14 and 90 days. First, the Rhine basin is subdivided into seven
Gautam, R; Boerlage, A S; Vanderstichel, R; Revie, C W; Hammell, K L
2016-11-01
Treatment efficacy studies typically use pre-treatment sea lice abundance as the baseline. However, the pre-treatment counting window often varies from the day of treatment to several days before treatment. We assessed the effect of lead time on baseline estimates, using historical data (2010-14) from a sea lice data management programme (Fish-iTrends). Data were aggregated at the cage level for three life stages: (i) chalimus, (ii) pre-adult and adult male and (iii) adult female. Sea lice counts were log-transformed, and mean counts by lead time relative to treatment day were computed and compared separately for each life stage, using linear mixed models. There were 1,658 observations (treatment events) from 56 sites in 5 Bay Management Areas. Our study showed that lead time had a significant effect on the estimated sea lice abundance, which was moderated by season. During the late summer and autumn periods, counting on the day of treatment gave significantly higher values than other days and would be a more appropriate baseline estimate, while during spring and early summer abundance estimates were comparable among counts within 5 days of treatment. A season-based lead time window may be most appropriate when estimating baseline sea lice levels.
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Ohnaka, Keiichi; Hofmann, Karl-Heinz
2016-01-01
Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at ~2 Rstar. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) in the continuum (645, 748, and 820 nm), in the Halpha line (656.3 nm), and in the TiO band (717 nm) as well as high-spectral resolution long-baseline interferometric observations in 2.3 micron CO lines with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). The high-spatial resolution polarimetric images have allowed us to detect clear time variations in the clumpy dust clouds as close as 34--50~mas (1.4--2.0 Rstar) to the star. We detected the formation of a new dust cloud and the disappearance of one of the dust clouds detected at the first epoch. The Halpha and TiO emission extends to ~150 mas (~6 Rstar), and the Halpha images reveal time variations. The degree of linear polarization is higher at mi...
Pugatch, Rami
2015-02-24
Bacterial self-replication is a complex process composed of many de novo synthesis steps catalyzed by a myriad of molecular processing units, e.g., the transcription-translation machinery, metabolic enzymes, and the replisome. Successful completion of all production tasks requires a schedule: a temporal assignment of each of the production tasks to its respective processing units that respects ordering and resource constraints. Most intracellular growth processes are well characterized. However, the manner in which they are coordinated under the control of a scheduling policy is not well understood. When fast replication is favored, a schedule that minimizes the completion time is desirable. However, if resources are scarce, it is typically computationally hard to find such a schedule, in the worst case. Here, we show that optimal scheduling naturally emerges in cellular self-replication. Optimal doubling time is obtained by maintaining a sufficiently large inventory of intermediate metabolites and processing units required for self-replication and additionally requiring that these processing units be "greedy," i.e., not idle if they can perform a production task. We calculate the distribution of doubling times of such optimally scheduled self-replicating factories, and find it has a universal form, log-Frechet, which is not sensitive to many microscopic details. Analyzing two recent datasets of Escherichia coli growing in a stationary medium, we find excellent agreement between the observed doubling-time distribution and the predicted universal distribution, suggesting E. coli is optimally scheduling its replication. Greedy scheduling appears as a simple generic route to optimal scheduling when speed is the optimization criterion. Other criteria such as efficiency require more elaborate scheduling policies and tighter regulation.
Jasial, Swarit; Hu, Ye; Bajorath, Jürgen
2016-02-22
The increase in compounds with activity against five major therapeutic target families has been quantified on a time scale and investigated employing a compound-scaffold-cyclic skeleton (CSK) hierarchy. The analysis was designed to better understand possible reasons for target-dependent growth of bioactive compounds. There was strong correlation between compound and scaffold growth across all target families. Active compounds becoming available over time were mostly represented by new scaffolds. On the basis of scaffold-to-compound ratios, new active compounds were structurally diverse and, on the basis of CSK-to-scaffold ratios, often had previously unobserved topologies. In addition, novel targets emerged that complemented major families. The analysis revealed that compound growth is associated with increasing chemical diversity and that current pharmaceutical targets are capable of recognizing many structurally different compounds, which provides a rationale for the rapid increase in the number of bioactive compounds over the past decade. In light of these findings, it is likely that new chemical entities will be discovered for many small molecule targets including relatively unexplored ones as well as for popular and well-studied therapeutic targets. Moreover, given the wealth of new "active scaffolds" that have been increasingly identified for many targets over time, computational scaffold-hopping exercises should generally have a high likelihood of success.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
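A toy version of the Glauber-style sampling primitive this line of work builds on might look like the following (a sketch of the standard monomer-dimer chain with invented parameters; the paper's actual chain and its analysis are more involved): pick a random edge, drop it with probability 1/(1+λ) if it is in the matching, and add it with probability λ/(1+λ) if both endpoints are free. Every state visited is a valid matching by construction, and large λ biases the stationary distribution toward large matchings.

```python
import random

def glauber_matching(edges, n_vertices, lam, steps, seed=0):
    """Glauber dynamics on matchings (monomer-dimer model with fugacity
    lam). Each step proposes a single-edge update, so validity of the
    matching is preserved throughout."""
    rng = random.Random(seed)
    matched = [False] * n_vertices
    matching = set()
    for _ in range(steps):
        e = rng.choice(edges)
        u, v = e
        if e in matching:
            if rng.random() < 1.0 / (1.0 + lam):
                matching.remove(e)
                matched[u] = matched[v] = False
        elif not matched[u] and not matched[v]:
            if rng.random() < lam / (1.0 + lam):
                matching.add(e)
                matched[u] = matched[v] = True
    return matching
```

On a 4-vertex path, for instance, any state the chain reaches uses each vertex at most once, so its size never exceeds the maximum matching size of 2.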
Vidalis, M.
2012-01-01
In this work a two-echelon merge supply chain is examined. Specifically, two non-identical reliable suppliers feed a distribution centre with a shared buffer. The first echelon consists of the distribution centre and the shared buffer; the second echelon includes the two non-identical reliable suppliers. There is an unlimited supply of materials to the suppliers and an unlimited-capacity shipping area after the distribution centre. In other words, the suppliers are never starved, and the distribution centre is never blocked. The materials are processed by the suppliers at rates following the Erlang distribution. The distribution centre has a reliable machine that pushes material with service times following the Erlang distribution. Blocking appears when one or more suppliers finish their process and try to feed a buffer that is full. The supply network is modelled as a continuous-time Markov process with discrete states. The structures of the transition matrices of these systems are explored and a computational algorithm is developed. Our aim is to generate stationary distributions for different values of the system's parameters so that the various measures of the system can be estimated. Finally, the Matlab software is used for the mathematical programming model and the rest of the calculations.
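Independently of the specific Erlang supply-chain model, the final computational step described above, solving a continuous-time Markov chain for its stationary distribution, can be sketched generically; the two-state generator below is a toy availability example, not the paper's model (which uses Matlab rather than Python).

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution of a continuous-time Markov chain with
    generator matrix Q (rows sum to zero): solve pi @ Q = 0 together
    with sum(pi) = 1 by appending the normalization equation and
    solving in the least-squares sense."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 2-state machine: up -> down at rate 1, down -> up at rate 3
Q = np.array([[-1.0, 1.0],
              [3.0, -3.0]])
pi = stationary_distribution(Q)
```

For this toy generator the machine is up 3/4 of the time; performance measures (throughput, blocking probabilities, etc.) are then expectations under the stationary distribution.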
Reddy, Aravind; Braun, Charles L.
2010-01-01
Lead poisoning has been a problem since early history and continues into modern times. An appealing characteristic of lead is that many lead salts are sweet. In the absence of cane and beet sugars, early Romans used "sugar of lead" (lead acetate) to sweeten desserts, fruits, and sour wine. People most at risk would have been those who…
Tsikrika, Konstantina; Lemos, M. Adília; Chu, Boon-Seang; Bremner, David H.; Hungerford, Graham
2017-02-01
The application of ultrasound to a solution can induce cavitational phenomena and generate high localised temperatures and pressures. These effects depend on the frequency used and have enabled the application of ultrasound in areas such as synthetic, green and food chemistry. High frequency (100 kHz to 1 MHz) in particular is promising in food chemistry as a means to inactivate enzymes, replacing the need for periods of high temperature. A plant enzyme, horseradish peroxidase, was studied using time-resolved fluorescence techniques as a means to assess the effect of high-frequency (378 kHz and 583 kHz) ultrasound treatment at equivalent acoustic powers. This uncovered the fluorescence emission of a newly formed species, attributed to the formation of di-tyrosine within the horseradish peroxidase structure caused by auto-oxidation, and linked to enzyme inactivation.
Stackelberg Decision in a Supply Chain Based on Controllable Lead Time
何景师; 戴航; 张智勇
2012-01-01
Time has become a bottleneck element of agile supply chain operation in practice. For the Stackelberg game between the upstream and downstream stages of a supply chain, a supply chain model based on controllable lead time was constructed. Through analysis of this model, the paper examines the influence of lead-time crashing cost sharing and delay penalties on the lead time, order quantity and supply chain cost. It is shown that an optimal lead time and order quantity exist in the supply chain, from which the supply chain's optimal decision can be obtained. The results of a numerical example show that the model can optimize the decision making of the supply chain's upstream and downstream enterprises, thereby providing scientific evidence for planners to develop effective solutions.
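The trade-off at the heart of controllable-lead-time models can be illustrated numerically: shortening the lead time reduces safety stock (which scales with the square root of the lead time) but incurs a crashing cost. The cost function, parameter values and grid below are assumptions for illustration only, not the paper's Stackelberg formulation.

```python
import math

def annual_cost(Q, L, D=1000.0, A=50.0, h=2.0, sigma=7.0, k=1.645,
                crash_cost_per_week=10.0, L_max=8.0):
    """Buyer's expected annual cost in a simple controllable-lead-time model.

    Q: order quantity, L: lead time in weeks.  Components: ordering
    cost (D/Q)*A, holding cost on cycle stock Q/2 plus safety stock
    k*sigma*sqrt(L), and a linear per-order cost for crashing the
    lead time from L_max down to L.  All parameters are assumed.
    """
    ordering = D / Q * A
    holding = h * (Q / 2.0 + k * sigma * math.sqrt(L))
    crashing = D / Q * crash_cost_per_week * (L_max - L)
    return ordering + holding + crashing

# Grid search over the joint (Q, L) decision.
best = min((annual_cost(Q, L), Q, L)
           for Q in range(50, 501, 10)
           for L in (2.0, 4.0, 6.0, 8.0))
cost, Q_opt, L_opt = best
```

A grid search is enough to show that the minimum is interior: neither the longest nor an arbitrary lead time is optimal once crashing cost and safety stock are both priced in.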
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
We use statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Centre for Medium-Range Weather Forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop probabilistic forecasts, which result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1-2 m s−1. We show the results of their predictive accuracy for different lead times and different training methodologies.
Jin F
2016-05-01
Feng Jin,1,2 Hui Zhu,2 Zheng Fu,3 Li Kong,2 Jinming Yu2 1School of Medicine and Life Sciences, University of Jinan-Shandong Academy of Medical Sciences; 2Department of Radiation Oncology, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences; 3Department of Nuclear Medicine, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences, Jinan, People's Republic of China. Purpose: The purpose of this study was to investigate the prognostic value of the change in maximum standardized uptake value (SUVmax) calculated by dual-time-point 18F-fluorodeoxyglucose positron emission tomography (PET) imaging in patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: We conducted a retrospective review of 115 patients with advanced NSCLC who underwent pretreatment dual-time-point 18F-fluorodeoxyglucose PET acquired at 1 and 2 hours after injection. The SUVmax from early images (SUVmax1) and from delayed images (SUVmax2) were recorded and used to calculate the SUVmax changes, including the SUVmax increment (ΔSUVmax) and the percent change of the SUVmax (%ΔSUVmax). Progression-free survival (PFS) and overall survival (OS) were determined by the Kaplan-Meier method and were compared with the studied PET parameters and the clinicopathological prognostic factors in univariate analyses; multivariate analyses were constructed using Cox proportional hazards regression. Results: One hundred and fifteen consecutive patients were reviewed, and the median follow-up time was 12.5 months. The estimated median PFS and OS were 3.8 and 9.6 months, respectively. In univariate analysis, SUVmax1, SUVmax2, ΔSUVmax, %ΔSUVmax, clinical stage, and Eastern Cooperative Oncology Group (ECOG) scores were significant prognostic factors for PFS. Similar results were significantly correlated with OS, except %ΔSUVmax. In multivariate analysis, ΔSUVmax and %ΔSUVmax were significant
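The survival analyses in abstracts like this one rest on the Kaplan-Meier product-limit estimator, which can be sketched in a few lines of pure Python. The function and the sample data below are illustrative, not the study's data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times: observed follow-up times; events: 1 = event, 0 = censored.
    Returns a list of (t, S(t)) pairs at each distinct event time,
    where S is multiplied by (1 - d/n) at every event time
    (d events among n subjects still at risk).
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t)             # all leaving at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= c
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# made-up follow-up data (months), 0 marks a censored observation
times = [3, 5, 5, 8, 10, 12]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

On this toy data the survival estimate steps down at months 3, 5, 8 and 12, reaching zero at the last observed event.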
The problem of the maximum volumes and particle horizon in the Friedmann universe model
Gong, S. M.
1989-08-01
The maximum volume of the closed Friedmann universe is further investigated and is shown to be 2π²R³(t), instead of π²R³(t) as found previously. This discrepancy comes from the incomplete use of the volume formula of 3-dimensional spherical space in the astronomical literature. Mathematically, the maximum volume exists at any cosmic time t in a 3-dimensional spherical case. However, the expanding closed Friedmann universe reaches its maximum volume only at the time of the maximum scale factor. The particle horizon places no limitation on the farthest objects in the closed Friedmann universe if the proper distance of objects is compared with the particle horizon, as it should be. It leads to absurdity if the luminosity distance of objects is compared with the proper distance of the particle horizon.
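The corrected figure follows directly from the volume element of the 3-sphere of radius R, integrating the area of 2-spheres of radius R sin χ over the full polar range χ ∈ [0, π]:

```latex
V \;=\; \int_0^{\pi} 4\pi \,(R\sin\chi)^2 \, R\,d\chi
  \;=\; 4\pi R^3 \int_0^{\pi} \sin^2\chi \, d\chi
  \;=\; 4\pi R^3 \cdot \frac{\pi}{2}
  \;=\; 2\pi^2 R^3 .
```

Integrating only to χ = π/2, i.e. over half the space, yields π²R³, which is the incomplete value the abstract says appeared in the earlier literature.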
Lead is a metal that occurs naturally in the earth's crust. Lead can be found in all parts of our ... from human activities such as mining and manufacturing. Lead used to be in paint; older houses may ...
Anderson, S.; Tootle, G.; Parkinson, S.; Holbrook, P.; Blestrud, D.
2012-12-01
Water managers and planners in the western United States are challenged with managing resources for various uses, including hydropower. Hydropower is especially important throughout the Upper Snake River Basin, where a series of hydropower projects provide a low-cost renewable energy source to the region. These hydropower projects include several dams that are managed by Idaho Power Company (IPC). Planners and managers rely heavily on forecasts of snowpack and precipitation to plan for hydropower availability and the need for other generation sources. There is a pressing need for improved snowpack and precipitation forecast models in the Upper Snake River Basin. This research investigates the ability of Pacific oceanic-atmospheric data and climatic variables to provide skillful long lead-time (three to nine months) forecasts of snowpack and precipitation, and examines the benefits of segregating the warm and cold phases of the Pacific Decadal Oscillation (PDO) to reduce the temperature variability within the target dataset. Singular value decomposition (SVD) was used to identify regions of Pacific Ocean sea surface temperatures (SST) and 500 mbar geopotential heights (Z500) for various lead times (three, six, and nine months) that were teleconnected with snowpack and precipitation stations in the Upper Snake River Basin headwaters. The identified Pacific Ocean SST and Z500 regions were used to create indices that became predictors in a non-parametric forecasting model. The majority of forecasts resulted in positive statistical skill, which indicated an improvement of the forecast over the climatology forecast (no-skill forecast). The results from the forecast models indicated that indices derived from the SVD analysis yielded improved forecast skill when compared to forecasts using established climate indices. Segregation of the cold-phase PDO years resulted in the identification of different regions in the Pacific Ocean and vastly improved skill for the nine month
Muryshev, Kirill E.; Eliseev, Alexey V.; Mokhov, Igor I.; Timazhev, Alexandr V.
2017-01-01
By employing an Earth system model of intermediate complexity (EMIC) developed at the A.M. Obukhov Institute of Atmospheric Physics, Russian Academy of Sciences (IAP RAS CM), mutual lags between global mean surface air temperature, T, and the atmospheric CO2 content, q, are explored in dependence on the type and time scale of the external forcing. In the simulation, which follows the protocol of the Coupled Models Intercomparison Project, phase 5, T leads q for volcanically-induced climate variations. In contrast, T lags behind q for changes caused by anthropogenic CO2 emissions into the atmosphere. In additional idealized numerical experiments, driven by periodic external emissions of carbon dioxide into the atmosphere, T always lags behind q, as expected. In contrast, if the model is driven by periodic non-greenhouse radiative forcing, T leads q for external forcing time scales ≤ 4×10² yr, while q leads T at longer scales. The latter is an example that lagged correlations do not necessarily represent causal relationships in a system. This apparently counter-intuitive result, however, is a direct consequence of i) the temperature sensitivity of the soil carbon stock (which decreases if climate is warmed and increases if climate is cooled), ii) conservation of the total mass of carbon in the system in the absence of external carbon emissions, and iii) the increased importance of the oceanic branch of the carbon cycle at longer time scales. The results obtained with an EMIC are further interpreted with a conceptual Earth system model consisting of an energy balance climate model and a globally averaged carbon cycle model. The obtained results have implications for empirical studies attempting to understand the origins of contemporary climate change by applying lead-lag relationships to empirical data.
Hodgson, Dominic A.; Graham, Alastair G. C.; Roberts, Stephen J.; Bentley, Michael J.; Cofaigh, Colm Ó.; Verleyen, Elie; Vyverman, Wim; Jomelli, Vincent; Favier, Vincent; Brunstein, Daniel; Verfaillie, Deborah; Colhoun, Eric A.; Saunders, Krystyna M.; Selkirk, Patricia M.; Mackintosh, Andrew; Hedding, David W.; Nel, Werner; Hall, Kevin; McGlone, Matt S.; Van der Putten, Nathalie; Dickens, William A.; Smith, James A.
2014-09-01
This paper is the maritime and sub-Antarctic contribution to the Scientific Committee for Antarctic Research (SCAR) Past Antarctic Ice Sheet Dynamics (PAIS) community Antarctic Ice Sheet reconstruction. The overarching aim for all sectors of Antarctica was to reconstruct the Last Glacial Maximum (LGM) ice sheet extent and thickness, and map the subsequent deglaciation in a series of 5000 year time slices. However, our review of the literature found surprisingly few high quality chronological constraints on changing glacier extents on these timescales in the maritime and sub-Antarctic sector. Therefore, in this paper we focus on an assessment of the terrestrial and offshore evidence for the LGM ice extent, establishing minimum ages for the onset of deglaciation, and separating evidence of deglaciation from LGM limits from those associated with later Holocene glacier fluctuations. Evidence included geomorphological descriptions of glacial landscapes, radiocarbon dated basal peat and lake sediment deposits, cosmogenic isotope ages of glacial features and molecular biological data. We propose a classification of the glacial history of the maritime and sub-Antarctic islands based on this assembled evidence. These include: (Type I) islands which accumulated little or no LGM ice; (Type II) islands with a limited LGM ice extent but evidence of extensive earlier continental shelf glaciations; (Type III) seamounts and volcanoes unlikely to have accumulated significant LGM ice cover; (Type IV) islands on shallow shelves with both terrestrial and submarine evidence of LGM (and/or earlier) ice expansion; (Type V) Islands north of the Antarctic Polar Front with terrestrial evidence of LGM ice expansion; and (Type VI) islands with no data. Finally, we review the climatological and geomorphological settings that separate the glaciological history of the islands within this classification scheme.
Jia, Qianjun; Chen, Ziman; Jiang, Xianxian; Zhao, Zhenjun; Huang, Meiping; Li, Jiahua; Zhuang, Jian; Liu, Xiaoqing; Hu, Tianyu; Liang, Wensheng
2017-02-01
Operator radiation and the radiation protection efficacy of a ceiling-suspended lead screen were assessed during coronary angiography (CA) in a catheterization laboratory. An anthropomorphic phantom was placed under the X-ray beam to simulate patient attenuation in eight CA projections. Using real-time dosimeters, radiation dose rates were measured on models mimicking a primary operator (PO) and an assistant. Subsequently, a ceiling-suspended lead screen was placed in three commonly used positions to compare the radiation protection efficacy. The radiation exposure to the PO was 2.3 to 227.9 (mean: 67.2 ± 49.0) μSv/min, with the left anterior oblique (LAO) 45°/cranial 25° and cranial 25° projections causing the highest and the lowest dose rates, respectively. The assistant experienced significantly less radiation overall (mean: 20.1 ± 19.6 μSv/min). With shielding, the ceiling-suspended lead screen reduced the radiation to the PO by 76.8%, 81.9% and 93.5% when placed close to the patient phantom, at the left side and close to the PO, respectively, and reduced the radiation to the assistant by 70.3%, 76.7% and 90.0%, respectively. When placed close to the PO, a ceiling-suspended lead screen provides substantial radiation protection during CA.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
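The chemical-graph-theory connection rests on the Wiener index, the sum of shortest-path distances over all vertex pairs. A small sketch below computes it by breadth-first search and checks the textbook fact that, among trees on four vertices, the path beats the star; the graphs and names are illustrative, not the paper's algorithm.

```python
from collections import deque
from itertools import combinations

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all vertex pairs
    of an unweighted connected graph, adj: node -> list of neighbours."""
    def bfs(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        return dist

    return sum(bfs(a)[b] for a, b in combinations(list(adj), 2))

# Path P4 versus star K_{1,3}: both are trees on 4 vertices, but the
# path spreads the leaves out and so has the larger Wiener index.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert wiener_index(path) == 10
assert wiener_index(star) == 9
```

Maximizing this quantity over all trees with a prescribed degree sequence is the open problem the abstract says the QAP result settles.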
Andrey Domingues de Lima
2013-03-01
The efficient and effective management of lead time can create competitive advantages for companies. One approach concerned with reducing lead time is Quick Response Manufacturing (QRM). The aim of this paper is to propose the application of QRM principles and techniques to reduce the lead time of the budgeting process of a writing-materials manufacturer located in Brazil. To this end, theoretical-conceptual research and a case study were used as research procedures. It was found that the studied company could obtain significant benefits by adopting this approach. The expected results of implementing the proposal show a reduction of 38.1% in the total lead time of the company's budgeting process. This paper contributes to a wider dissemination and understanding of the QRM approach and its benefits.
Horst Koch
2013-03-01
The high speed of biological processes such as photosynthesis, enzymatic reactions or neuronal activity cannot be completely explained on the basis of classical physical approaches. Various quantum biology effects, such as tunnelling, have been postulated. We hypothetically admit that deceleration of electron velocity, based on the wave-particle duality of electrons, leads to time acceleration. Deceleration from a light-like state towards a particle-like state may therefore speed up biochemical or biophysical reactions at the atomic or molecular scale. Electrophysiological and biological phenomena are discussed on the basis of this hypothesis.
Miguel Afonso Sellitto
2008-04-01
This paper presents a method for measuring lead time and work-in-process in manufacturing. The measurement was used in an organizational control process in manufacturing in which the controlled variable was the order lead time. This kind of control can be useful in manufacturing strategies formulated to compete on time, i.e., time-based competition (TBC). We present the method, which includes elements from queuing theory and considerations for simplifying complex manufacturing arrays in order to facilitate the analysis. To test and refine the method, we studied a case in a footwear manufacturing system. We collected data from shipments and obtained, by statistical analysis and computational simulation, the behaviour of the random variable shipment lead time. As the measurement fell short of the performance objective, we proceeded to control, making a diagnosis that identified undesirable effects, which were addressed by corrective actions that were then implemented. To verify the effectiveness of the actions, a new data collection was carried out; this time the delivery performance objective was met, closing a control cycle. The results were discussed, leading to conclusions and alternatives for further work.
Juan Luis Guerra-Tamayo
2003-01-01
OBJECTIVE: To determine the effects of lead exposure on the time elapsed to become pregnant. MATERIAL AND METHODS: The study population consisted of 142 women residing in Mexico City between 1997 and 2001, who were already participating in a study to evaluate effects of lead exposure on reproductive health. Measurements of lead in bone were performed when women were first admitted to the program. Information on lead exposure and other variables of interest was obtained through a questionnaire. Participants were followed up to assess the relationship between the time required to become pregnant and lead exposure. Statistical analysis consisted of Kaplan-Meier estimates and Cox proportional hazards models. RESULTS: Of the total number of women in the program, 42 got pregnant: 34 before the first year of follow-up, and 8 at a later date. The mean value for lead concentration in blood was 9.3 µg/dl. The mean values for lead concentration in patella and tibia were 16.0 and 11.0 µg Pb/g of bone, respectively. Survival analysis was performed and no differences were detected in blood lead levels and time to pregnancy in the first year. Nevertheless, in women with blood lead levels above 10.0 µg/dl, the likelihood of not achieving pregnancy was five times higher (95% confidence interval [CI] 0.05-0.56) after one year of follow-up compared with women with blood lead levels below 10.0 µg/dl. CONCLUSIONS: Exposure to high lead concentrations may be an important risk factor influencing the time period for a woman to get pregnant, especially in fertile women who have tried to get pregnant for more than a year.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Li-Hao Zhang
2014-01-01
Radio-frequency identification (RFID), as a key technology of the Internet of Things (IoT), has been hailed as a major innovation to solve misplaced inventory and reduce lead time. Many retailers have been pushing their suppliers to invest in this technology. However, its associated costs seem to prohibit its widespread application. This paper analyzes the service level in a retail supply chain affected by misplaced inventory and lead time. Using a newsvendor model, we analyze the difference in service level with and without RFID technology in centralized and decentralized supply chains, respectively. Then, for different service levels, we determine the tag cost thresholds at which RFID technology investment becomes profitable in centralized and decentralized supply chains, respectively. Furthermore, we apply a linear transfer payment coefficient strategy to coordinate the decentralized supply chain. It is found that whether the adoption of RFID technology improves the service level depends on the cost of the RFID tag in the centralized system, but it improves the service level in the decentralized system when only the supplier bears the cost of the RFID tag. Moreover, the same cost thresholds of the RFID tag with different service levels exist in both the centralized and the decentralized cases.
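The effect of misplaced inventory on service level can be illustrated with a minimal Monte-Carlo newsvendor sketch: only a fraction theta of the ordered units is actually on the shelf, and RFID is modeled as restoring theta to 1. The demand distribution, theta value and all names below are assumptions for illustration, not the paper's model.

```python
import random

def service_level(Q, theta, demand_sampler, n=20_000, seed=1):
    """Monte-Carlo fill rate: of the Q units ordered, only theta*Q are
    actually available for sale (the rest are misplaced)."""
    rng = random.Random(seed)
    filled = total = 0.0
    for _ in range(n):
        d = demand_sampler(rng)
        filled += min(d, theta * Q)
        total += d
    return filled / total

# assumed demand: uniform on [50, 150] per period
demand = lambda rng: rng.uniform(50.0, 150.0)

without_rfid = service_level(120, theta=0.85, demand_sampler=demand)
with_rfid = service_level(120, theta=1.0, demand_sampler=demand)
```

With the same order quantity, removing misplacement (theta = 1) strictly raises the fill rate; whether the gain exceeds the tag cost is the threshold question the abstract studies.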
... lead is of microscopic size, invisible to the naked eye. More often than not, children with elevated ... majority of the childhood lead poisoning cases we see today. Children and adults too can get seriously ...
2015-01-01
This first chapter presents the exploratory and curious approach to leading as relational processes, an approach that pervades the entire book. We explore leading from a perspective that emphasises the unpredictable challenges and triviality of everyday life, which we consider an interesting, relevant and realistic way to examine leading. The chapter brings up a number of concepts and contexts as formulated by researchers within the field, and in this way seeks to construct a first understanding of relational leading.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
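An envelope curve of the kind described can be sketched as a power law Q = c·A^b fitted so that no observed flood exceeds it: with the exponent fixed, the coefficient is set by the most extreme observation. The exponent, the records and the function name below are made up for illustration; they are not Crippen and Bue's regional curves.

```python
def envelope_curve(areas, peaks, b=0.6):
    """Upper envelope Q = c * A**b over observed (area, peak) records.

    With the slope b fixed (an assumed value), choose the smallest
    coefficient c such that every observed peak lies on or below
    the curve: c = max(Q_i / A_i**b).
    """
    c = max(q / a ** b for a, q in zip(areas, peaks))
    return lambda a: c * a ** b

# made-up flood records: (drainage area in mi^2, peak discharge in cfs)
areas = [10, 50, 120, 700, 2500]
peaks = [2800, 9000, 15000, 52000, 110000]
env = envelope_curve(areas, peaks)
assert all(q <= env(a) + 1e-9 for a, q in zip(areas, peaks))
```

By construction the curve passes through the single most extreme record and bounds all the others, which is exactly the "reasonable limit" role the abstract describes.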
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Carla Aparecida Cielo
2012-06-01
PURPOSE: to determine and correlate the maximum phonation times (MPT) of vowels, vital capacity (VC) and laryngeal disorders (LD) in women with benign organic lesions resulting from vocal misuse or abuse (BOL). METHOD: retrospective, transverse, exploratory, non-experimental, quantitative study, using a measurement database of MPT [a, i, u], VC and LD of women with BOL; the Chi-square and Fisher's exact tests were used to investigate the differences between the variables and their relationships, and a binomial test to check the significance of the proportions in the descriptive analysis, with p < 0.05. RESULTS: the majority (22; 75.86%) showed significantly reduced MPT (p = 0.0053) and seven (24.14%) showed normal MPT. Normal VC was statistically significant (p = 0.0001) (26; 89.66%), but three women (10.34%) showed reduced VC. There was a significant predominance of vocal nodules (p = 0.0016) (22; 75.86%), followed by Reinke's edema (6; 20.69%) and vocal polyp (1; 3.45%). Among the 22 women (75.86%) with reduced MPT, normal VC predominated (19; 86.36%), although without statistical significance (p = 0.558). All individuals with normal MPT showed normal VC (7; 100%). The majority with BOL showed normal VC, although not statistically significant (p = 0.199). There was a predominance of vocal nodules and reduced MPT (16; 72
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
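The multiclass margin that such discriminative training maximizes is, per sample, the gap between the log-posterior of the true class and that of the best competing class. The sketch below computes this quantity for a hypothetical discrete naive Bayes model (not the paper's conjugate-gradient optimizer); it also shows why keeping the parameters normalized lets missing features be handled by simply skipping them. All names and parameter values here are illustrative.

```python
import math

# cpts[i][y][v]: assumed table P(feature i = v | class y); priors[y]: P(class y).
def log_joint(priors, cpts, y, x):
    """log p(y) + sum_i log p(x_i | y) for a discrete naive Bayes.
    A missing feature (x_i is None) is skipped, i.e. marginalized out --
    valid only because the parameters remain normalized distributions."""
    lp = math.log(priors[y])
    for i, xi in enumerate(x):
        if xi is not None:
            lp += math.log(cpts[i][y][xi])
    return lp

def multiclass_margin(priors, cpts, y_true, x):
    """Per-sample log-margin: log p(y_true, x) - max over other classes of log p(c, x).
    Maximum-margin parameter learning pushes this quantity to be large and positive."""
    classes = range(len(priors))
    best_other = max(log_joint(priors, cpts, c, x) for c in classes if c != y_true)
    return log_joint(priors, cpts, y_true, x) - best_other
```

With a symmetric two-class, one-feature model, an observed feature yields a margin of log(0.9/0.1), while a missing feature collapses the margin to zero, since only the (equal) priors remain.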
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCLs) can reduce short-circuit currents in electrical power systems. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductor (CC) does not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiments are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
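The bound stated above (maximum seismic moment limited to injected volume times the modulus of rigidity) is easy to evaluate numerically. A minimal sketch, assuming a typical crustal rigidity of 3 × 10^10 Pa, an illustrative injected volume of 10^6 m^3, and the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1):

```python
import math

def max_seismic_moment(volume_m3, rigidity_pa=3.0e10):
    """Upper bound on seismic moment (N*m): M0 <= G * dV, with G the modulus
    of rigidity and dV the total injected fluid volume."""
    return rigidity_pa * volume_m3

def moment_magnitude(m0_nm):
    """Standard moment magnitude Mw from seismic moment M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Illustrative wastewater-disposal volume (assumed, not from the case histories)
m0 = max_seismic_moment(1.0e6)   # 3e16 N*m
mw = moment_magnitude(m0)        # about Mw 4.9
```

A million cubic metres of injected fluid thus caps the moment near 3 × 10^16 N·m, i.e. roughly magnitude 5, consistent with the largest wastewater-disposal events mentioned in the abstract.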
Maximum magnitude earthquakes induced by fluid injection
McGarr, A.
2014-02-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
Zhang, Wen; Xu, Yiwei; Zou, Xiaobo; Wang, Ping
2017-11-15
A novel method of real-time-range measurement, characterized by pre-sampling forecast and balanced-range switching, is introduced in this study. With this method, raw current signals in biosensing procedures can be measured and recorded with real-time-optimized instrument settings. A low-cost, high-performance potentiostat is developed to validate the proposed method. The transient process of real-time-range measurement is investigated to optimize the sampling interval and circuit parameters. Typical time consumption of a sampling cycle is less than 100 μs, which makes high-speed, real-time-range measurement possible. The proposed method also delivers excellent current resolution, better than 0.8 pA. It improves weak signals in stripping determinations and is particularly suitable for biological samples. The as-fabricated potentiostat, coupled to a nano-Au-modified microband electrode array, is adopted in high-speed stripping determinations of human blood lead levels (HBLLs). Accuracy and precision of the method are validated with a certified reference material (CRM): obtained values (4.31 ± 0.18 μg L(-1)) agree with the certified level (4.24 ± 0.11 μg L(-1)). The coefficient of variation (CV%) is no more than 5.0% for intra- and inter-assay analyses. Finally, the method is applied in a human-population-based study. Two groups of data, from this method and from inductively coupled plasma mass spectrometry (ICP-MS), are compared with a t-test, and no statistically significant difference is found. Copyright © 2017 Elsevier B.V. All rights reserved.
S. Priyan
2016-12-01
Full Text Available Industrial decision making ordinarily involves both quantitative and qualitative input factors, such as ordering cost and setup cost, so practitioners should account for imprecision in these inputs. This paper optimizes an inventory system using a mathematical model with variable lead time and a service level constraint (SLC) in a fuzzy cost environment. Fuzziness is introduced by allowing the cost components to be imprecise and vague to a certain extent; trapezoidal and triangular fuzzy numbers are used to represent these characteristics. The signed distance method is used to defuzzify the fuzzy joint total expected cost, and a differential calculus optimization technique is adopted to find the optimal solutions of the model. Numerical results highlighting the sensitivity of the decision variables are also described.
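The signed distance defuzzification mentioned above has a simple closed form for the fuzzy number shapes the paper uses: for a trapezoidal fuzzy number (a, b, c, d) the Yao-Wu signed distance from zero reduces to (a + b + c + d)/4, and for a triangular number (a, b, c) to (a + 2b + c)/4. A minimal sketch (the cost figures are hypothetical, not from the paper):

```python
def signed_distance_trapezoidal(a, b, c, d):
    """Yao-Wu signed distance of trapezoidal fuzzy number (a,b,c,d) from 0:
    (1/2) * integral over alpha in [0,1] of [left(alpha) + right(alpha)]
    = (a + b + c + d) / 4."""
    return (a + b + c + d) / 4.0

def signed_distance_triangular(a, b, c):
    """Triangular case (a,b,c) collapses to (a + 2b + c) / 4."""
    return (a + 2.0 * b + c) / 4.0

# Hypothetical fuzzy ordering cost: "around 195-210, surely within [180, 230]"
crisp_cost = signed_distance_trapezoidal(180, 195, 210, 230)  # 203.75
```

The defuzzified value can then be plugged into the crisp total-cost expression before taking derivatives with respect to the decision variables.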
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
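The simplest relation behind such SPL-versus-distance reasoning is the free-field inverse-square law, SPL(r) = SPL(1 m) − 20 log10(r). The sketch below is illustrative only (the dissertation itself uses cone-tracing models and STI thresholds); the 60 dBA intelligibility floor is an assumed round number, and real crowds add noise and absorption:

```python
import math

def spl_at_distance(spl_1m_db, r_m):
    """Free-field SPL at r metres from a point source (inverse-square law)."""
    return spl_1m_db - 20.0 * math.log10(r_m)

def max_radius(spl_1m_db, threshold_db):
    """Distance at which free-field SPL decays to the given threshold."""
    return 10.0 ** ((spl_1m_db - threshold_db) / 20.0)

# A 90 dBA voice at 1 m falls to an assumed 60 dBA floor at ~31.6 m in free field
r = max_radius(90.0, 60.0)
```

This shows why unaided intelligible range is so sensitive to background noise: every 6 dB of extra crowd noise halves the usable radius, which is why the crowd-size estimates above vary from 20,000 to 50,000 with listening conditions.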
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, and some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, and some generalized uncertainty principle models.
Selonen, Salla; Setälä, Heikki
2017-02-01
Despite the known toxicity of lead (Pb), Pb pellets are widely used at shotgun shooting ranges over the world. However, the impacts of Pb on soil nutrients and soil microbes, playing a crucial role in nutrient cycling, are poorly understood. Furthermore, it is unknown whether these impacts change with time after the cessation of shooting. To shed light on these issues, three study sites in the same coniferous forest in a shooting range area were studied: an uncontaminated control site and an active and an abandoned shooting range, both sharing a similar Pb pellet load in the soil, but the latter with a 20-year longer contamination history. Soil pH and nitrate concentration increased, whilst soil phosphate concentration and fungal phospholipid fatty acid (PLFA) decreased due to Pb contamination. Our results imply that shooting-derived Pb can influence soil nutrients and microbes not only directly but also indirectly by increasing soil pH. However, these mechanisms cannot be differentiated here. Many of the Pb-induced changes were most pronounced at the abandoned range, and nutrient leaching was increased only at that site. These results suggest that Pb disturbs the structure and functions of the soil system and impairs a crucial ecosystem service, the ability to retain nutrients. Furthermore, the risks of shooting-derived Pb to the environment increase with time.
Moacir Godinho Filho
2010-01-01
Full Text Available In today's competitive environment, decision making has become an increasingly complicated task, involving many variables whose relationships are not always clearly understood. One of these relationships, the focus of this article, is the relationship between production lot size and average lead time. This relationship is well known in the queueing theory literature, but not in operations management practice. This article addresses the subject, aiming to present and compare the effect of six continuous improvement programmes on shop-floor variables (variability of order arrivals, process variability, defect rate, time to failure, repair time and setup time) in the lot size × lead time relationship, in a single-machine environment producing multiple products. This is done by combining the System Dynamics (Forrester, 1962) and Factory Physics (Hopp and Spearman, 2008) approaches. Two sets of experiments are performed: (i) a large improvement (50%) in each variable separately, such as would be obtained by a large investment; (ii) a small improvement in all variables simultaneously. The results show: (a) the positive effect of continuous improvement programmes on shop-floor variables on lead time; (b) the importance of knowing the lot size × lead time curve and the role of setup reduction before starting lot-size reduction programmes; (c) that investing in small improvements in many variables simultaneously is a better policy, with respect to lead time, than making a large improvement in only one variable; (d) some contributions to a better understanding of modern production management paradigms such as Lean Manufacturing and Quick Response Manufacturing.
Ikeya, Tomohiko; Mita, Yuichi; Ishihara, Kaoru [Central Research Inst. of Electric Power Industry, Tokyo (Japan); Sawada, Nobuyuki [Hokkaido Electric Power, Sapporo (Japan); Takagi, Sakae; Murakami, Jun-ichi [Tohoku Electric Power, Sendai (Japan); Kobayashi, Kazuyuki [Tokyo Electric Power, Yokohama (Japan); Sakabe, Tetsuya [Chubu Electric Power, Nagoya (Japan); Kousaka, Eiichi [Hokuriku Electric Power, Toyama (Japan); Yoshioka, Haruki [The Kansai Electric Power, Osaka (Japan); Kato, Satoru [The Chugoku Electric Power, Hiroshima (Japan); Yamashita, Masanori [Shikoku Research Inst., Takamatsu (Japan); Narisoko, Hayato [The Okinawa Electric Power, Naha (Japan); Nishiyama, Kazuo [The Central Electric Power Council, Tokyo (Japan); Adachi, Kazuyuki [Kyushu Electric Power, Fukuoka (Japan)
1998-09-01
For the popularization of electric vehicles (EVs), the conditions for charging EV batteries with available current patterns should allow complete charging in a short time, i.e., less than 5 to 8 h. Therefore, in this study, a new charging condition is investigated for the EV valve-regulated lead/acid battery system, which should allow complete charging of EV battery systems with multi-step constant currents in a much shorter time, with longer cycle life and higher energy efficiency, compared with two-step constant-current charging. Although a high magnitude of the first current in the two-step constant-current method prolongs cycle life by suppressing the softening of positive active material, too large a charging current magnitude degrades cells due to excess internal evolution of heat. A charging current magnitude of approximately 0.5 C is expected to prolong cycle life further. Three-step charging could also increase the magnitude of the charging current in the first step without shortening cycle life. Four- or six-step constant-current methods could shorten the charging time to less than 5 h, as well as yield higher energy efficiency and enhanced cycle life of over 400 cycles compared with two-step charging with a first-step current of 0.5 C. Investigation of the degradation mechanism of the batteries revealed that the conditions of multi-step constant-current charging suppressed softening of positive active material and sulfation of negative active material but, unfortunately, advanced the corrosion of the grids in the positive plates. By adopting improved grids and cooling of the battery system, the multi-step constant-current method may enhance the cycle life. (orig.)
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given, and some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given compact metric space. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Kwon Eun-Young
2012-09-01
Full Text Available Abstract Background Visceral white adipose tissue (WAT) hypertrophy, adipokine production, inflammation and fibrosis are strongly associated with obesity, but the time-course of these changes in-vivo is not fully understood. Therefore, the aim of this study was to establish the time-course of changes in adipocyte morphology, adipokines and the global transcriptional landscape in visceral WAT during the development of diet-induced obesity. Results C57BL/6 J mice were fed a high-fat diet (HFD) or normal diet (ND) and sacrificed at 8 time-points over 24 weeks. Excessive fat accumulation was evident in visceral WAT depots (epididymal, perirenal, retroperitoneal, mesenteric) after 2-4 weeks. Fibrillar collagen accumulation was evident in epididymal adipocytes at 24 weeks. Plasma adipokines, leptin, resistin and adipsin, increased early and time-dependently, while adiponectin decreased late, after 20 weeks. Only plasma leptin and adiponectin levels were associated with their respective mRNA levels in visceral WAT. Time-course microarrays revealed early and sustained activation of the immune transcriptome in epididymal and mesenteric depots. Up-regulated inflammatory genes included pro-inflammatory cytokines and chemokines (Tnf, Il1rn, Saa3, Emr1, Adam8, Itgam, Ccl2, 3, 4, 6, 7 and 9) and their upstream signalling pathway genes (multiple Toll-like receptors, Irf5 and Cd14). Early changes also occurred in fibrosis, extracellular matrix, collagen and cathepsin related-genes, but histological fibrosis was only visible in the later stages. Conclusions In diet-induced obesity, early activation of TLR-mediated inflammatory signalling cascades by CD antigen genes leads to increased expression of pro-inflammatory cytokines and chemokines, resulting in chronic low-grade inflammation. Early changes in collagen genes may trigger the accumulation of ECM components, promoting fibrosis in the later stages of diet-induced obesity. New therapeutic approaches
Kwon, Eun-Young; Shin, Su-Kyung; Cho, Yun-Young; Jung, Un Ju; Kim, Eunjung; Park, Taesun; Park, Jung Han Yoon; Yun, Jong Won; McGregor, Robin A; Park, Yong Bok; Choi, Myung-Sook
2012-09-04
Visceral white adipose tissue (WAT) hypertrophy, adipokine production, inflammation and fibrosis are strongly associated with obesity, but the time-course of these changes in-vivo are not fully understood. Therefore, the aim of this study was to establish the time-course of changes in adipocyte morphology, adipokines and the global transcriptional landscape in visceral WAT during the development of diet-induced obesity. C57BL/6 J mice were fed a high-fat diet (HFD) or normal diet (ND) and sacrificed at 8 time-points over 24 weeks. Excessive fat accumulation was evident in visceral WAT depots (Epidydimal, Perirenal, Retroperitoneum, Mesentery) after 2-4 weeks. Fibrillar collagen accumulation was evident in epidydimal adipocytes at 24 weeks. Plasma adipokines, leptin, resistin and adipsin, increased early and time-dependently, while adiponectin decreased late after 20 weeks. Only plasma leptin and adiponectin levels were associated with their respective mRNA levels in visceral WAT. Time-course microarrays revealed early and sustained activation of the immune transcriptome in epididymal and mesenteric depots. Up-regulated inflammatory genes included pro-inflammatory cytokines, chemokines (Tnf, Il1rn, Saa3, Emr1, Adam8, Itgam, Ccl2, 3, 4, 6, 7 and 9) and their upstream signalling pathway genes (multiple Toll-like receptors, Irf5 and Cd14). Early changes also occurred in fibrosis, extracellular matrix, collagen and cathepsin related-genes, but histological fibrosis was only visible in the later stages. In diet-induced obesity, early activation of TLR-mediated inflammatory signalling cascades by CD antigen genes, leads to increased expression of pro-inflammatory cytokines and chemokines, resulting in chronic low-grade inflammation. Early changes in collagen genes may trigger the accumulation of ECM components, promoting fibrosis in the later stages of diet-induced obesity. New therapeutic approaches targeting visceral adipose tissue genes altered early by HFD
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED: whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give a detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views; the second step then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maximum caliber inference and the stochastic Ising model
Cafaro, Carlo; Ali, Sean Alan
2016-11-01
We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.
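The Glauber "hyperbolic tangent rule" that the maximum caliber expansion recovers gives the single-spin flip probability p(flip s_i) = (1/2)[1 − s_i tanh(β h_i)], with h_i the local field from neighbouring spins. A minimal single-site update on a 1D ring (J = 1; the lattice and parameters are illustrative, not from the paper):

```python
import math
import random

def glauber_flip_prob(spin, local_field, beta):
    """Glauber rule: P(flip s_i) = 0.5 * (1 - s_i * tanh(beta * h_i)).
    At beta -> 0 (very high temperature) this tends to 1/2 for any spin,
    the limit in which the maximum caliber expression formally overlaps it."""
    return 0.5 * (1.0 - spin * math.tanh(beta * local_field))

def glauber_step(spins, i, beta, rng=random):
    """One single-site Glauber update on a 1D ring of +/-1 spins (J = 1)."""
    n = len(spins)
    h = spins[(i - 1) % n] + spins[(i + 1) % n]  # local field of the two neighbours
    if rng.random() < glauber_flip_prob(spins[i], h, beta):
        spins[i] = -spins[i]
    return spins
```

Note that the rule satisfies detailed balance with respect to the Ising Boltzmann distribution, which is exactly the constraint set (stationarity plus detailed balance) imposed in the maximum caliber derivation above.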
Carey, Gemma
2014-09-01
Evidence now shows that the key drivers of poor health are social factors, such as education, employment, housing and urban environments. Variations in these social factors, or the conditions in which we live our lives, have led to a growth in health inequalities within and between countries. One of the key challenges facing those concerned with health equity is how to effect change across the broad policy areas that shape these social factors.
Maico Roris Severino
2010-12-01
Full Text Available The goal of this work is to present a proposal for the use of the PBC (Period Batch Control) system, together with two prerequisites, namely a change in the quality control policy and the creation of a virtual cell, to reduce lead time in a capital goods company. Case study research is the method used. Expected results of this implementation indicate that the company can obtain significant benefits from the adoption of PBC, such as a 46.42% reduction in the lead time of the product studied, a reduction in WIP (Work In Process) costs of approximately 50%, and a 31% reduction in unproductive time. These promising results can also be achieved by companies with similar characteristics that follow the proposal presented in this article. Academically, this work contributes to the dissemination of the PBC system, a subject in need of research in Brazil.
Garritty, Chantelle; Stevens, Adrienne; Gartlehner, Gerald; King, Valerie; Kamel, Chris
2016-10-28
Policymakers and healthcare stakeholders are increasingly seeking evidence to inform the policymaking process, and often use existing or commissioned systematic reviews to inform decisions. However, the methodologies that make systematic reviews authoritative take time, typically 1 to 2 years to complete. Outside the traditional SR timeline, "rapid reviews" have emerged as an efficient tool to get evidence to decision-makers more quickly. However, the use of rapid reviews does present challenges. To date, there has been limited published empirical information about this approach to compiling evidence. Thus, it remains a poorly understood and ill-defined set of diverse methodologies with various labels. In recent years, the need to further explore rapid review methods, characteristics, and their use has been recognized by a growing network of healthcare researchers, policymakers, and organizations, several with ties to Cochrane, which is recognized as representing an international gold standard for high-quality, systematic reviews. In this commentary, we introduce the newly established Cochrane Rapid Reviews Methods Group developed to play a leading role in guiding the production of rapid reviews given they are increasingly employed as a research synthesis tool to support timely evidence-informed decision-making. We discuss how the group was formed and outline the group's structure and remit. We also discuss the need to establish a more robust evidence base for rapid reviews in the published literature, and the importance of promoting registration of rapid review protocols in an effort to promote efficiency and transparency in research. As with standard systematic reviews, the core principles of evidence-based synthesis should apply to rapid reviews in order to minimize bias to the extent possible. The Cochrane Rapid Reviews Methods Group will serve to establish a network of rapid review stakeholders and provide a forum for discussion and training. By facilitating
Gu, Chun-Sun; Liu, Liang-qin; Xu, Chen; Zhao, Yan-hai; Zhu, Xu-dong; Huang, Su-Zhen
2014-01-01
Quantitative real-time PCR (RT-qPCR) has emerged as an accurate and sensitive method to measure gene expression. However, obtaining reliable results depends on the selection of reference genes, which normalize differences among samples. In this study, we assessed the expression stability of seven reference genes, namely, ubiquitin-protein ligase UBC9 (UBC), tubulin alpha-5 (TUBLIN), eukaryotic translation initiation factor (EIF-5A), translation elongation factor EF1A (EF1α), translation elongation factor EF1B (EF1b), actin11 (ACTIN), and histone H3 (HIS), in Iris lactea var. chinensis (I. lactea var. chinensis) root when the plants were subjected to cadmium (Cd), lead (Pb), and salt stress conditions. All seven reference genes showed a relatively wide range of threshold cycle (Ct) values in different samples. The GeNorm and NormFinder algorithms were used to identify suitable reference genes. The results from the two programs showed that EIF-5A and UBC were the most stable reference genes across all of the tested samples, while TUBLIN was unsuitable as an internal control. I. lactea var. chinensis is tolerant to Cd, Pb, and salt. Our results will benefit future research on gene expression in response to the three abiotic stresses.
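The geNorm stability ranking used above can be illustrated with a short sketch. This is not the authors' code; the function name and data layout are invented, and expression values are assumed to be already on a log2 scale. geNorm's stability measure M for a gene is the average, over all other genes, of the standard deviation of the pairwise log-ratios across samples (lower M means more stable):

```python
from statistics import pstdev

def genorm_m(expr, genes):
    """geNorm-style stability M per gene: the mean, over all other genes,
    of the standard deviation of the log2 expression ratios across
    samples. expr maps gene name -> list of log2 expression values."""
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            # difference of log2 values = log2 of the expression ratio
            ratios = [a - b for a, b in zip(expr[g], expr[h])]
            sds.append(pstdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m
```

In toy data where genes A and B vary in lockstep (constant ratio) while C varies independently, A and B receive the lowest M and C ranks as least stable, which is the behaviour the abstract relies on.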
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well …
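The Kirchhoff index is straightforward to compute for concrete graphs. The sketch below (illustrative, not from the paper) evaluates Kf for a cycle C_n, the basic building block of cacti, in two independent ways: via the Laplacian spectrum and via pairwise effective resistances. Both agree with the known closed form Kf(C_n) = (n^3 - n)/12.

```python
import math

def kirchhoff_index_cycle(n):
    """Kf via the spectral formula Kf = n * sum(1/lambda) over the
    nonzero Laplacian eigenvalues; for C_n these are 2 - 2*cos(2*pi*k/n)."""
    return n * sum(1.0 / (2.0 - 2.0 * math.cos(2.0 * math.pi * k / n))
                   for k in range(1, n))

def kirchhoff_index_cycle_direct(n):
    """Cross-check: sum effective resistances over all vertex pairs.
    On a cycle, two arcs of length d and n-d act as parallel resistors,
    giving resistance d*(n-d)/n."""
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = min(j - i, n - (j - i))
            total += d * (n - d) / n
    return total
```

For the triangle C_3 both routes give Kf = 2, matching (27 - 3)/12.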
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus … on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Scheuhammer, A.M.; Beyer, W.N.; Schmitt, C.J.; Jorgensen, Sven Erik; Fath, Brian D.
2008-01-01
Lead (Pb) is a naturally occurring metallic element; trace concentrations are found in all environmental media and in all living things. However, certain human activities, especially base metal mining and smelting; combustion of leaded gasoline; the use of Pb in hunting, target shooting, and recreational angling; the use of Pb-based paints; and the uncontrolled disposal of Pb-containing products such as old vehicle batteries and electronic devices have resulted in increased environmental levels of Pb, and have created risks for Pb exposure and toxicity in invertebrates, fish, and wildlife in some ecosystems.
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
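The simplest member of the class mentioned above, Ornstein-Uhlenbeck motion, also illustrates the fluctuation-dissipation constraint: the noise amplitude must be tied to the relaxation time for the stationary variance to come out right. A minimal Euler-Maruyama sketch, not the authors' code; parameter names are illustrative:

```python
import math
import random

def simulate_ou(x0, tau, sigma, dt, n_steps, seed=0):
    """Euler-Maruyama integration of the Ornstein-Uhlenbeck Langevin
    equation dx = -(x/tau) dt + sqrt(2*sigma**2/tau) dW. The noise
    amplitude sqrt(2*sigma**2/tau) is the fluctuation-dissipation
    balance: it makes the stationary variance equal sigma**2."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    amp = math.sqrt(2.0 * sigma ** 2 * dt / tau)
    for _ in range(n_steps):
        x += -(x / tau) * dt + amp * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

Running a long trajectory and measuring the variance of the late samples recovers sigma**2 to within sampling error, which is the kinematic constraint the abstract refers to.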
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
Molenaar, P.C.M.; Nesselroade, J.R.
1998-01-01
The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), w
Gemma Carey
2014-09-01
Evidence now shows that the key drivers of poor health are social factors, such as education, employment, housing and urban environments. Variations in these social factors, the conditions in which we live our lives, have led to a growth in health inequalities within and between countries. One of the key challenges facing those concerned with health equity is how to effect change across the broad policy areas that impact these social conditions, and create a robust ‘social protections framework’ to address and prevent health inequalities.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
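The quantum version is beyond a short example, but the projected-gradient idea can be shown on a classical analogue: maximum-likelihood estimation of a multinomial distribution, where each gradient ascent step is followed by Euclidean projection back onto the probability simplex. Everything here is a hedged sketch, not the authors' algorithm (in particular it omits the acceleration step):

```python
def project_to_simplex(v):
    """Euclidean projection onto {p : p_i >= 0, sum p_i = 1}, using the
    standard sort-based algorithm: find the threshold theta and clip."""
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def mle_projected_gradient(counts, steps=500, lr=1e-3):
    """Maximize the log-likelihood sum_k counts_k * log(p_k) over the
    simplex by projected gradient ascent. The known optimum is the
    vector of empirical frequencies counts_k / n."""
    p = [1.0 / len(counts)] * len(counts)
    for _ in range(steps):
        grad = [c / max(pk, 1e-12) for c, pk in zip(counts, p)]
        p = project_to_simplex([pk + lr * g for pk, g in zip(p, grad)])
    return p
```

For counts [10, 30, 60] the iterates settle on [0.1, 0.3, 0.6], the closed-form MLE, which makes the projection step easy to sanity-check.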
Bekker-Nielsen, Tønnes
2016-01-01
Through a systematic comparison of c. 50 careers leading to the koinarchate or high priesthood of Asia, Bithynia, Galatia, Lycia, Macedonia and coastal Pontus, as described in funeral or honorary inscriptions of individual koinarchs, it is possible to identify common denominators but also...
1974-01-01
One of the 150 lead grids used in the multiwire proportional chamber gamma-ray detector. The 0.75 mm diameter holes are spaced 1 mm centre to centre. The grids were made by chemical cutting techniques in the Godet Workshop of the SB Physics.
Anderson, D W; Mettil, W; Schneider, J S
2016-03-30
Lead (Pb) exposure during development impairs a variety of cognitive, behavioral and neurochemical processes, resulting in deficits in learning, memory, attention, impulsivity and executive function. Numerous studies have attempted to model this effect of Pb in rodents, with the majority of studies focusing on hippocampus-associated spatial learning and memory processes. Using a different paradigm, trace fear conditioning, a process requiring coordinated integration of both the medial prefrontal cortex and the hippocampus, we have assessed the effects of Pb exposure on associative learning and memory. The present study examined both female and male Long Evans rats exposed to three environmentally relevant levels of Pb (150 ppm, 375 ppm and 750 ppm) during different developmental periods: perinatal (PERI; gestation-postnatal day 21), early postnatal (EPN; postnatal days 1-21) and late postnatal (LPN; postnatal days 1-55). Testing began at postnatal day 55 and consisted of a single day of acquisition training, and three post-training time points (1, 2 and 10 days) to assess memory consolidation and recall. All animals, regardless of sex, developmental window or level of Pb exposure, successfully acquired the conditioned-unconditioned stimulus association during training. However, there were significant effects of Pb exposure on consolidation and memory recall at days 1-10 post-training. In females, EPN and LPN exposure to 150 ppm Pb (but not PERI exposure) significantly impaired recall. In contrast, only PERI 150 ppm and 750 ppm-exposed males had significant recall deficits. These data suggest a complex interaction between sex, developmental window of exposure and Pb exposure level on the consolidation and recall of associative memories.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
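The regularizer's key quantity, the mutual information between predicted and true labels, can be estimated from paired samples with a simple plug-in computation. This is a sketch with invented names, not the paper's entropy-estimation scheme:

```python
import math
from collections import Counter

def mutual_information(labels_true, labels_pred):
    """Plug-in estimate of I(Y; Yhat) in bits from paired samples:
    I = sum over cells of p(y, yh) * log2(p(y, yh) / (p(y) * p(yh)))."""
    n = len(labels_true)
    joint = Counter(zip(labels_true, labels_pred))
    py = Counter(labels_true)
    pyh = Counter(labels_pred)
    mi = 0.0
    for (y, yh), c in joint.items():
        pxy = c / n
        # p(y) = py[y]/n and p(yh) = pyh[yh]/n, so the ratio simplifies
        mi += pxy * math.log2(pxy * n * n / (py[y] * pyh[yh]))
    return mi
```

A perfect classifier on balanced binary labels yields 1 bit; an uninformative one yields 0, which is the uncertainty-reduction intuition the abstract describes.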
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x' = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), or Δ_t u_x = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), with x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
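For the pure diffusion case (f = 0) the discrete weak maximum principle is easy to see directly: for k*dt <= 1/2 each updated value is a convex combination of its old neighbours, so new extremes cannot exceed old ones. A minimal sketch on a ring, illustrative rather than the paper's code:

```python
def step_lattice_heat(u, k, dt):
    """One explicit step of the lattice diffusion equation with f = 0
    on a ring: u_x += dt*k*(u_{x-1} - 2*u_x + u_{x+1}). For dt*k <= 1/2
    the update is u_x_new = (1-2a)*u_x + a*u_{x-1} + a*u_{x+1} with
    a = dt*k, a convex combination, which is exactly the weak maximum
    principle for this scheme."""
    n = len(u)
    return [u[x] + dt * k * (u[(x - 1) % n] - 2 * u[x] + u[(x + 1) % n])
            for x in range(n)]
```

Stepping any initial profile with dt*k = 0.4 keeps the values inside the original min-max range while conserving their sum.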
Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.
2016-07-01
The main unified energetic properties of low dissipation heat engines and refrigerator engines allow for both endoreversible and irreversible configurations. This is accomplished by means of the constraints imposed on the characteristic global operation time or the contact times between the working system and the external heat baths, modulated by the dissipation symmetries. A suited unified figure of merit (which becomes power output for heat engines) is analyzed and the influence of the symmetries on the optimum performance discussed. The obtained results, independent of any heat transfer law, are compared with those obtained from Carnot-like heat models where specific heat transfer laws are needed. Thus, it is shown that only the inverse phenomenological law, often used in linear irreversible thermodynamics, correctly reproduces all optimized values for both the efficiency and the coefficient of performance.
Who Leads China's Leading Universities?
Huang, Futao
2017-01-01
This study attempts to identify the major characteristics of two different groups of institutional leaders in China's leading universities. The study begins with a review of relevant literature and theory. Then, there is a brief introduction to the selection of party secretaries, deputy secretaries, presidents and vice presidents in leading…
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. To this end, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
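As a concrete illustration, one commonly quoted form of Barrass's maximum-squat estimate can be coded in a few lines. The constants and exponent below are an assumption to be checked against Barrass's own publications; they are not taken from this article:

```python
def barrass_max_squat(cb, blockage, speed_knots):
    """Sketch of a commonly quoted Barrass-style maximum-squat estimate
    (metres): d_max = Cb * S**(2/3) * Vk**2.08 / 30, where Cb is the
    block coefficient, S the blockage factor (ship midship section over
    canal section) and Vk the speed through the water in knots. Treat
    the exact coefficients as assumptions, not authoritative values."""
    return cb * blockage ** (2.0 / 3.0) * speed_knots ** 2.08 / 30.0
```

Whatever the exact constants, the qualitative behaviour the paper compares is visible: squat grows steeply with speed and with the blockage of the canal.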
Brabender von Lossberg, I.
1982-05-25
The lead content of human bones of 129 individuals from different prehistoric and historical populations of Bavaria and Peru was determined by means of flameless absorption spectroscopy. By the same method the lead content of human tissue was determined in 12 Peruvian mummy heads, and that of the hair in two individuals. The results were compared with each other and with those of other investigations on prehistoric, historical and modern populations, and were interpreted. The comparative investigation confirmed the assumption that intensive exposure to lead in a population leads to higher lead concentrations in the bones. The lead concentration in the bones of the late mediaeval series of Niedermuenster 4 was comparable to that in the bones of modern populations and amounted to about 10 times the average in prehistoric Peruvians. The lead accumulation was not found to be sex-dependent. The chronic lead intoxication of the late Middle Ages described in the history of medicine was confirmed by the relatively high lead contents in the bones of Niedermuenster 4. In the framework of an epidemiological survey, the problem of chronic lead intoxication was traced back to pre- and early history.
Blood lead levels ... A blood sample is needed. Most of the time blood is drawn from a vein located on the inside ... may be used to puncture the skin. The blood collects in a small glass tube called a ...
Pogner, Karl-Heinz
2017-01-01
and technical engineering; Smart Cities) is very prominent in the traditional mass media discourse, in PR / PA of tech companies and traditional municipal administrations; whereas the second one (participation; Livable Cities) is mostly enacted in social media, (local) initiatives, movements, (virtual …) communities, new forms of urban governance in municipal administration and co-competitive city networks. Both forms seem to struggle for getting voice and power in the discourses, negotiations, struggles, and conflicts in Urban Governance about the question how to manage or lead (in) a city. Talking about …
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
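The unit conversion in the abstract checks out, assuming the standard value of one solar mass (which the abstract does not state):

```python
# Hypothetical cross-check of the quoted figure, not part of the paper.
SOLAR_MASS_KG = 1.989e30   # assumed standard value for one solar mass

core_mass_kg = 2.69e30
core_mass_solar = core_mass_kg / SOLAR_MASS_KG  # about 1.35
```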
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
EFFECTS OF LEAD WIDTHS AND PITCHES ON RELIABILITY OF QUAD FLAT PACKAGE (QFP) SOLDERED JOINTS
XUE Songbai; WU Yuxiu; HAN Zongjie; WANG Jianxin
2007-01-01
The finite element method (FEM) is used to analyze the effects of lead widths and pitches on the reliability of soldered joints, and the optimum simulation for QFP devices is also investigated. The results indicate that when the lead pitch is held constant, the maximum equivalent stress of the soldered joints increases with increasing lead width, while the reliability of the soldered joints decreases. When the lead width is held constant, the maximum equivalent stress of the soldered joints does not decrease monotonically with increasing lead pitch; a minimum among the maximum equivalent stress values exists in all the curves. Under this condition the maximum equivalent stress of the soldered joints is relatively the smallest, the reliability of the soldered joints is high and the assembly is excellent. The simulations indicate a best parameter set of a lead width of 0.2 mm and a lead pitch of 0.3 mm (the distance between two leads is 0.1 mm), which benefits the miniaturization of current QFP devices. The minimum of the maximum equivalent stress of the soldered joints occurs at a lead width of 0.25 mm and a lead pitch of 0.35 mm (the distance between two leads is 0.1 mm); such devices can serve for a long time, the reliability is the highest and the assembly is excellent. The simulations also indicate that a lead width of 0.15 mm and a lead pitch of 0.2 mm may be the limit for QFP, which is significant for high lead counts and the miniaturization of assemblies.
Omel'chuk, S T; Aleksiĭchuk, V D; Sokurenko, L M
2014-01-01
Biochemical studies revealed that alanine aminotransferase levels change first during short-term exposure (30 injections) to lead sulfide nanoparticles of 10 and 30 nm size and to the ionic form of 400 nm lead, whereas during long-term exposure (60 injections) the activities of both enzymes (aspartate aminotransferase and alanine aminotransferase) increase with the same intensity. This is confirmed by the de Ritis coefficient, which remains statistically indistinguishable from the control. Morphological studies also confirm these data: degenerative changes of hepatocytes, reactive changes of the stroma, and vascular responses were detected. The severity of metabolic and morphological damage in the liver increased with the duration of lead nanoparticle intake.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
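The background-only versus background-plus-source comparison described above can be sketched as a toy Poisson likelihood ratio. All counts and model predictions below are invented for illustration; the actual CSC MLE tool fits full spatial models (PSF-convolved Gaussians) with the Sherpa engine rather than fixed per-observation means.

```python
import math

def poisson_nll(counts, expected):
    """Negative log-likelihood of observed counts under Poisson means."""
    return sum(m - n * math.log(m) + math.lgamma(n + 1)
               for n, m in zip(counts, expected))

# Toy data: photon counts in one candidate source region across 3 stacked
# observations (hypothetical numbers).
counts = [7, 9, 5]
background = [2.0, 2.5, 1.8]   # background-model prediction per observation
source = [4.5, 5.5, 3.5]       # source-model (PSF-convolved) prediction

nll_bkg = poisson_nll(counts, background)
nll_src = poisson_nll(counts, [b + s for b, s in zip(background, source)])

# Log likelihood ratio: larger values favor background-plus-source.
log_lr = nll_bkg - nll_src
print(f"log likelihood ratio = {log_lr:.2f}")
```

In this toy case the ratio clearly favors the source hypothesis; a real detection pipeline would calibrate the ratio against a false-source rate.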
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
2011-01-01
Three of the LHC experiments - ALICE, ATLAS and CMS - will be studying the upcoming heavy-ion collisions. Given the excellent results from the short heavy-ion run last year, expectations have grown even higher in experiment control centres. Here they discuss their plans: ALICE For the upcoming heavy-ion run, the ALICE physics programme will take advantage of a substantial increase of the LHC luminosity with respect to last year’s heavy-ion run. The emphasis will be on the acquisition of rarely produced signals by implementing selective triggers. This is a different operation mode to that used during the first low luminosity heavy-ion run in 2010, when only minimum-bias triggered events were collected. In addition, ALICE will benefit from increased acceptance coverage by the electromagnetic calorimeter and the transition radiation detector. In order to double the amount of recorded events, ALICE will exploit the maximum available bandwidth for mass storage at 4 GB/s and t...
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
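The elegant linear-time solution mentioned above is, in its plain (non-generic, non-monadic) form, Kadane's algorithm; a minimal sketch:

```python
def max_segment_sum(xs):
    """Kadane's linear-time maximum segment sum.

    best_here is the largest sum of a segment ending at the current
    element; the empty segment (sum 0) is always allowed.
    """
    best, best_here = 0, 0
    for x in xs:
        best_here = max(0, best_here + x)
        best = max(best, best_here)
    return best

# Bentley's classic example list: the best segment is 59+26-53+58+97 = 187.
print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # → 187
```

The cubic-time specification (try every segment, sum each) and this fold are related by exactly the calculational steps the tutorial develops.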
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao;
2014-01-01
This paper is devoted to the study and analysis of maximum signal-to-noise ratio (SNR) filters for noise reduction in both the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon
Fischer, Paul
1997-01-01
such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
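The PCA step above amounts to extracting the leading eigenmodes of an estimated correlation matrix. The 3×3 matrix below is invented (two strongly correlated coordinates plus one nearly independent one), and a real structural ensemble would use a full eigensolver library; this power-iteration sketch only illustrates how the dominant mode is obtained.

```python
import math

def power_iteration(mat, iters=200):
    """Dominant eigenpair of a symmetric matrix by power iteration."""
    n = len(mat)
    v = [1.0 / math.sqrt(n)] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))  # converges to the eigenvalue
        v = [x / lam for x in w]
    return lam, v

# Toy correlation matrix: coordinates 1 and 2 move together, 3 is mostly free.
corr = [[1.0, 0.9, 0.1],
        [0.9, 1.0, 0.1],
        [0.1, 0.1, 1.0]]

lam, mode = power_iteration(corr)
print(f"dominant mode explains {lam / 3:.1%} of total variance")
```

In a PCA plot, the components of `mode` would be color-coded onto the structure to show which coordinates move together.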
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure-CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis, which assigns each event a probability for five processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay in flight, pile-up, and hadronic events) using Monte Carlo-verified probability distribution functions of the observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Mateo, R; Green, A J; Lefranc, H; Baos, R; Figuerola, J
2007-01-01
We studied lead (Pb) shot contamination in sediments from the Guadalquivir marshes and six other closed-basin lagoons in Southern Spain that are of major importance for threatened species of waterbirds. Shot densities were relatively low in Doñana, ranging from 0 to 25 shot/m² in the top 10 cm of sediments. The density at Medina lagoon (Ramsar site) was 148 shot/m², making it the most contaminated wetland known in Europe. Densities in the other five lagoons ranged from 9 to 59 shot/m². We studied the prevalence of ingested Pb shot in waterbirds from Doñana and found a lower prevalence in ducks than previously recorded in other Spanish wetlands. Lead shot were also found embedded in tissues of some waterbirds, proving that protected species such as the greater flamingo (Phoenicopterus ruber) and the glossy ibis (Plegadis falcinellus) are subjected to illegal hunting. The prevalence of embedded shot in geese was especially high (44% for trapped birds). Lead shot were detected in 2.8% of the pellets of the Spanish imperial eagle (Aquila adalberti), which usually preys on geese. We found that the prevalence of ingested Pb shot in geese and in Spanish imperial eagles has significantly decreased in recent years, possibly due to restrictions on hunting activity, efforts to remove shot from a sand dune used by geese to obtain grit, and the high rainfall in Doñana in recent years that allowed waterfowl to remain longer within the protected areas.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements, with uranium as metal, alloy, and oxide, were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Morando, Alberto; Victor, Trent; Dozza, Marco
2016-12-01
Adaptive Cruise Control (ACC) has been shown to reduce the exposure to critical situations by maintaining a safe speed and headway. It has also been shown that drivers adapt their visual behavior in response to the driving task demand with ACC, anticipating an impending lead vehicle conflict by directing their eyes to the forward path before a situation becomes critical. The purpose of this paper is to identify the causes related to this anticipatory mechanism, by investigating drivers' visual behavior while driving with ACC when a potential critical situation is encountered, identified as a forward collision warning (FCW) onset (including false positive warnings). This paper discusses how sensory cues capture attention to the forward path in anticipation of the FCW onset. The analysis used the naturalistic database EuroFOT to examine visual behavior with respect to two manually-coded metrics, glance location and glance eccentricity, and then related the findings to vehicle data (such as speed, acceleration, and radar information). Three sensory cues (longitudinal deceleration, looming, and brake lights) were found to be relevant for capturing driver attention and increase glances to the forward path in anticipation of the threat; the deceleration cue seems to be dominant. The results also show that the FCW acts as an effective attention-orienting mechanism when no threat anticipation is present. These findings, relevant to the study of automation, provide additional information about drivers' response to potential lead-vehicle conflicts when longitudinal control is automated. Moreover, these results suggest that sensory cues are important for alerting drivers to an impending critical situation, allowing for a prompt reaction.
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a given temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement there are two switch modes, either from open to short or from short to open, and the two modes can give different estimates of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviation as the temperature changes. This paper analyzes these differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
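For a module that behaves as a linear (Thevenin-like) source, the open-circuit/short-circuit estimate underlying the paper is P_max = Voc · Isc / 4, reached at a matched load. The numbers below are hypothetical, not measurements of the TEG modules named above; the sketch only shows how a switch-mode deviation in one reading propagates into the estimate.

```python
def max_power(v_oc, i_sc):
    """Estimate the maximum output power of a linear source from its
    open-circuit voltage (V) and short-circuit current (A)."""
    return v_oc * i_sc / 4.0

v_oc, i_sc = 4.2, 1.9   # volts, amperes (hypothetical module readings)
p = max_power(v_oc, i_sc)
print(f"estimated maximum power: {p:.3f} W")

# A 10% deviation in the short-circuit reading (the switch-mode effect the
# paper analyzes) propagates directly into a 10% error in the estimate:
print(f"with 10% lower Isc: {max_power(v_oc, 0.9 * i_sc):.3f} W")
```

The linear assumption is exactly what the paper's nonlinear thermoelectric model relaxes, which is why the two switch modes disagree in practice.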
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
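The reason a TOA/AOA hybrid needs so few base stations can be seen geometrically: one range plus one bearing from a single BS already fixes the mobile position in the noise-free case, and a second BS adds the redundancy the AML estimator exploits. The sketch below is this noise-free geometry only, with invented coordinates; it is not the AML algorithm itself.

```python
import math

def toa_aoa_fix(bs, rng, bearing):
    """Position from one base station given a TOA range (m) and an AOA
    bearing (rad, measured from the +x axis)."""
    return (bs[0] + rng * math.cos(bearing), bs[1] + rng * math.sin(bearing))

bs1 = (0.0, 0.0)
true_pos = (300.0, 400.0)                                   # hypothetical mobile
rng = math.hypot(true_pos[0] - bs1[0], true_pos[1] - bs1[1])    # TOA -> 500 m
bearing = math.atan2(true_pos[1] - bs1[1], true_pos[0] - bs1[0])  # AOA

est = toa_aoa_fix(bs1, rng, bearing)
print(est)  # recovers (300.0, 400.0) in the noise-free case
```

With measurement noise, the AML approach instead maximizes an approximate likelihood over both kinds of measurements from the two BSs.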
Janousek, Martin
2010-05-01
Real-time CORS (Continuously Operating Reference Station) networks today are typically GNSS networks for positioning and monitoring purposes. Real-time networks can range from a few stations in a local network up to nation- or continent-wide networks with several hundred CORS stations. Such networks use wide-area modeling of GNSS error sources, including ionospheric, tropospheric, and satellite orbit correction parameters, to provide the most precise and efficient method of positioning using GNSS. In 1998 Trimble Navigation Ltd. introduced a method of surveying with a non-physical, computed base station, called VRS (Virtual Reference Station). It is the most widely supported method in the industry of producing a network solution for precise carrier-phase positioning. Surveying historically required one base as the fixed point of reference, and one or more rovers using that point of reference to compute their location by processing a vector result, either in real time or in postprocessing. Real-time surveying is often referred to as RTK, short for real-time kinematic; as the name suggests, the results arrive in real time while the rover moves. The power of VRS lies in the ability to compute a real-time, wide-area solution for the factors that cause single-base methods to degrade with distance, namely ionospheric and tropospheric modeling and satellite orbit corrections. This is achieved by the reference network of CORS. A wide scattering of CORS across a state, typically 50-70 km apart in mid-latitudes, creates a ground-based sampling which significantly reduces the distance-dependent errors that accumulate in the single base-rover relationship described earlier. Furthermore, GNSS networks can be used for real-time monitoring purposes at various distance ranges. Trimble Integrity Manager software provides a suite of motion engines designed to detect and quantify movement on a range of scales, from slow, creeping movement like subsidence through sudden events such as
Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory
Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O
2014-01-01
We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.
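The quantity being estimated is essentially the variance of the global charge distribution: χ_t = ⟨Q²⟩/V. A toy maximum-likelihood sketch under a Gaussian model is below; the charge series, lattice volume, and Gaussian width are all invented, and the paper's actual estimator handles integer-quantized charges, autocorrelated Markov chains, and subvolume restriction, none of which this sketch does.

```python
import math, random

# Toy MCMC time series of global topological charges Q (hypothetical).
random.seed(1)
volume = 16 ** 4
charges = [random.gauss(0.0, 3.0) for _ in range(5000)]

# Maximum-likelihood Gaussian fit: the ML variance is the (biased) sample
# variance, i.e. the average squared deviation from the sample mean.
n = len(charges)
mean = sum(charges) / n
var_ml = sum((q - mean) ** 2 for q in charges) / n
chi_t = var_ml / volume
print(f"ML variance = {var_ml:.3f}, susceptibility = {chi_t:.3e}")
```

When tunneling is rare the series is dominated by long plateaus, and this naive variance estimate degrades badly, which is the regime the paper's likelihood technique targets.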
Ringers, J; Haanstra, KG; Kroczek, RA; Kliem, K; Kuhn, EM; Wubben, J; Ossevoort, MA; Volk, HD; Jonker, M
2002-01-01
Background. In rodents it has been demonstrated that blockade of the CD40-CD154 (CD40L) pathway at the time of donor-specific blood transfusion (DST) can result in indefinite graft survival. Because it has been reported in the past that DST in monkeys can have a favorable effect on graft outcome and
Boorse, H.A.; Cook, D.B.; Zemansky, W.M.
1950-06-01
Numerous determinations of the zero-field transition temperature of lead have been made. All of these observations except that of Daunt were made by direct measurement of electrical resistance; Daunt's method involved the shielding effect of persistent currents in a hollow cylinder. In the authors' work on columbium, to be described in a forthcoming paper, an a.c. induction method was used for the measurement of superconducting transitions. The superconductor was mounted as a cylindrical core of a coil which functioned as the secondary of a mutual inductance. The primary coil was actuated by an oscillator which provided a maximum a.c. field within the secondary of 1.5 oersteds at a frequency of 1000 cycles per second. The secondary e.m.f., whose magnitude depended on the permeability of the core, was amplified, rectified, and observed on a recording potentiometer. During the application of this method to the study of columbium it appeared that a further check on the zero-field transition temperature of lead would be worthwhile, especially if agreement between results for very pure samples could be obtained using this method. Such a result would help establish the lead transition temperature as a reasonably reproducible reference point in the region between 4 deg and 10 deg K.
朱莉; 王海燕; 赵林度
2005-01-01
According to the principle of minimizing total cost, and taking the influence of demand lead time into account, a three-echelon optimized medicine inventory model with stochastic lead time and a two-echelon optimized medicine inventory model with fixed lead time are established. The relationship between lead time and inventory cost is studied using Matlab software, showing that variation in lead time has an important effect on medicine inventory systems. Numerical simulation and sensitivity analysis of the two models are carried out on a data example with Lingo software. The results show that, while maintaining the required service level, the total cost of the two-echelon medicine inventory with lead time is markedly lower than that of the three-echelon inventory.
Diagboya, Paul N; Olu-Owolabi, Bamidele I; Adebowale, Kayode O
2015-07-01
In order to predict the bioavailability of toxic metals in soils undergoing degradation of organic matter (OM) and iron oxides (IOs), it is vital to understand the roles of these soil components in metal retention and redistribution over time. In the present work, batch competitive sorption of Pb(II), Cu(II), and Cd(II) was investigated between 1 and 90 days. Results showed that competition affected Cd(II) sorption more than Cu(II) and Pb(II). Sorption followed the trend Pb(II) >> Cu(II) > Cd(II) irrespective of aging, and this high preference for Pb(II) ions in soils decreased with time. Removal of OM reduced the distribution coefficient (Kd) values by about 33% for all cations within the first day. However, Kd increased by nearly 100% after 7 days and by over 1000% over the 90-day period. The enhanced Kd values indicated that, in the long run, sorption occurred on surfaces that had been masked by OM. Removal of IO caused selective increases in the Kd values, depending on the dominant soil constituent(s) in the absence of IO. The Kd values of the IO-degraded samples remained nearly constant irrespective of aging, indicating that sorption on soil components other than the IOs is nearly instantaneous, while iron oxides played a greater role than other constituents over time. Hence, in the soils studied, organic matter content determines the immediate relative metal retention, while iron oxides determine the redistribution of metals with time.
Bardia Varastehmoradi
2013-09-01
Objective(s): Electromagnetic radiation, which has lethal effects on living cells, is currently also considered a disinfective physical agent. Materials and Methods: In this investigation, silver nanoparticles were applied to enhance the lethal action of low powers (100 and 180 W) of 2450 MHz electromagnetic radiation, especially against Escherichia coli ATCC 8739. Silver nanoparticles were biologically prepared and used for the subsequent experiments. Sterile normal saline solution was prepared and supplemented with silver nanoparticles to reach the sub-inhibitory concentration (6.25 μg/mL). This diluted silver colloid, as well as a nanoparticle-free control solution, was inoculated with the test microorganisms, particularly E. coli. These suspensions were separately treated with 2450 MHz electromagnetic radiation for different time intervals in a microwave oven operated at low powers (100 W and 180 W). The viable counts of bacteria before and after each radiation time were determined by the colony-forming unit (CFU) method. Results: Results showed that the addition of silver nanoparticles significantly decreased the radiation time required to kill vegetative forms of the microorganisms. However, these nanoparticles had no combined effect with low-power electromagnetic radiation when used against Bacillus subtilis spores. Conclusion: The cumulative effect of silver nanoparticles and low-power electromagnetic radiation may be useful in medical centers to reduce contamination in polluted drainage, liquid waste materials, and some devices.
Lead telluride alloy thermoelectrics
Aaron D. LaLonde
2011-11-01
The opportunity to use solid-state thermoelectrics for waste heat recovery has reinvigorated the field of thermoelectrics in tackling the challenges of energy sustainability. While thermoelectric generators have decades of proven reliability in space, from the 1960s to the present, terrestrial uses have so far been limited to niche applications on Earth because of a relatively low material efficiency. Lead telluride alloys were some of the first materials investigated and commercialized for generators but their full potential for thermoelectrics has only recently been revealed to be far greater than commonly believed. By reviewing some of the past and present successes of PbTe as a thermoelectric material we identify the issues for achieving maximum performance and successful band structure engineering strategies for further improvements that can be applied to other thermoelectric materials systems.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm, which uses two maximum dynamic flow algorithms, is then proposed to solve the problem.
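The forward building block that such inverse methods call repeatedly is an ordinary maximum flow solver. A minimal Edmonds-Karp sketch on a static network is below (the dynamic, time-expanded case generalizes this); the small capacity network is invented for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a capacity dict {u: {v: c}}."""
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the path, then push flow through it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

cap = {'s': {'a': 10, 'b': 5}, 'a': {'b': 15, 't': 10}, 'b': {'t': 10}, 't': {}}
print(max_flow(cap, 's', 't'))  # → 15
```

The inverse problem then asks for the smallest capacity perturbation making a given feasible flow attain this maximum, which the paper reduces to a constrained minimum cut computation.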
Afonso, S. M.; Bonotto, E. M.; Federson, M.; Schwabik, Š.
2011-04-01
In this paper, we consider an initial value problem for a class of generalized ODEs, also known as Kurzweil equations, and we prove the existence of a local semidynamical system there. Under certain perturbation conditions, we also show that this class of generalized ODEs admits a discontinuous semiflow which we shall refer to as an impulsive semidynamical system. As a consequence, we obtain LaSalle's invariance principle for such a class of generalized ODEs. Due to the importance of LaSalle's invariance principle in studying stability of differential systems, we include an application to autonomous ordinary differential systems with impulse action at variable times.
Abokifa, Ahmed A; Biswas, Pratim
2017-03-07
Partial replacement of lead service lines (LSLs) often results in excessive long-term release of lead particulates due to the disturbance of pipe scale and galvanic corrosion. In this study, a modeling approach to simulate the release and transport of particulate and dissolved lead from full and partially replaced LSLs is developed. A mass-transfer model is coupled with a stochastic residential water demand generator to investigate the effect of normal household usage flow patterns on lead exposure. The model is calibrated by comparing simulation results against experimental measurements from pilot-scale setups where lead release under different flow rates and water chemistry scenarios was reported. Applying the model within a Monte Carlo simulation framework, partial replacement of the LSL was predicted to result in release spikes with significantly elevated concentrations of particulate lead (1011.9 ± 290.3 μg/L) that were five times higher than those released from the simulated full LSL. Sensitivity analysis revealed that the intensity of flow demands significantly affects particulate lead release, while dissolved lead levels are more dependent on the lengths of the stagnation periods. Preflushing of the LSL prior to regulatory sampling was found to underestimate the maximum monthly exposure to dissolved lead by 19%, while sampling at low flow rates (<5.2 LPM) was found to consistently suppress the high spikes induced by particulate lead mobilization.
Jacobs, D. K.
2014-12-01
California experiences droughts, so let's begin with the effects of streamflow variation on population evolution in a coastal-lagoon-specialist endangered fish, the tidewater goby. Streamflow controls the closing and opening of lagoons to the sea, determining genetic isolation or gene flow; here evolution is a function of habitat preference for closing lagoons. Other estuarine fishes, with different habitat preferences, differentiate at larger spatial scales in response to longer glacio-eustatic control of estuarine habitat. The species of giraffes in Africa are a puzzle: why do the ranges of large, motile, potentially interbreeding species occur in contact with each other without hybridization? The answer resides in the timing of seasonal precipitation. Although the degree of seasonality of climate does not vary much between species, the timing of precipitation and seasonal "greenup" does. This provides a selective advantage to reproductive isolation, as reproductive timing can be coordinated in each region with seasonal browse availability for lactating females. Convective rainfall in Africa follows the sun, and solar intensity is influenced by the precession cycle such that more extensive summer rains fell across the Sahara and South Asia early in the Holocene; this may also contribute to the genetic isolation and speciation of giraffes and other savanna species. There also appears to be a correlation with the rarity (CITES designation) of modern wetland birds, as the dramatic drying of the late Holocene landscape contributes to this conservation concern. Turning back to the West Coast, we find the most diverse temperate coastal fauna in the world, yet this diversity evolved as, and is a relict of, diversity accumulation during the apex of upwelling in the late Miocene, driven by the reglaciation of Antarctica. Lastly, we can see that deep-sea evolution is broadly constrained by the transitions from greenhouse to icehouse worlds over the last 90 Myr, as broad periods of warm
李怡娜; 徐学军
2011-01-01
Current literature shows that Just-in-Time (JIT) can significantly reduce lead time and inventory-related costs simultaneously. Time-based competition (TBC), which focuses on reducing overall system lead time, has been a favorite topic for both researchers and practitioners. Lead time reduction can lower safety stock, reduce losses caused by stock-outs, and improve customer service levels. In today's hypercompetitive environment, lead time reduction is becoming an effective way to increase supply chain responsiveness and an important source of competitive advantage. However, most of the traditional economic order quantity (EOQ) literature on inventory problems, whether using deterministic or probabilistic models, treats lead time as a prescribed constant or a stochastic variable, so controlling lead time is neither feasible nor realistic in practical situations. To overcome this issue, a growing number of studies treat lead time as a decision variable. Still, these studies either consider the controllable lead time optimization problem from the perspective of a single facility, or consider inventory models with controllable lead time from the perspective of integrated supply chains. An integrated supply chain assumes that a central planner possesses perfect information and has the power to impose a globally optimal inventory policy on each entity in order to maximize overall channel performance. In this paper, we make two major contributions to the literature on supply chain optimization problems related to controllable lead time. First, we relax the assumption of the former supply chain literature by asserting that long-term strategic partnerships between vendor and buyer are well established and that the parties can bargain and cooperate with each other to obtain an optimal integrated joint policy under centralized decision making. We further assume that the vendor and the buyer aim to maximize
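The idea of lead time as a decision variable can be sketched with a toy EOQ-style model in which lead time can be "crashed" at a cost and safety stock scales with the square root of lead time. All parameters below, and the linear crashing cost, are hypothetical illustrations, not values from the paper:

```python
import math

# Hypothetical parameters for an EOQ model with controllable lead time
D = 600.0       # annual demand (units/year)
A = 200.0       # fixed ordering cost per order
h = 20.0        # holding cost per unit per year
k = 1.645       # safety factor for a 95% service level
sigma = 7.0     # std. dev. of weekly demand
L_MAX = 6       # uncrashed lead time (weeks)

def crash_cost(L):
    # cost per order of crashing lead time from L_MAX down to L weeks
    # (a simple linear rate; richer models use piecewise-linear components)
    return 5.0 * (L_MAX - L)

def total_cost(L):
    # EOQ lot size for this lead time, then total annual cost:
    # ordering + crashing + cycle-stock holding + safety-stock holding
    Q = math.sqrt(2.0 * D * (A + crash_cost(L)) / h)
    return (A + crash_cost(L)) * D / Q + h * (Q / 2.0 + k * sigma * math.sqrt(L))

best_L = min(range(1, L_MAX + 1), key=total_cost)
print(best_L, round(total_cost(best_L), 1))
```

With these (hypothetical) numbers, crashing lead time all the way down pays for itself through reduced safety stock; raising the crashing rate makes the longer lead time optimal again.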
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
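A drastically simplified version of such a hosting-capacity screen can be written for a single two-bus feeder: sweep PV output upward until the linearized voltage at the PV bus leaves ANSI Range A (0.95-1.05 pu). The feeder impedance and load values are hypothetical, and this ignores current limits, time series, and everything GridLAB-D actually models:

```python
# Hypothetical two-bus feeder: source at 1.0 pu, one lumped load with PV.
R_PU = 0.05                  # feeder resistance (per unit)
LOAD = 0.8                   # load at the far bus (per unit)
V_MIN, V_MAX = 0.95, 1.05    # ANSI C84.1 Range A service voltage limits

def end_voltage(pv):
    # linearized voltage drop/rise: net injection times resistance
    return 1.0 + (pv - LOAD) * R_PU

# largest PV injection (scanned in 0.01 pu steps) keeping voltage in Range A
compliant = [i / 100.0 for i in range(500)
             if V_MIN <= end_voltage(i / 100.0) <= V_MAX]
max_pv = max(compliant)
print(max_pv)
```

Here the limit is set by voltage rise at the far bus; moving the same PV closer to the source (smaller effective R) would raise the hosting capacity, mirroring the location dependence reported in the paper.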
Korneev, Sergei A; Straub, Volko; Kemenes, Ildikó; Korneeva, Elena I; Ott, Swidbert R; Benjamin, Paul R; O'Shea, Michael
2005-02-02
In a number of neuronal models of learning, signaling by the neurotransmitter nitric oxide (NO), synthesized by the enzyme neuronal NO synthase (nNOS), is essential for the formation of long-term memory (LTM). Using the molluscan model system Lymnaea, we investigate here whether LTM formation is associated with specific changes in the activity of members of the NOS gene family: Lym-nNOS1, Lym-nNOS2, and the antisense RNA-producing pseudogene (anti-NOS). We show that expression of the Lym-nNOS1 gene is transiently upregulated in cerebral ganglia after conditioning. The activation of the gene is precisely timed and occurs at the end of a critical period during which NO is required for memory consolidation. Moreover, we demonstrate that this induction of the Lym-nNOS1 gene is targeted to an identified modulatory neuron called the cerebral giant cell (CGC). This neuron gates the conditioned feeding response and is an essential part of the neural network involved in LTM formation. We also show that the expression of the anti-NOS gene, which functions as a negative regulator of nNOS expression, is downregulated in the CGC by training at 4 h after conditioning, during the critical period of NO requirement. This appears to be the first report of the timed and targeted differential regulation of the activity of a group of related genes involved in the production of a neurotransmitter that is necessary for learning, measured in an identified neuron of known function. We also provide the first example of the behavioral regulation of a pseudogene.
Sytsma, Sandra
2009-01-01
This conceptual article explores the changing way of leading. It proposes that in contrast to the primarily outer actions that characterize educational change, the inner and outer dimensions of leaders are necessary to change what constitutes leading, thereby making it more appropriate to our times. The unfolding of leading actions and the…
O'Callaghan, Finbar J K; Lux, Andrew L; Darke, Katrina; Edwards, Stuart W; Hancock, Eleanor; Johnson, Anthony L; Kennedy, Colin R; Newton, Richard W; Verity, Christopher M; Osborne, John P
2011-07-01
Infantile spasms is a severe infantile seizure disorder. Several factors affect developmental outcome, especially the underlying etiology of the spasms. Treatment also affects outcome. Both age at onset of spasms and lead time to treatment (the time from onset of spasms to start of treatment) may be important. We investigated these factors. Developmental assessment using Vineland Adaptive Behaviour Scales (VABS) at 4 years of age in infants enrolled in the United Kingdom Infantile Spasms Study. Date of or age at onset of spasms was obtained prospectively. Lead time to treatment was then categorized into five categories. The effects of lead time to treatment, age of onset of spasms, etiology, and treatment on developmental outcome were investigated using multiple linear regression. Age of onset ranged (77 infants) from 2 months in 21 and not known in 6. Each month of reduction in age at onset of spasms was associated with a 3.1 [95% confidence interval (CI) 0.64-5.5, p = 0.03] decrease, and each increase in category of lead time duration associated with a 3.9 (95% CI 7.3-0.4, p = 0.014) decrease in VABS, respectively. There was a significant interaction between treatment allocation and etiology with the benefit in VABS in those allocated steroid therapy being in children with no identified etiology (coefficient 29.9, p=0.004). Both prompt diagnosis and prompt treatment of infantile spasms may help prevent subsequent developmental delay. Younger infants may be more at risk from the epileptic encephalopathy than older infants. Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.
Maximum-entropy principle as Galerkin modelling paradigm
Noack, Bernd R.; Niven, Robert K.; Rowley, Clarence W.
2012-11-01
We show how the empirical Galerkin method, leading e.g. to POD models, can be derived from maximum-entropy principles building on Noack & Niven 2012 JFM. In particular, principles are proposed (1) for the Galerkin expansion, (2) for the Galerkin system identification, and (3) for the probability distribution of the attractor. Examples will illustrate the advantages of the entropic modelling paradigm. Partially supported by the ANR Chair of Excellence TUCOROM and an ADFA/UNSW Visiting Fellowship.
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on the same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
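The isomorphism check used as a subroutine can be illustrated for rooted trees on the same leaf set by comparing canonical forms, where each internal node sorts its children's encodings. This sketch is O(n log n) because of the sorting, rather than the linear time achieved in the paper:

```python
def canon(tree):
    # tree is a leaf label (str) or a list of child subtrees;
    # sorting child encodings makes the form order-independent
    if isinstance(tree, str):
        return tree
    return "(" + ",".join(sorted(canon(child) for child in tree)) + ")"

# two rooted trees on taxa {a..e} that differ only in child order
t1 = [["a", "b"], ["c", ["d", "e"]]]
t2 = [[["e", "d"], "c"], ["b", "a"]]
t3 = [["a", "c"], ["b", ["d", "e"]]]   # genuinely different topology

iso_12 = canon(t1) == canon(t2)
iso_13 = canon(t1) == canon(t3)
```

Two rooted trees on the same taxa are isomorphic exactly when their canonical strings agree, so `iso_12` holds while `iso_13` does not.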
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The Space-Time Relationship of Gold to Lead-Zinc Mineralization and Its Application
汪东波; 邵世才; 刘国平; 徐勇
2001-01-01
In the 1990s, a number of medium- to large-size gold deposits were successively discovered in lead-zinc metallogenic belts of China, such as the Qinling lead-zinc metallogenic belt in Shaanxi and Gansu Provinces and the Qingchengzi lead-zinc orefield in Liaoning Province. Gold and lead-zinc deposits spatially coexist in the same tectonic unit, whereas lead-zinc orebodies commonly occur beneath gold orebodies, and gold mineralization is distinctly younger than lead-zinc mineralization. According to preliminary geological-geochemical study, lead and zinc precipitated from a marine sedimentary-exhalative system characterized by high water/rock ratio, high salinity, and abundant chlorides; at the same time, most of the gold was transported into a low-temperature hydrothermal plume and initially accumulated in the sediments. During the later (magmatism-)metamorphism-tectonism stage, gold migrated into medium- to high-temperature metamorphic fluids characterized by low water/rock ratio and low chloride activity, and precipitated at favorable structural sites. Differences in the composition and circulation of ore-forming fluids within the same tectonic unit account for the coexistence and separation of gold and lead-zinc deposits. This space-time relationship between gold and lead-zinc mineralization can be regarded as an important exploration criterion. On this basis, the authors propose strengthening exploration for gold and lead-zinc deposits in the major lead-zinc orefields of China's Proterozoic rifts and Late Paleozoic depressions.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM) and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Bindler, Richard
2011-08-01
Clair Patterson and colleagues demonstrated four decades ago that the lead cycle has been greatly altered on a global scale by humans, and that this change occurred long before the implementation of monitoring programs designed to study lead and other trace metals. Patterson and colleagues also developed stable lead isotope analysis as a tool to differentiate between natural and pollution-derived lead. Since then, stable isotope analyses of sediment, peat, herbaria collections, soils, and forest plants have given us new insights into lead biogeochemical cycling in space and time. Three important conclusions from our studies of lead in the Swedish environment conducted over the past 15 years, which are well supported by extensive results from elsewhere in Europe and in North America, are: (1) lead deposition rates during the twentieth century at sites removed from major point sources were about 1,000 times higher than natural background deposition rates a few thousand years ago (~10 mg Pb m(-2) year(-1) vs. 0.01 mg Pb m(-2) year(-1)), and even today (~1 mg Pb m(-2) year(-1)) are still almost 100 times greater than natural rates; this increase from natural background to maximum fluxes is similar to estimated changes in body burdens of lead from ancient times to the twentieth century. (2) Stable lead isotopes ((206)Pb/(207)Pb ratios shown in this paper) are an effective tool to distinguish anthropogenic lead from the natural lead present in sediments, peat, and soils, both for the majority of sites receiving diffuse inputs from long-range and regional sources and for sites in close proximity to point sources. In sediments >3,500 years old and in the parent soil material of the C-horizon, (206)Pb/(207)Pb ratios are higher, 1.3 to >2.0, whereas pollution sources and surface soils and peat have lower ratios, in the range 1.14-1.18. (3) Using stable lead isotopes, we have estimated that in southern Sweden the cumulative anthropogenic burden of
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages, and whether the maximum length of life increases with time. The analyses were based on individual-level data on all Swedish and Danish centenarians born from 1870 to 1901; in total, 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated, and exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age of 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is thus not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Laurent Guiraud
2000-01-01
These crystals are made from lead tungstate, a crystal that is as clear as glass yet with nearly four times the density. They have been produced in Russia to be used as scintillators in the electromagnetic calorimeter on the CMS experiment, part of the LHC project at CERN. When an electron, positron or photon passes through the calorimeter it will cause a cascade of particles that will then be absorbed by these scintillating crystals, allowing the particle's energy to be measured.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
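The MISO setting with equal power and no transmit CSI can be probed with a small Monte Carlo sketch of the ergodic rate under i.i.d. Rayleigh fading. This is an illustration of the channel model only, not the paper's derivation; the 10 dB SNR and the sample count are arbitrary choices:

```python
import math, random

random.seed(3)

def ergodic_rate(M, snr=10.0, trials=100_000):
    # MISO rate with equal power snr/M per antenna, i.i.d. Rayleigh fading,
    # no CSI at the transmitter: log2(1 + snr/M * ||h||^2),
    # where ||h||^2 is a sum of M unit-mean exponentials
    total = 0.0
    for _ in range(trials):
        gain = sum(random.expovariate(1.0) for _ in range(M))
        total += math.log2(1.0 + snr / M * gain)
    return total / trials

r1 = ergodic_rate(1)   # single antenna
r4 = ergodic_rate(4)   # four antennas, equal power, uncorrelated signals
```

Both configurations have the same average receive SNR, but the four-antenna case benefits from channel hardening: since log is concave, reducing the spread of the channel gain raises the average rate.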
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.)
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it depends on the entropy and on the number of samples used in the estimate.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions, which makes it applicable to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
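The core maximum-entropy step can be illustrated with Jaynes' classic "Brandeis dice" example: find the distribution on {1..6} of maximum entropy subject to a prescribed mean. The solution is an exponential family p_i proportional to exp(lam*i), and the multiplier lam can be found by bisection. This is a generic illustration of the principle, not the MENT algorithm of the paper:

```python
import math

def maxent_die(target_mean, lo=-5.0, hi=5.0, tol=1e-12):
    # maximum entropy distribution on {1..6} with a fixed mean:
    # p_i ~ exp(lam * i); solve for lam by bisection (mean is monotone in lam)
    def mean(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # a die whose average roll is constrained to be 4.5
```

The resulting probabilities increase monotonically toward the face 6, matching Jaynes' published solution (p6 approximately 0.3475).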
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora, and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002-2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found (low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses), models showed significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, observational spatial and temporal distributions of night-time ozone mixing ratio in the mesosphere to be obtained.
The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, owing to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects, and is therefore expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
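The quantity being computed can be sketched for a plain Markov chain: the entropy production rate vanishes exactly when the chain satisfies detailed balance, and is positive for a driven (irreversible) chain. The two example chains below are hypothetical, chosen only to contrast the reversible and irreversible cases:

```python
import math

def stationary(P, iters=5000):
    # power iteration for the stationary distribution of a Markov chain
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_production(P):
    # EP = 1/2 * sum_ij (pi_i P_ij - pi_j P_ji) * ln(pi_i P_ij / (pi_j P_ji));
    # zero iff detailed balance holds (pi_i P_ij == pi_j P_ji for all i, j)
    pi = stationary(P)
    n = len(P)
    ep = 0.0
    for i in range(n):
        for j in range(n):
            if P[i][j] > 0.0 and P[j][i] > 0.0:
                f, b = pi[i] * P[i][j], pi[j] * P[j][i]
                ep += 0.5 * (f - b) * math.log(f / b)
    return ep

# an irreversible driven 3-state cycle versus a reversible symmetric chain
driven = [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]
symmetric = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
ep_driven = entropy_production(driven)
ep_symmetric = entropy_production(symmetric)
```

For the driven cycle the stationary distribution is uniform (the matrix is doubly stochastic), and the entropy production evaluates to 0.7 ln 8 per step, while the symmetric chain gives zero.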
Wang, Yong Tai; Vrongistinos, Konstantinos Dino; Xu, Dali
2008-08-01
The purposes of this study were to examine the consistency of wheelchair athletes' upper-limb kinematics in consecutive propulsive cycles and to investigate the relationship between the maximum angular velocities of the upper arm and forearm and the consistency of the upper-limb kinematic pattern. Eleven elite international wheelchair racers propelled their own chairs on a roller at maximum speed. A Qualisys motion analysis system was used to film the propulsive cycles. Six reflective markers placed on the right shoulder, elbow, and wrist joints, the metacarpal, the wheel axis, and the wheel were automatically digitized. The deviations in cycle time, upper-arm and forearm angles, and angular velocities among these propulsive cycles were analyzed. The results demonstrated that in consecutive cycles of wheelchair propulsion, increased maximum angular velocity may lead to increased variability in upper-limb angular kinematics. It is speculated that this increased variability may be important for distributing load across different upper-extremity muscles to avoid fatigue during wheelchair racing.
Safe Leads and Lead Changes in Competitive Team Sports
Clauset, A; Redner, S
2015-01-01
We investigate the time evolution of lead changes within individual games of competitive team sports. Exploiting ideas from the theory of random walks, we show that the number of lead changes within a single game follows a Gaussian distribution, and that the time of the last lead change and the time of the largest lead are governed by the same arcsine law, a bimodal distribution that diverges at the start and at the end of the game. We also determine the probability that a given lead is "safe" as a function of its size $L$ and game time $t$. Our predictions generally agree with comprehensive data on more than 1.25 million scoring events in roughly 40,000 games across four professional or semi-professional team sports, and are more accurate than popular heuristics currently used in sports analytics.
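The arcsine behaviour is easy to reproduce with a toy simulation: model the score difference as a simple ±1 random walk and record the last time the game is tied. Real games have team-specific scoring rates, quarters, and unequal point values, all ignored here:

```python
import random

random.seed(1)

def last_tie_time(n):
    # simple +/-1 random walk of n steps; return the last time it sits at zero
    pos, last = 0, 0
    for t in range(1, n + 1):
        pos += 1 if random.getrandbits(1) else -1
        if pos == 0:
            last = t
    return last

n, trials = 1000, 3000
times = [last_tie_time(n) / n for _ in range(trials)]   # normalized to [0, 1]
early = sum(t < 0.1 for t in times) / trials            # first tenth of the game
middle = sum(0.45 <= t < 0.55 for t in times) / trials  # middle tenth
```

Under the arcsine law the first decile should capture about 20% of last-tie times versus roughly 6% for the middle decile, so the "edges" of the game dominate, exactly the bimodality described above.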
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
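The Burg recursion itself fits in a few lines: it estimates autoregressive (AR) coefficients by minimizing combined forward and backward prediction errors, and the MEM spectrum then follows from the AR model. The sketch below is a minimal textbook Burg estimator verified on a synthetic AR(1) series, not the interferometer pipeline of the paper:

```python
import random

random.seed(7)

def burg(x, order):
    """Burg's method: returns AR coefficients a (a[0] = 1) and final error power."""
    n = len(x)
    f, b = list(x), list(x)          # forward and backward prediction errors
    a = [1.0]
    e = sum(v * v for v in x) / n
    for m in range(order):
        num = sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 for i in range(m + 1, n)) + \
              sum(b[i - 1] ** 2 for i in range(m + 1, n))
        k = -2.0 * num / den         # reflection coefficient
        a = [1.0] + [a[i] + k * a[m + 1 - i] for i in range(1, m + 1)] + [k]
        e *= 1.0 - k * k
        for i in range(n - 1, m, -1):   # update errors using pre-update values
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
    return a, e

# synthetic AR(1) process x[t] = 0.9 x[t-1] + w[t] with unit-variance noise
x = [0.0]
for _ in range(3000):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))
a, e = burg(x, 1)
```

The estimated coefficient a[1] should be close to -0.9 and the residual power close to the noise variance; the MEM spectrum is then e / |1 + sum_k a_k exp(-i 2 pi f k)|^2.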
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis are described. Secondly, the blasting vibration signals were analyzed with wavelet packets in MATLAB, and the energy distribution curves at different frequency bands were obtained. Finally, the law by which the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with increasing decking charge, the ratio of high-frequency energy to total energy decreases and the dominant frequency bands of blasting vibration signals shift towards low frequencies, and that blasting vibration does not depend on the maximum decking charge.
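The band-energy bookkeeping behind such an analysis can be sketched with the simplest orthonormal wavelet: a full Haar wavelet packet tree splits every band at every level, and the energies of the terminal bands sum to the signal energy (Parseval). This is a minimal Haar sketch, not the Daubechies-type decomposition typically used for blasting signals in MATLAB:

```python
import math

def haar_step(x):
    # one orthonormal Haar analysis step: (approximation, detail) halves
    lo = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2.0) for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2.0) for i in range(len(x) // 2)]
    return lo, hi

def wavelet_packet_energies(x, levels):
    # full wavelet packet tree: split every band at every level,
    # then return the energy contained in each terminal band
    bands = [list(x)]
    for _ in range(levels):
        bands = [half for band in bands for half in haar_step(band)]
    return [sum(v * v for v in band) for band in bands]

# test signal: slow sine (low band) plus sample-to-sample alternation (high band)
sig = [math.sin(2.0 * math.pi * i / 64.0) + 0.5 * (-1) ** i for i in range(256)]
energies = wavelet_packet_energies(sig, 3)   # 8 frequency bands
```

The slow sine lands in the lowest band and the alternating component in a detail branch, so the per-band energies separate the two, which is the quantity tracked against decking charge in the abstract.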
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques to power wireless sensor networks. The motivation for this technology is the sensors' limited operation time, which results from the finite capacity of batteries, together with the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multi-source energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking (MPPT) techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
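The most common MPPT scheme, perturb and observe (P&O), can be sketched against a toy panel curve: keep stepping the operating voltage in the same direction while power rises, and reverse when it falls, so the operating point oscillates around the maximum power point. The panel model below is a hypothetical concave curve, not a calibrated single-diode model:

```python
def pv_power(v):
    # hypothetical PV curve: current falls off sharply near open circuit (21 V)
    i = 5.0 * (1.0 - (v / 21.0) ** 12)
    return max(v * i, 0.0)

def perturb_and_observe(v0=12.0, step=0.05, iters=500):
    # classic P&O hill climbing on the power-voltage curve
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

For this curve the true maximum power point sits near 16.96 V (where the derivative of 5v(1 - (v/21)^12) vanishes), and P&O settles into a small oscillation around it; the step size trades tracking speed against steady-state ripple.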
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also showed reduced bias in the estimates of the subjective value of time and consumer surplus.
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) problem is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shine light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
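The effect of a storage constraint on store-and-forward throughput can be sketched with a toy two-hop scenario: a node sends to a relay during one contact window, the relay forwards to the destination during another, and the relay's buffer caps how much data can be staged. The contact schedule and buffer size are hypothetical, and this discrete-time sweep stands in for the paper's MILP:

```python
# Hypothetical two-hop DTN scenario: node A -> relay -> node B, with
# time-windowed contacts (start, end, rate) and a relay storage limit.
CONTACTS_IN = [(0.0, 10.0, 2.0)]    # A can send up to 20 units to the relay
CONTACTS_OUT = [(5.0, 20.0, 1.0)]   # relay can forward up to 15 units to B

def delivered(contacts_in, contacts_out, buffer_limit, dt=0.01, horizon=25.0):
    # discrete-time store-and-forward simulation of the relay buffer
    stored, out, t = 0.0, 0.0, 0.0
    while t < horizon:
        rate_in = sum(r for s, e, r in contacts_in if s <= t < e)
        rate_out = sum(r for s, e, r in contacts_out if s <= t < e)
        send = min(rate_out * dt, stored + rate_in * dt)
        recv = min(rate_in * dt, buffer_limit - stored + send)
        stored += recv - send
        out += send
        t += dt
    return out

capped = delivered(CONTACTS_IN, CONTACTS_OUT, buffer_limit=6.0)
unlimited = delivered(CONTACTS_IN, CONTACTS_OUT, buffer_limit=1e9)
```

With unlimited storage the throughput is the smaller of the two contact volumes (15 units); a 6-unit buffer drops it to 11, illustrating how storage, not link capacity, can become the binding constraint.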
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
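The bias described here can be reproduced in a few lines for the simplest possible case, fitting a constant rate to Poisson-distributed bin counts (the counts below are invented, not the microcalorimeter data): the Poisson maximum likelihood gives the arithmetic mean, while a Neyman chi^2 weighted by 1/n_i gives the harmonic mean, which is biased low.

```python
from statistics import mean

counts = [8, 12, 9, 11, 7, 13, 10, 10]  # hypothetical bin counts, true rate 10

# Poisson maximum likelihood for a constant rate mu: maximizing
# sum(n_i*log(mu) - mu) over mu gives the arithmetic mean of the counts.
mu_ml = mean(counts)

# Neyman chi^2, sum((n_i - mu)^2 / n_i), instead yields the harmonic
# mean, which is systematically biased low for Poisson data.
mu_chi2 = len(counts) / sum(1.0 / n for n in counts)

print(mu_ml, round(mu_chi2, 3))  # the chi^2 estimate comes out below 10
```

Even with well-populated bins the chi^2 estimate sits noticeably below the ML estimate, which is the kind of systematic effect the paper quantifies for resolution fitting.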
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. A general relation has therefore been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour; that functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Lead Poisoning Prevention Tips
... or removed safely. How are children exposed to lead? Lead-based paint and lead contaminated dust are ... What can be done to prevent exposure to lead? It is important to determine the construction year ...
... States Environmental Protection Agency. Lead (Pb) Air Pollution ... and protect aquatic and terrestrial ecosystems. Lead (Pb) Air Pollution Basics: How does lead get ...
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
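The two off-line algorithms named in the abstract differ only in the presorting order; a small sketch with invented item sizes shows why First-Fit-Increasing is the natural heuristic when the goal is to maximize the number of bins used:

```python
def first_fit(items, capacity=1.0):
    """Place each item in the first open bin with room; open a new bin if none."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-9:   # tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]             # invented item sizes
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing: few bins
ffi = first_fit(sorted(items))                # First-Fit-Increasing: many bins
print(len(ffd), len(ffi))
```

Sorting small-to-large lets early small items "block" bins before the large items arrive, so more bins get opened, which is the objective in the maximum resource variant.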
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky...
Matthias Thurer
2012-01-01
Full Text Available Most of the Production Planning and Control (PPC) literature focuses on studying and developing solutions for large companies, mainly in repetitive manufacturing environments. There are therefore few PPC alternatives for small and medium-sized make-to-order companies, which play a fundamental role in the current Brazilian economy. In response to this demand, this paper presents a PPC approach called Workload Control (WLC), a method recognized for its potential to improve lead time, work-in-process inventory, and delivery punctuality in make-to-order (MTO) environments. The paper first presents the WLC approach, reviewing and discussing the main topics covered by the international literature in the area. The performance of WLC is then evaluated by means of simulation. The results show the improvements in lead time and in late orders that WLC can bring to small and medium-sized make-to-order companies. This paper thus aims to contribute both to theory, by effectively introducing the subject into the Brazilian literature, and to practice, by offering PPC managers, through simulation, an alternative that lets their companies work toward short lead times and in line with modern production paradigms such as Lean Manufacturing and Quick Response Manufacturing.
Relate the earthquake parameters to the maximum tsunami runup
Sharghivand, Naeimeh; Kânoǧlu, Utku
2016-04-01
Considering that the 1 September 1992 Nicaraguan tsunami manifested itself with an initial shoreline recession, there was a paradigm shift from the solitary wave to the N-wave (Tadepalli and Synolakis, 1994, Proc. R. Soc. A: Math. Phys. Eng. Sci., 445, 99-112) as the initial waveform of tsunamis (Kanoglu et al., 2015, Phil. Trans. R. Soc. A, 373: 20140369). The N-wave initial waveform shows specific features which might enhance the maximum runup at a target coastline. Tadepalli and Synolakis (1994) showed that the leading depression N-wave (LDN) runs up higher than its mirror image, the leading elevation N-wave (LEN). Later, Kanoglu et al. (2013, Proc. R. Soc. A: Math. Phys. Eng. Sci., 469, 20130015) considered two-dimensional propagation of a finite-crest-length N-wave over a flat bottom and showed a focusing effect of the N-wave in the direction of the leading depression, which enhances the runup. Recently, the preliminary results of Kanoglu (2016, EGU Abstract) suggest that later waves could be higher on the leading depression side of an N-wave, i.e., the sequencing defined by Okal and Synolakis (2016, Geophys. J. Int., 204, 719-735) is more pronounced on the leading depression side. Here, we consider submarine earthquakes and estimate the initial ocean surface profiles through Okada's formulation (1985, Bull. Seismol. Soc. Am., 75, 1135-1154). We parameterize earthquake source parameters such as the length and the width of the fault, the focal depth, the rake (slip) and the dip angles, and the slip amount. Then, we relate the ocean surface profiles calculated through Okada (1985) to the generalized N-wave profile defined by Tadepalli and Synolakis (1994) and identify the N-wave parameters. Since Tadepalli and Synolakis (1994) presented the maximum runup for an N-wave type initial condition for the canonical problem (a wave propagating first over a constant-depth segment and then over a sloping beach) and Kanoglu (2004, J. Fluid Mech., 513, 363-372) for a sloping beach, their results allow us to...
Bromine pretreated chitosan for adsorption of lead (II) from water
Rajendra Dongre; Minakshi Thakur; Dinesh Ghugal; Jostna Meshram
2012-10-01
Pollution by heavy metals like lead (II) is responsible for health hazards and environmental degradation. Adsorption is a prevalent method for removing heavy metal pollutants from water. This study explored the adsorption performance of 30% bromine pretreated chitosan for lead (II) abatement from water. Bromine pretreatment alters the porosity and specific surface area of chitosan through physicochemical interaction with cationic sites of the chitosan skeleton, besides imparting anionic alteration at the amino linkages of chitosan, removing lead (II) by chemical interactions at the additional active sites, as characterized by FTIR, SEM, DTA, and elemental analysis. Lead adsorption was studied in batch mode by varying parameters, viz. pH, bromine loading, sorbent dosage, initial lead concentration, contact time, and temperature. The adsorption equilibrium data were well fitted by the Freundlich isotherm, and the maximum sorption capacity of the 30% bromine pretreated chitosan sorbent was 1.755 g/kg, with 85-90% lead removal efficiency. Although the cost and applicability of the sorbent are unproven, in contrast to raw chitosan derivatives, activated carbons, and some resins, 30% bromine pretreated chitosan offers a benign and efficient lead abatement technique.
Modher A. Hussain
2009-01-01
Full Text Available Problem statement: Conventional techniques for removing dissolved heavy metals are only practical and cost-effective when applied to high-strength wastes with heavy metal ion concentrations greater than 100 ppm. The possibility of using a nonliving algal biomass to solve this problem was investigated in this study. Lead (II) was used because it has been reported to cause several disorders in humans. Approach: The nonliving algal biomass was obtained from a filamentous green alga, Spirogyra neglecta. The effects of initial concentration, contact time, pH, and temperature on the biosorption of lead (II) by the nonliving algal biomass were studied. The equilibrium isotherms and kinetics were obtained from batch adsorption experiments. The surface characteristics of the nonliving algal biomass were examined using scanning electron microscopy and Fourier transform infrared spectroscopy. The maximum adsorption capacity of the nonliving algal biomass was also determined. Results: The maximum adsorption capacity of lead (II) was affected by its initial concentration. The adsorption capacity of lead (II) increased with the pH and temperature of the lead (II) solution. The Langmuir isotherm model fitted the equilibrium data better than the Freundlich isotherm model. The adsorption kinetics followed the pseudo-second-order kinetic model. The nonliving algal biomass exhibited a cave-like, uneven surface texture with many irregularities. FTIR analysis of the algal biomass revealed the presence of carbonyl, amine, and carboxyl groups, which were responsible for the adsorption of lead (II). The maximum adsorption capacity (qmax) of lead (II) by the nonliving biomass of Spirogyra neglecta was 132 mg g-1. Conclusion: The maximum adsorption capacity for lead (II) by the nonliving biomass of Spirogyra neglecta was higher than reported for other biosorbents. Therefore, it has great potential for removing lead (II) from polluted water.
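Langmuir and Freundlich comparisons like the one above are commonly done on the linearized forms of the two isotherms; a minimal sketch with invented equilibrium data (not the Spirogyra measurements):

```python
import math

# Hypothetical equilibrium data (assumed for illustration):
# Ce = equilibrium concentration (mg/L), qe = uptake (mg/g)
Ce = [5, 10, 20, 40, 80]
qe = [30, 48, 70, 95, 118]

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Freundlich, linearised: ln qe = ln K + (1/n) ln Ce
slope_f, icept_f = linfit([math.log(c) for c in Ce], [math.log(q) for q in qe])
K, n_f = math.exp(icept_f), 1 / slope_f

# Langmuir, linearised: Ce/qe = Ce/qmax + 1/(qmax*b)
slope_l, icept_l = linfit(Ce, [c / q for c, q in zip(Ce, qe)])
qmax, b = 1 / slope_l, slope_l / icept_l

print(round(qmax, 1), round(n_f, 2))
```

Which model "fits better" is then judged by the correlation coefficient of each linearized regression; the fitted qmax should come out above the largest observed uptake if the Langmuir form is sensible.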
Lead Aprons Are a Lead Exposure Hazard.
Burns, Kevin M; Shoag, Jamie M; Kahlon, Sukhraj S; Parsons, Patrick J; Bijur, Polly E; Taragin, Benjamin H; Markowitz, Morri
2017-05-01
To determine whether lead-containing shields have lead dust on the external surface. Institutional review board approval was obtained for this descriptive study of a convenience sample of 172 shields. Each shield was tested for external lead dust via a qualitative rapid on-site test and a laboratory-based quantitative dust wipe analysis, flame atomic absorption spectrometry (FAAS). The χ² test was used to test the association with age, type of shield, lead sheet thickness, storage method, and visual and radiographic appearance. Sixty-three percent (95% confidence interval [CI]: 56%-70%) of the shields had detectable surface lead by FAAS and 50% (95% CI: 43%-57%) by the qualitative method. Lead dust by FAAS ranged from undetectable to 998 μg/ft². The quantitative detection of lead was significantly associated with the following: (1) visual appearance of the shield (1 = best, 3 = worst): 88% of shields that scored 3 had detectable dust lead; (2) type of shield: a greater proportion of the pediatric patient, full-body, and thyroid shields were positive than vests and skirts; (3) use of a hanger for storage: 27% of shields on a hanger were positive versus 67% not on hangers. Radiographic determination of shield intactness, thickness of interior lead sheets, and age of shield were unrelated to the presence of surface dust lead. Sixty-three percent of shields had detectable surface lead that was associated with visual appearance, type of shield, and storage method. Lead-containing shields are a newly identified, potentially widespread source of lead exposure in the health industry. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic, and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper, buck, boost, and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and the selection of the converter topology for a given loading is discussed. Circuit-oriented model development is discussed in detail, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided, that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed, that attain these bounds.
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\text{ GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl}\,y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
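The closing estimate is easy to sanity-check numerically; a rough order-of-magnitude sketch using standard values for the inputs (assumed figures, not numbers quoted from the paper):

```python
import math

# Standard order-of-magnitude inputs (assumptions), all in GeV
T_BBN = 1e-3                        # ~1 MeV, onset of Big Bang nucleosynthesis
M_pl = 1.2e19                       # Planck mass
y_e = math.sqrt(2) * 5.11e-4 / 246  # electron Yukawa, from m_e and observed v_h

v_h = T_BBN ** 2 / (M_pl * y_e ** 5)
print(f"{v_h:.0f} GeV")             # lands at the weak scale, O(100) GeV
```

With the electron Yukawa near 3e-6, the fifth power in the denominator is what brings the enormous ratio of scales down to a few hundred GeV.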
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result is derived in the model setting of Poisson's equation.
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum... Yagi-Uda arrays with equal and unequal spacing have also been optimised, with experimental verification...
Estimating the maximum potential revenue for grid-connected electricity storage
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and as a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the...
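The arbitrage-only part of such a calculation can be sketched with a tiny dynamic program over a discretised state of charge (prices and device parameters below are invented; the paper's actual formulation is an LP that also handles efficiency losses and regulation offers):

```python
prices = [20, 15, 30, 45, 25, 60]   # hypothetical hourly prices ($/MWh)
LEVELS = 4                           # state of charge discretised into quarters
STEP = 1 / LEVELS                    # MWh moved per level change (1 MWh device)

# value[s] = best revenue achievable so far while ending the hour at SOC level s
value = [0.0 if s == 0 else float("-inf") for s in range(LEVELS + 1)]
for p in prices:
    # reach level s from any previous level sp: charging (s > sp) costs money
    # at this hour's price, discharging (s < sp) earns it
    value = [max(value[sp] + (sp - s) * STEP * p
                 for sp in range(LEVELS + 1) if value[sp] > float("-inf"))
             for s in range(LEVELS + 1)]
print(max(value))   # maximum arbitrage revenue over the horizon
```

On this price series the optimum buys the full megawatt-hour at 15, sells at 45, buys again at 25, and sells at 60, which is the kind of "buy low, sell high" schedule the LP recovers at scale.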
Lead exposure among lead-acid battery workers in Jamaica.
Matte, T D; Figueroa, J P; Burr, G; Flesch, J P; Keenlyside, R A; Baker, E L
1989-01-01
To assess lead exposure in the Jamaican lead-acid battery industry, we surveyed three battery manufacturers (including 46 production workers) and 10 battery repair shops (including 23 battery repair workers). Engineering controls and respiratory protection were judged to be inadequate at battery manufacturers and battery repair shops. At manufacturers, 38 of 42 air samples for lead exceeded a work-shift time-weighted average concentration of 0.050 mg/m3 (range 0.030-5.3 mg/m3), and nine samples exceeded 0.50 mg/m3. Only one of seven air samples at repair shops exceeded 0.050 mg/m3 (range 0.003-0.066 mg/m3). Repair shop workers, however, had higher blood lead levels than manufacturing workers (65% vs. 28% with blood lead levels above 60 micrograms/dl, respectively). Manufacturing workers had a higher prevalence of safe hygienic practices and a recent interval of minimal production had occurred at one of the battery manufacturers. Workers with blood lead levels above 60 micrograms/dl tended to have higher prevalences of most symptoms of lead toxicity than did workers with lower blood lead levels, but this finding was not consistent or statistically significant. The relationship between zinc protoporphyrin concentrations and increasing blood lead concentrations was consistent with that described among workers in developed countries. The high risk of lead toxicity among Jamaican battery workers is consistent with studies of battery workers in other developing countries.
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations are thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
Mitigation of maximum world oil production: Shortage scenarios
Hirsch, Robert L. [Management Information Services, Inc., 723 Fords Landing Way, Alexandria, VA 22314 (United States)
2008-02-15
A framework is developed for planning the mitigation of the oil shortages that will be caused by world oil production reaching a maximum and going into decline. To estimate potential economic impacts, a reasonable relationship between percent decline in world oil supply and percent decline in world GDP was determined to be roughly 1:1. As a limiting case for decline rates, giant fields were examined. Actual oil production from Europe and North America indicated significant periods of relatively flat oil production (plateaus). However, before entering its plateau period, North American oil production went through a sharp peak and steep decline. Examination of a number of future world oil production forecasts showed multi-year rollover/roll-down periods, which represent pseudoplateaus. Consideration of resource nationalism posits an Oil Exporter Withholding Scenario, which could potentially overwhelm all other considerations. Three scenarios for mitigation planning resulted from this analysis: (1) A Best Case, where maximum world oil production is followed by a multi-year plateau before the onset of a monotonic decline rate of 2-5% per year; (2) A Middling Case, where world oil production reaches a maximum, after which it drops into a long-term, 2-5% monotonic annual decline; and finally (3) A Worst Case, where the sharp peak of the Middling Case is degraded by oil exporter withholding, leading to world oil shortages growing potentially more rapidly than 2-5% per year, creating the most dire world economic impacts. (author)
An improved maximum power point tracking method for photovoltaic systems
Tafticht, T.; Agbossou, K.; Doumbia, M.L.; Cheriti, A. [Institut de recherche sur l' hydrogene, Departement de genie electrique et genie informatique, Universite du Quebec a Trois-Rivieres, C.P. 500, Trois-Rivieres (QC) (Canada)
2008-07-15
In most of the maximum power point tracking (MPPT) methods currently described in the literature, the optimal operating point of photovoltaic (PV) systems is estimated by linear approximations. However, these approximations can lead to less-than-optimal operating conditions and hence considerably reduce the performance of the PV system. This paper proposes a new approach to determining the maximum power point (MPP) based on measurements of the open-circuit voltage of the PV modules; a nonlinear expression for the optimal operating voltage is developed from this open-circuit voltage. The approach is thus a combination of the nonlinear and perturbation and observation (P and O) methods. The experimental results show that the approach clearly improves the tracking efficiency of the maximum power available at the output of the PV modules. The new method reduces the oscillations around the MPP and increases the average efficiency of the MPPT obtained. The new MPPT method will deliver more power to any generic load or energy storage media. (author)
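The P&O half of such a method can be sketched in a few lines (the PV curve below is a toy stand-in, an assumption rather than the module model from the paper):

```python
def pv_power(v):
    """Toy PV curve (an assumption, not a module datasheet): current holds
    near 5 A and then collapses as v approaches open circuit at 40 V."""
    i = 5.0 * (1 - (v / 40.0) ** 7)
    return max(v * i, 0.0)

# Perturb and observe: keep stepping the operating voltage in the same
# direction while the measured power rises; reverse when it falls.
v, step, p_prev = 20.0, 0.5, 0.0
for _ in range(200):
    p = pv_power(v)
    if p < p_prev:
        step = -step        # overshot the peak, so reverse direction
    p_prev = p
    v += step
print(round(v, 1))  # settles into a small oscillation near the MPP (~29.7 V)
```

The residual oscillation around the peak, proportional to the fixed step size, is exactly what the paper's combined nonlinear-plus-P&O approach aims to reduce.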
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with standard backpropagation (SBP) on the XOR problem.
A probabilistic approach to the concept of Probable Maximum Precipitation
Papalexiou, S. M.; Koutsoyiannis, D.
2006-01-01
The concept of Probable Maximum Precipitation (PMP) is based on the assumptions that (a) there exists an upper physical limit of the precipitation depth over a given area at a particular geographical location at a certain time of year, and (b) this limit can be estimated based on deterministic considerations. The most representative and widespread estimation method for PMP is the so-called moisture maximization method. This method maximizes observed storms assuming...
Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels
2015-01-01
The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We...
Luciane M. Steffen
2004-08-01
Full Text Available Vocal fold paralysis (VFP) results from injury to the vagus nerve or one of its branches and may impair functions that require glottic closure. The maximum phonation time (MPT) is a test routinely applied to dysphonic patients to assess glottic efficiency, and it is frequently used in cases of VFP, in which its values are decreased. The classical clinical classification of the paralyzed vocal fold position as median, paramedian, intermediate, and abducted (or cadaveric) has been a subject of controversy. AIM: To verify the association and correlation between the MPT and the position of the paralyzed vocal fold (PVF), and between the MPT and the angle of deviation of the PVF; and to measure the angle of deviation from the midline for the different PVF positions and correlate it with the clinical classification. STUDY DESIGN: Retrospective clinical study. MATERIALS AND METHODS: The medical records and videoendoscopic examinations of 86 individuals with unilateral vocal fold paralysis were reviewed, and the angle of deviation of the PVF was measured using a computer program. RESULTS: The association and correlation between the MPT and each position assumed by the PVF were statistically significant only for /z/ in the median position. The association and correlation between the MPT and the angle of deviation of the PVF held for /i/ and /u/. When angle measurements were associated and correlated with position, statistical significance was observed for the abducted position. CONCLUSIONS: In this study it was not possible to determine the positions assumed by the PVF from the MPT, nor to correlate them with angle measurements.
周欣; 霍佳震
2011-01-01
To cope with small lot sizes and frequent deliveries in lean logistics, manufacturers have the choice of direct shipping or milk runs in inbound logistics. The variances of total lead time caused by uncertainty in each supplier's production processes differ between these two delivery modes and directly affect the optimal procurement policies and the choice of delivery mode. A multi-item supply chain consisting of one manufacturer and multiple suppliers is considered. Accounting for the effect of total lead time variance on procurement policies and the choice of delivery mode, we develop inventory models based on stochastic lead time and limited capacities for direct shipping and milk runs, respectively, and obtain the optimal procurement policies. A numerical example is presented to illustrate the results. Comparing the effect of parameter changes on total costs, conditions are given for the applicability of the two delivery modes.
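The effect the authors study, lead-time variability feeding into procurement policy, is visible even in the textbook reorder-point formula ROP = d·E[L] + z·sqrt(E[L]·σ_d² + d²·σ_L²); a sketch with invented numbers (not the paper's model, which additionally handles multiple items and vehicle-capacity constraints):

```python
import math

def reorder_point(d, sigma_d, L, sigma_L, z=1.65):
    """Reorder point under stochastic demand AND stochastic lead time.
    d, sigma_d: mean and std of daily demand; L, sigma_L: mean and std of
    lead time in days; z: safety factor (~1.65 for a 95% service level)."""
    safety = z * math.sqrt(L * sigma_d ** 2 + d ** 2 * sigma_L ** 2)
    return d * L + safety

# Hypothetical numbers: milk runs are assumed to smooth lead-time variance
# relative to direct shipping, so sigma_L is smaller at the same mean.
rop_direct = reorder_point(d=100, sigma_d=10, L=5, sigma_L=1.0)
rop_milk = reorder_point(d=100, sigma_d=10, L=5, sigma_L=0.3)
print(round(rop_direct), round(rop_milk))
```

Because the d²·σ_L² term dominates when demand is large, even a modest reduction in lead-time variance cuts the required safety stock substantially, which is why the choice of delivery mode matters for total cost.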