Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
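The intraclass correlation coefficients reported above can be illustrated with a one-way random-effects ICC(1,1); a minimal sketch with hypothetical ratings (the paper's exact ICC variant is not specified in this abstract):

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).
    ratings: (n_subjects, k_raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-subject and within-subject mean squares
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Five subjects' phonation times in seconds, three hypothetical raters
r = np.array([[21.0, 20.8, 21.1],
              [ 9.5,  9.6,  9.4],
              [15.2, 15.1, 15.3],
              [12.0, 12.2, 11.9],
              [18.4, 18.5, 18.3]])
print(round(icc_1_1(r), 3))
```

With raters in close agreement relative to between-subject spread, the ICC lands near 1, matching the order of magnitude reported above.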
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of the entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with a high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
Jun He; Xin Yao
2004-01-01
Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems. The time complexity of these algorithms for combinatorial optimisation has not been well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding a maximum cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
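The kind of algorithm analyzed above can be sketched as a (1+1) evolutionary algorithm over edge subsets; a toy illustration under assumed details (bitwise mutation, feasibility-preserving acceptance), not He and Yao's exact algorithm or analysis:

```python
import random

def is_matching(edges, subset):
    """True if the chosen edge indices share no vertex."""
    used = set()
    for i in subset:
        u, v = edges[i]
        if u in used or v in used:
            return False
        used.update((u, v))
    return True

def one_plus_one_ea(edges, iterations=2000, seed=1):
    """(1+1) EA: flip each edge bit with prob 1/m, accept a child
    that is still a matching and is no smaller than the parent."""
    random.seed(seed)
    m = len(edges)
    current = set()  # the empty set is a valid matching
    for _ in range(iterations):
        child = {i for i in range(m)
                 if (i in current) != (random.random() < 1.0 / m)}
        if is_matching(edges, child) and len(child) >= len(current):
            current = child
    return current

# Path graph 0-1-2-3-4: the maximum matching has 2 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
best = one_plus_one_ea(edges)
print(len(best))
```

The acceptance rule never decreases the matching size, so the run converges toward a maximal (and typically maximum) matching.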
Time series analysis by the Maximum Entropy method
Kirk, B.L.; Rust, B.W.; Van Winkle, W.
1979-01-01
The principal subject of this report is the use of the Maximum Entropy method for spectral analysis of time series. The classical Fourier method is also discussed, mainly as a standard for comparison with the Maximum Entropy method. Examples are given which clearly demonstrate the superiority of the latter method over the former when the time series is short. The report also includes a chapter outlining the theory of the method, a discussion of the effects of noise in the data, a chapter on significance tests, a discussion of the problem of choosing the prediction filter length, and, most importantly, a description of a package of FORTRAN subroutines for making the various calculations. Cross-referenced program listings are given in the appendices. The report also includes a chapter demonstrating the use of the programs by means of an example. Real time series like the lynx data and sunspot numbers are also analyzed. 22 figures, 21 tables, 53 references.
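The Maximum Entropy spectrum described above is the all-pole (autoregressive) spectrum obtained from Burg's recursion; a compact NumPy sketch (the report's FORTRAN package is not reproduced, and the function names here are illustrative):

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and residual power E."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    E = np.dot(x, x) / len(x)
    ef, eb = x[1:].copy(), x[:-1].copy()   # forward/backward errors
    for _ in range(order):
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        a = np.append(a, 0.0) + k * np.append(a, 0.0)[::-1]  # Levinson update
        E *= 1.0 - k * k
        ef, eb = ef[1:] + k * eb[1:], eb[:-1] + k * ef[:-1]
    return a, E

def me_spectrum(a, E, freqs):
    """Maximum Entropy PSD: P(f) = E / |A(e^{i 2 pi f})|^2."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return E / np.abs(z @ a) ** 2

rng = np.random.default_rng(0)
n = np.arange(200)
x = np.sin(2 * np.pi * 0.2 * n) + 0.01 * rng.standard_normal(200)
a, E = burg_ar(x, order=4)
freqs = np.linspace(0.01, 0.49, 481)
peak = freqs[np.argmax(me_spectrum(a, E, freqs))]
```

Even on this short record the pole pair locks onto the sinusoid, producing a sharp peak near the true normalized frequency 0.2, which is the behavior the report contrasts with the classical Fourier method.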
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
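For intuition about the inverse problem being solved, a naive alternative to the paper's estimator takes the matrix logarithm of the transition matrix observed at interval tau. This sketch assumes an exactly known transition matrix; it is not McGibbon's estimator and can fail for noisy or non-embeddable empirical matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

tau = 0.1  # observation interval

# A known 3-state rate matrix Q (off-diagonal rates, rows sum to zero)
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -0.9,  0.5],
              [ 0.2,  0.8, -1.0]])

# Discrete-time transition matrix seen when sampling every tau
T = expm(Q * tau)

# Naive inversion: Q_hat = log(T) / tau (principal matrix logarithm)
Q_hat = logm(T).real / tau
```

With an exact `T` the rate matrix is recovered to machine precision; the point of a proper maximum likelihood estimator is to remain well behaved when `T` is instead estimated from finite, noisy counts.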
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
Chekuri, Chandra; Moseley, Benjamin
2009-01-01
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/ε²)-competitive with (1+ε)-speed for unit-sized pages and with (2+ε)-speed for different sized pages. This improves on the algorithm in [12] which required (2+ε)-speed and (4+ε)-speed respectively. In addition we show that the algori...
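FIFO's behavior on unit-sized pages can be simulated directly; a small sketch under assumed timing conventions (each broadcast takes one time unit, and requests that have arrived by the start of a broadcast are satisfied by it):

```python
def fifo_max_response(requests):
    """requests: list of (arrival_time, page), unit-sized pages.
    At each integer time, broadcast the page whose oldest outstanding
    request arrived earliest; a broadcast starting at t completes at
    t + 1 and satisfies all requests for that page with arrival <= t."""
    pending = sorted(requests)          # ordered by arrival time
    t, worst = 0, 0
    while pending:
        t = max(t, pending[0][0])       # idle until the oldest request
        page = pending[0][1]
        served = [r for r in pending if r[1] == page and r[0] <= t]
        pending = [r for r in pending if r not in served]
        for arrival, _ in served:
            worst = max(worst, t + 1 - arrival)
        t += 1
    return worst

reqs = [(0, "A"), (0, "B"), (1, "A")]
print(fifo_max_response(reqs))  # -> 2
```

In the example, B waits two time units while A is broadcast first; the batching of both A requests into later broadcasts is exactly the property FIFO's competitive analysis exploits.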
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
Time-Reversal Acoustics and Maximum-Entropy Imaging
Berryman, J G
2001-08-22
Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
Dave, Jaydev K; Forsberg, Flemming
2009-09-01
The aim of this study was to develop a novel automated motion compensation algorithm for producing cumulative maximum intensity (CMI) images from subharmonic imaging (SHI) of breast lesions. SHI is a nonlinear contrast-specific ultrasound imaging technique in which pulses are received at half the frequency of the transmitted pulses. A Logiq 9 scanner (GE Healthcare, Milwaukee, WI, USA) was modified to operate in grayscale SHI mode (transmitting/receiving at 4.4/2.2 MHz) and used to scan 14 women with 16 breast lesions. Manual CMI images were reconstructed by temporal maximum-intensity projection of pixels traced from the first frame to the last. In the new automated technique, the user selects a kernel in the first frame and the algorithm then uses the sum of absolute difference (SAD) technique to identify motion-induced displacements in the remaining frames. A reliability parameter was used to estimate the accuracy of the motion tracking based on the ratio of the minimum SAD to the average SAD. Two thresholds (the mean and 85% of the mean reliability parameter) were used to eliminate images plagued by excessive motion and/or noise. The automated algorithm was compared with the manual technique for computational time, correction of motion artifacts, removal of noisy frames and quality of the final image. The automated algorithm compensated for motion artifacts and noisy frames. The computational time was 2 minutes compared with 60-90 minutes for the manual method. The quality of the motion-compensated CMI-SHI images generated by the automated technique was comparable to the manual method and provided a snapshot of the microvasculature showing interconnections between vessels, which was less evident in the original data. In conclusion, an automated algorithm for producing CMI-SHI images has been developed. It eliminates the need for manual processing and yields reproducible images, thereby increasing the throughput and efficiency of reconstructing CMI-SHI images.
On the maximum-entropy/autoregressive modeling of time series
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak is determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
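The pole-to-frequency correspondence described above can be demonstrated directly: fit an AR(2) model to a pure sinusoid and read the frequency off the pole angle. A minimal sketch:

```python
import numpy as np

f0 = 0.1                      # true normalized frequency
n = np.arange(100)
x = np.cos(2 * np.pi * f0 * n)

# Least-squares AR(2) fit: x[t] ~ c1*x[t-1] + c2*x[t-2]
A = np.column_stack([x[1:-1], x[:-2]])
c1, c2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Poles of the AR model: roots of z^2 - c1*z - c2 = 0
poles = np.roots([1.0, -c1, -c2])
freq = abs(np.angle(poles[0])) / (2 * np.pi)
print(round(freq, 4))  # -> 0.1
```

For a pure sinusoid the relation x[t] = 2cos(ω)x[t-1] - x[t-2] holds exactly, so the fitted poles land on the unit circle at e^{±iω} and the pole angle recovers the frequency, exactly as Prony's relation predicts.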
Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing
2014-06-01
Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. Methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, i.e., the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and consequently a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the SHAm value determination of anaerobic mixed cultures were evaluated. Additionally, this SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay is a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Thus, application of this approach is beneficial to establishing a stable anaerobic hydrogen-producing system.
Maximum Time Separation of Events in Cyclic Systems with Linear and Latest Timing Constraints
Jin, Fen; Hulgaard, Henrik; Cerny, Eduard
1998-01-01
The determination of the maximum time separations of events is important in the design, synthesis, and verification of digital systems, especially in interface timing verification. Many researchers have explored solutions to the problem with various restrictions: (a) on the type of constraints, and (b) on whether the events in the specification are allowed to occur repeatedly. When the events can occur only once, the problem is well solved. There are fewer concrete results for systems where the events can occur repeatedly. We extend the work by Hulgaard et al. for computing the maximum...
Maximum holding endurance time: Effects of load and load's center of gravity height.
Lee, Tzu-Hsien
2015-01-01
Manual holding tasks pose a potential risk for the development of musculoskeletal injuries since they are prone to induce localized muscle fatigue. Maximum holding endurance time is a significant parameter for the design of manual holding tasks. This study aimed to examine the effects of load and the load's center of gravity (COG) height on maximum holding endurance time. Fifteen young and healthy males were recruited as participants. A factorial design was used to examine the effects of load and load's COG height on maximum holding endurance time. Four levels of load (15%, 30%, 45%, and 60% of the participant's maximum holding capacity) and two levels of load's COG height in the box (0 cm and 40 cm above the handle position) were examined. Maximum holding endurance time decreased with increasing load and/or increasing load's COG height. The effect of load's COG height on maximum holding endurance time decreased with increasing load. Load, load's COG height, and the interaction of load and load's COG height significantly affected maximum holding endurance time. Practitioners should account for the effects of load, load's COG height, and their interaction when setting the working conditions of holding tasks.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Liu, Jian; Miller, William H.
2008-08-01
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 K and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Schaefer, Andreas; Wenzel, Friedemann
2017-04-01
technically trades time with space, considering subduction zones where we have likely not observed the maximum possible event yet. However, by identifying sources of the same class, the not-yet observed temporal behavior can be replaced by spatial similarity among different subduction zones. This database aims to enhance the research and understanding of subduction zones and to quantify their potential in producing mega earthquakes considering potential strong motion impact on nearby cities and their tsunami potential.
Zhou, Si-Da; Heylen, Ward; Sas, Paul; Liu, Li
2014-05-01
This paper investigates the problem of modal parameter estimation of time-varying structures under unknown excitation. A time-frequency-domain maximum likelihood estimator of modal parameters for linear time-varying structures is presented by adapting the frequency-domain maximum likelihood estimator to the time-frequency domain. The proposed estimator is parametric, that is, the linear time-varying structures are represented by a time-dependent common-denominator model. To adapt the existing frequency-domain estimator for time-invariant structures to the time-frequency methods for time-varying cases, an orthogonal polynomial and z-domain mapping hybrid basis function is presented, which has advantageous numerical conditioning and from which the modal parameters are convenient to calculate. A series of numerical examples evaluates and illustrates the performance of the proposed maximum likelihood estimator, and a group of laboratory experiments further validates the proposed estimator.
Stochastic behavior of a cold standby system with maximum repair time
Ashish Kumar
2015-09-01
The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority, and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. There is a single server who visits the system immediately as and when required. The server takes the unit under preventive maintenance after a maximum operation time at normal mode if one standby unit is available for operation. If the repair of the failed unit is not possible within a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time distributions of the unit are considered as exponentially distributed, while repair and maintenance time distributions are considered as arbitrary. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained by using the technique of semi-Markov process and RPT. To highlight the importance of the study, numerical results are also obtained for MTSF, availability and profit function.
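A stripped-down version of such a model can be checked by Monte Carlo simulation. The sketch below drops preventive maintenance, priority, and the maximum repair time, keeping only the two-unit cold standby core with exponential lifetimes and repairs, for which the mean time to system failure (MTSF) is analytically (2λ + μ)/λ²:

```python
import random

def mtsf_cold_standby(lam, mu, trials=20000, seed=42):
    """Monte Carlo mean time to system failure of a two-unit cold
    standby system: Exp(lam) unit lifetimes, Exp(mu) repairs, one
    repairman. Simplified model; analytic MTSF = (2*lam + mu)/lam**2."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        t = random.expovariate(lam)         # first failure; standby takes over
        while True:
            fail = random.expovariate(lam)  # operating unit's lifetime
            rep = random.expovariate(mu)    # failed unit's repair time
            if fail < rep:                  # second failure during repair
                t += fail                   # both units down: system failure
                break
            # Repair finished first: back to one operating + one standby.
            # By memorylessness, the next failure is a fresh Exp(lam) away.
            t += rep + random.expovariate(lam)
        total += t
    return total / trials

est = mtsf_cold_standby(1.0, 2.0)  # analytic value: (2*1 + 2)/1 = 4.0
```

Agreement between the simulated and analytic MTSF is a useful sanity check before layering on the paper's additional features (preventive maintenance, replacement after a maximum repair time, arbitrary repair distributions).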
A Maximum Time Difference Pipelined Arithmetic Unit Based on CMOS Gate Array
唐志敏; 夏培肃
1995-01-01
This paper describes a maximum time difference pipelined arithmetic chip, a 36-bit adder and subtractor based on a 1.5 μm CMOS gate array. The chip can operate at 60 MHz and consumes less than 0.5 W. The results are also studied, and a more precise model of the delay time difference is proposed.
Eun-Chan Kim
2016-06-01
The International Convention for the Control and Management of Ships' Ballast Water and Sediments was adopted by IMO (International Maritime Organization) on 13 February 2004. Fifty-seven ballast water management systems were granted basic approval of active substance by IMO, among which thirty-seven systems were granted final approval. This paper studies the maximum allowable dosage of active substances produced by ballast water management systems using electrolysis, which is an approved management method under IMO. The allowable dosage of active substances for electrolysis systems is expressed as TRO (Total Residual Oxidant). The maximum allowable dosage of TRO is a very important factor in ballast water management systems using electrolysis methods, because the system is controlled by the TRO value, and IMO approvals are given on the basis of the maximum allowable dosage of TRO for the treatment and discharge of ballast water. However, the approved maximum allowable TRO dosages differ widely among management systems, ranging from 1 to 15 ppm. The discrepancies of maximum allowable dosage among the management systems may depend on whether a filter is used or not, differences in the specifications of the electrolysis module, the kind of tested organisms, the number of individual organisms, and differences in water quality, etc. Ship owners are responsible for satisfying the performance standard of the IMO convention in the ports of each country and therefore need to carefully review whether a ballast water management system can satisfy the performance standard of the IMO convention or not.
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Estimating the Size and Timing of the Maximum Amplitude of Solar Cycle 24
Ke-Jun Li; Peng-Xin Gao; Tong-Wei Su
2005-01-01
A simple statistical method is used to estimate the size and timing of the maximum amplitude of the next solar cycle (cycle 24). Presuming cycle 23 to be a short cycle (as is more likely), the minimum of cycle 24 should occur about December 2006 (±2 months) and the maximum around March 2011 (±9 months), and the amplitude is 189.9 ± 15.5 if it is a fast riser, or about 136 if it is a slow riser. If we presume cycle 23 to be a long cycle (as is less likely), the minimum of cycle 24 should occur about June 2008 (±2 months) and the maximum about February 2013 (±8 months), and the maximum will be about 137 or 80, depending on whether the cycle is a fast or a slow riser.
Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models
Mikkelsen, Jakob Guldbæk; Hillebrand, Eric; Urga, Giovanni
In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite...
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to make use of the concepts of maximum aerobic speed (MAS) and time limit (tlim) in order to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance during a training season. To this same end, an intermittent training model was used, which was adapted to the value obtained for the time limit at maximum aerobic speed. During a 12 week training period, the maximum aerobic speed for a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, thus confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12 week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Kirkegaard, Poul Henning; Nielsen, Søren R.K.; Micaletti, R. C.
This paper considers estimation of the Maximum Softening Damage Indicator (MSDI) by using time-frequency system identification techniques for an RC-structure subjected to earthquake excitation. The MSDI relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfre...
Recommended maximum holding times for prevention of discomfort of static standing postures
Miedema, M.C.; Douwes, M.; Dul, J.
1997-01-01
The aim of the present study was threefold; (1) to analyze the influence of posture on the maximum holding time (MHT), (2) to study the possibility of classifying postures on the basis of MHT, and (3) to develop ergonomic recommendations for the MHT of categories of postures. For these purposes data
Efficiency at maximum power output of quantum heat engines under finite-time operation
Wang, Jianhui; He, Jizhou; Wu, Zhaoqi
2012-03-01
We study the efficiency at maximum power, ηm, of irreversible quantum Carnot engines (QCEs) that perform finite-time cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For QCEs in the reversible limit (long cycle period, zero dissipation), ηm becomes identical to the Carnot efficiency ηC=1-Tc/Th. For QCE cycles in which nonadiabatic dissipation and the time spent on two adiabats are included, the efficiency ηm at maximum power output is bounded from above by ηC/(2-ηC) and from below by ηC/2. In the case of symmetric dissipation, the Curzon-Ahlborn efficiency ηCA=1-√(Tc/Th) is recovered under the condition that the time allocation between the adiabats and the contact time with the reservoir satisfy a certain relation.
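The stated bounds on the efficiency at maximum power can be verified numerically, using the Curzon-Ahlborn efficiency ηCA = 1 − √(Tc/Th):

```python
import math

# For every temperature ratio r = Tc/Th in (0, 1), check
#   eta_C/2  <=  eta_CA  <=  eta_C/(2 - eta_C)
for r in [i / 100 for i in range(1, 100)]:
    eta_c = 1 - r                  # Carnot efficiency
    eta_ca = 1 - math.sqrt(r)      # Curzon-Ahlborn efficiency
    lower, upper = eta_c / 2, eta_c / (2 - eta_c)
    assert lower <= eta_ca <= upper
print("bounds hold on the sampled grid")
```

For example, at r = 0.5 the bounds are 0.25 and 0.333 while ηCA ≈ 0.293, sitting between them as the abstract states.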
Impact maturity times and citation time windows: The 2-year maximum journal impact factor
Dorta-Gonzalez, Pablo
2013-01-01
Journal metrics are employed for the assessment of scientific scholar journals from a general bibliometric perspective. In this context, the Thomson Reuters journal impact factors (JIF) are the most widely used citation-based indicators. The 2-year journal impact factor (2-JIF) counts citations to one and two year old articles, while the 5-year journal impact factor (5-JIF) counts citations to one to five year old articles. Nevertheless, these indicators are not comparable among fields of science for two reasons: (i) each field has a different impact maturity time, and (ii) there are systematic differences in publication and citation behaviour across disciplines. In fact, the 5-JIF first appeared in the Journal Citation Reports (JCR) in 2007 with the purpose of making impacts more comparable across fields in which impact matures slowly. However, there is no optimal fixed impact maturity time valid for all fields. In some of them two years provides a good performance whereas in others three or more years are...
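The two indicators differ only in the citation window, which can be written down directly; a toy computation with made-up counts for a hypothetical slow-maturing journal:

```python
def impact_factor(cites_by_age, items_by_age, window):
    """JIF with a citation window of `window` years: citations (in the
    census year) to items aged 1..window years, divided by the number of
    citable items published in those years.
    cites_by_age[k], items_by_age[k]: counts for items aged k+1 years."""
    return sum(cites_by_age[:window]) / sum(items_by_age[:window])

# Hypothetical journal whose citations peak at ages 3-4
cites = [30, 50, 80, 70, 40]     # citations to 1..5 year old items
items = [100, 100, 100, 100, 100]
jif2 = impact_factor(cites, items, 2)   # (30 + 50) / 200 = 0.40
jif5 = impact_factor(cites, items, 5)   # 270 / 500      = 0.54
```

Here the 5-JIF exceeds the 2-JIF because most citations arrive after the 2-year window closes, which is exactly the maturity-time effect the abstract discusses.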
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between the detector noise and peak resolution in the sense that an increase of the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column was also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks †
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-01
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs’ demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
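The intra-cluster step can be illustrated with a minimal max-consensus sketch; the topology and clock values here are made up, and delays and clock drift are ignored:

```python
def max_consensus(clocks, edges, rounds):
    """Each round, nodes adopt the larger of their own and each neighbour's clock."""
    clocks = list(clocks)
    for _ in range(rounds):
        new = list(clocks)
        for i, j in edges:
            m = max(clocks[i], clocks[j])
            new[i] = max(new[i], m)
            new[j] = max(new[j], m)
        clocks = new
    return clocks

# A 4-node chain: after diameter-many rounds every node holds the maximum clock.
synced = max_consensus([5, 1, 3, 2], edges=[(0, 1), (1, 2), (2, 3)], rounds=3)
```

In the cluster-based scheme, a loop like this runs inside each cluster, and overlapping nodes then exchange time messages between adjacent clusters.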
Maximum-Likelihood Detection for Energy-Efficient Timing Acquisition in NB-IoT
2016-01-01
Initial timing acquisition in narrow-band IoT (NB-IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at the lowest possible latency, because the RF transceiver, which dominates the downlink power consumption of an NB-IoT modem, has to be kept on throughout this time. Auto-correlation detectors offer low computational complexity from a signal processing point of view at the price of higher detection latency. In contrast a maximum likelihood cro...
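The cross-correlation detector contrasted above can be sketched as follows; the pilot sequence, lengths, and noiseless channel are illustrative assumptions:

```python
import numpy as np

def detect_offset(rx, seq):
    # ML timing detection in AWGN: pick the lag with the largest correlation.
    n = len(rx) - len(seq) + 1
    corr = [abs(np.dot(rx[k:k + len(seq)], seq)) for k in range(n)]
    return int(np.argmax(corr))

seq = np.array([1.0, -1.0, 1.0, 1.0, -1.0])           # known pilot sequence
rx = np.concatenate([np.zeros(7), seq, np.zeros(4)])  # sequence starts at sample 7
```

The latency trade-off arises because this full cross-correlation must be evaluated over the whole search window while the receiver stays on.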
Prediction of maximum magnitude and origin time of reservoir induced seismicity
Anonymous
2001-01-01
This paper deals with the prediction of the potential maximum magnitude and origin time of reservoir-induced seismicity (RIS). The seismological and geological factors and signs of RIS are studied, and the information quantity for the magnitude of induced seismicity provided by them is calculated. In terms of information quantity, the maximum possible magnitude of RIS is determined. The changes of seismic frequency with time are studied using the grey model method, and the time of the largest change rate is taken as the origin time of the main shock. The feasibility of these methods for predicting magnitude and time has been tested on the reservoir-induced seismicity at the Xinfengjiang reservoir, China, and the Koyna reservoir, India.
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the wearer's motion, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the interval between R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast Fourier transform (FFT) to realize the correlation technique, such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
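The R-R interval search can be illustrated with an autocorrelation toy example on an idealized pulse train. The sampling rate, pulse model, and search band are assumptions; the real system uses a subspace baseline filter and FFT-based correlation rather than this direct computation:

```python
import numpy as np

fs = 250                                   # assumed sampling rate, Hz
ecg = np.zeros(10 * fs)
ecg[np.arange(0, len(ecg), 200)] = 1.0     # idealized R peaks every 200 samples

x = np.abs(ecg)                            # the nonlinear absolute operation
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
lo, hi = int(0.4 * fs), int(2.0 * fs)      # search lags covering 30-150 bpm
lag = lo + int(np.argmax(acf[lo:hi]))      # strongest periodicity in the band
bpm = 60.0 * fs / lag
```

A real recording would of course show jittered, noisy peaks; the autocorrelation peak then broadens, which is where the likelihood-based refinement pays off.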
STATIONARITY OF ANNUAL MAXIMUM DAILY STREAMFLOW TIME SERIES IN SOUTH-EAST BRAZILIAN RIVERS
Jorge Machado Damázio
2015-08-01
DOI: 10.12957/cadest.2014.18302. The paper presents a statistical analysis of annual maximum daily streamflows between 1931 and 2013 in South-East Brazil, focused on detecting and modelling non-stationarity. Flood protection for the large valleys in South-East Brazil is provided by multiple-purpose reservoir systems built during the 20th century, whose design and operation plans assume stationarity of the historical flood time series. Land-cover changes and the rapidly increasing level of atmospheric greenhouse gases over the last century may be affecting flood regimes in these valleys, so nonstationary modelling may need to be applied to reassess dam safety and flood-control operation rules at the existing reservoir systems. Six annual maximum daily streamflow time series are analysed. The time series were plotted together with fitted smooth loess functions, and non-parametric statistical tests were performed to check the significance of apparent trends shown by the plots. Non-stationarity is modelled by fitting univariate extreme value distribution functions whose location parameter varies linearly with time. Stationary and non-stationary models are compared with the likelihood ratio statistic. In four of the six analysed time series, non-stationary modelling outperformed stationary modelling. Keywords: Stationarity; Extreme Value Distributions; Flood Frequency Analysis; Maximum Likelihood Method.
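A non-parametric trend check of the kind mentioned can be sketched with the Mann-Kendall S statistic; this is a common choice for hydrological series, though the paper does not name its specific tests:

```python
def mann_kendall_s(series):
    # S sums the signs of all pairwise differences; S > 0 hints at an
    # upward trend, S < 0 at a downward one, S near 0 at no trend.
    s = 0
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            d = series[j] - series[i]
            s += (d > 0) - (d < 0)
    return s
```

Significance is then usually assessed via the normal approximation to the variance of S under the no-trend null.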
The Research of Car-Following Model Based on Real-Time Maximum Deceleration
Longhai Yang
2015-01-01
This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated from vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is established accordingly, modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different road conditions. In the first, the vehicles drive on a single road surface, taking dry asphalt as the example in this paper. In the second, the vehicles drive onto a different surface; this paper analyzes the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model can not only ensure driving security and comfort but also control steady driving of the vehicle with a smaller time headway than the IDM.
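For reference, the standard IDM acceleration law that the new model modifies can be sketched as follows; the parameter values are typical textbook defaults, not the paper's:

```python
import math

def idm_accel(v, dv, gap, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration.

    v: own speed (m/s); dv: approach rate to the leader (v - v_lead);
    gap: bumper-to-bumper distance (m); v0: desired speed; T: time headway;
    a: max acceleration; b: comfortable deceleration; s0: minimum gap.
    """
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

The paper's variant replaces the desired-minimum-gap term with one driven by the real-time maximum deceleration estimated from vehicle dynamics, e.g. on ice versus dry asphalt.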
Fiebig, H R
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?
von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim
2003-04-01
New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that there was a maximum number of habitable planets around the time of Earth's origin. If at all, interstellar panspermia was most probable at that time and may have kick-started life on our planet.
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
Anwar A. Jabbar
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (which employs multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU-based systems.
Onset of effects of testosterone treatment and time span until maximum effects are achieved
Saad, Farid; Aversa, Antonio; Isidori, Andrea M; Zafalon, Livia; Zitzmann, Michael; Gooren, Louis
2011-01-01
Objective Testosterone has a spectrum of effects on the male organism. This review attempts to determine, from published studies, the time-course of the effects induced by testosterone replacement therapy, from their first manifestation until maximum effects are attained. Design Literature data on testosterone replacement. Results Effects on sexual interest appear after 3 weeks, plateauing at 6 weeks, with no further increments expected beyond. Changes in erections/ejaculations may require up to 6 months. Effects on quality of life manifest within 3–4 weeks, but maximum benefits take longer. Effects on depressive mood become detectable after 3–6 weeks, with a maximum after 18–30 weeks. Effects on erythropoiesis are evident at 3 months, peaking at 9–12 months. Prostate-specific antigen and volume rise marginally, plateauing at 12 months; further increases should be attributed to aging rather than therapy. Effects on lipids appear after 4 weeks and are maximal after 6–12 months. Insulin sensitivity may improve within a few days, but effects on glycemic control become evident only after 3–12 months. Changes in fat mass, lean body mass, and muscle strength occur within 12–16 weeks, stabilize at 6–12 months, but can continue marginally over years. Effects on inflammation occur within 3–12 weeks. Effects on bone are detectable after 6 months and continue for at least 3 years. Conclusion The time-course of the spectrum of effects of testosterone shows considerable variation, probably related to the pharmacodynamics of the testosterone preparation. Genomic and non-genomic effects, androgen receptor polymorphism, and intracellular steroid metabolism further contribute to this diversity.
Producing complex spoken numerals for time and space
Meeuwissen, M.H.W.
2004-01-01
This thesis addressed the spoken production of complex numerals for time and space. The production of complex numerical expressions like those involved in telling time (e.g., 'quarter to four') or producing house numbers (e.g., 'two hundred forty-five') has been almost completely ignored. Yet, adult
Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition
Wang, H.
2017-05-26
The conventional time-reversal imaging approach for microseismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace into a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging that is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variance over a window moving along both the space and time axes to create a highly resolved passive-event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which makes the maximum variance imaging condition efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model, both of which show reasonably good results.
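The windowed-variance picking step can be sketched as follows; the window size and the toy wavefield are assumptions, with a small oscillatory patch standing in for a focused source signature:

```python
import numpy as np

def max_variance_pick(wavefield, win):
    # wavefield: back-propagated amplitudes, shape (nt, nx).
    # Score each (time, space) window by its amplitude variance and
    # return the window origin with the highest score.
    nt, nx = wavefield.shape
    best, best_ix = -1.0, None
    for t0 in range(nt - win + 1):
        for x0 in range(nx - win + 1):
            v = wavefield[t0:t0 + win, x0:x0 + win].var()
            if v > best:
                best, best_ix = v, (t0, x0)
    return best_ix

w = np.zeros((10, 10))
w[4:6, 4:6] = [[1.0, -1.0], [-1.0, 1.0]]   # oscillatory "source" signature
```

Because variance rewards a sign-changing wavelet over a smooth noise background, this favors a true focus over smeared artifacts, at negligible cost next to the modeling runs.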
Abhishek Khanna
2012-01-01
We revisit the problem of optimal power extraction in four-step cycles (two adiabatic and two heat-transfer branches) when the finite-rate heat transfer obeys a linear law and the heat reservoirs have finite heat capacities. The heat-transfer branch follows a polytropic process in which the heat capacity of the working fluid stays constant. For the case of an ideal gas as the working fluid and a given switching time, it is shown that maximum work is obtained at the Curzon-Ahlborn efficiency. Our expressions clearly show the dependence on the relative magnitudes of the heat capacities of the fluid and the reservoirs. Many previous formulae, including those for infinite reservoirs, infinite-time cycles, and Carnot-like and non-Carnot-like cycles, are recovered as special cases of our model.
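The Curzon-Ahlborn efficiency referred to above is the efficiency at maximum power output of an endoreversible engine:

```python
import math

def curzon_ahlborn(t_cold, t_hot):
    # Efficiency at maximum power: 1 - sqrt(Tc/Th),
    # always below the Carnot limit 1 - Tc/Th.
    return 1.0 - math.sqrt(t_cold / t_hot)
```

For example, with reservoirs at 300 K and 1200 K the Curzon-Ahlborn efficiency is 0.5, against a Carnot limit of 0.75.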
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization wants to achieve the maximum possible benefit from software testing with minimum resources. Testing time and cost need to be optimized to achieve a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
Maximum Likelihood Timing and Carrier Synchronization in Burst-Mode Satellite Transmissions
Morelli Michele
2007-01-01
This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
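The decoupled estimation order (frequency first, then phase in closed form) can be sketched on a noiseless alternating pilot. The symbol period and parameter values are made up, and the timing estimate is omitted; this is a generic sketch, not the paper's exact estimator:

```python
import numpy as np

T = 1e-5                                   # assumed symbol period, s
f_true, phi_true = 1234.0, 0.7             # hypothetical offset (Hz) and phase (rad)
s = np.array([1.0, -1.0] * 32)             # alternating binary pilot
k = np.arange(len(s))
r = s * np.exp(1j * (2 * np.pi * f_true * k * T + phi_true))  # noiseless burst

z = r * s                                  # strip the pilot modulation
# Frequency from the average phase increment between consecutive samples:
f_hat = np.angle(np.sum(z[1:] * np.conj(z[:-1]))) / (2 * np.pi * T)
# Phase in closed form after removing the estimated frequency ramp:
phi_hat = np.angle(np.sum(z * np.exp(-2j * np.pi * f_hat * k * T)))
```

With noise, the same averaging structure is what pushes the accuracy toward the Cramer-Rao bounds mentioned above.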
MLPnP - A Real-Time Maximum Likelihood Solution to the Perspective-n-Point Problem
Urban, S.; Leitloff, J.; Hinz, S.
2016-06-01
In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal, but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimate of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties while remaining real-time capable. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.
JIN Qibing; LIU Qie; WANG Qi; TIAN Yuqi; WANG Yuanfei
2013-01-01
An IMC (Internal Model Control) controller based on robust tuning can improve the robustness and dynamic performance of a system. In this paper, the robustness degree of the control system is investigated in depth based on the Maximum Sensitivity (Ms). An analytical relationship is obtained between the robustness specification and the controller parameters, which gives a clear design criterion for a robust IMC controller. Moreover, a novel and simple IMC-PID (Proportional-Integral-Derivative) tuning method is proposed by converting the IMC controller to PID form in the time domain rather than the frequency domain adopted in some conventional IMC-based methods. Hence, the presented IMC-PID gives good performance with a specified robustness degree. The new IMC-PID method is compared with other classical IMC-PID rules, showing its flexibility and feasibility for a wide range of plants.
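As a point of comparison, one classical frequency-domain IMC-PID rule for a first-order-plus-dead-time plant (using a first-order Padé approximation of the delay) looks like this; it is a textbook baseline, not the paper's time-domain Ms-based rule:

```python
def imc_pid_fopdt(K, tau, theta, lam):
    """IMC-PID for G(s) = K * exp(-theta*s) / (tau*s + 1).

    lam is the IMC filter time constant: larger lam means a more
    robust but slower closed loop. Returns (Kc, Ti, Td).
    """
    Kc = (tau + theta / 2.0) / (K * (lam + theta / 2.0))   # proportional gain
    Ti = tau + theta / 2.0                                  # integral time
    Td = tau * theta / (2.0 * tau + theta)                  # derivative time
    return Kc, Ti, Td
```

The single tuning knob lam plays the same role as the robustness degree in the Ms-based design: detuning it trades speed for robustness.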
Glottal closure instant and voice source analysis using time-scale lines of maximum amplitude
Christophe D’Alessandro; Nicolas Sturmel
2011-10-01
Time-scale representation of voiced speech is applied to voice quality analysis by introducing the Line of Maximum Amplitude (LoMA) method. This representation takes advantage of the tree patterns observed for voiced speech periods in the time-scale domain. For each period, the optimal LoMA is computed by linking amplitude maxima at each scale of a wavelet transform, using a dynamic programming algorithm. A time-scale analysis of the linear acoustic model of speech production shows several interesting properties. The LoMA points to the glottal closure instants. The LoMA phase delay is linked to the voice open quotient. The cumulated amplitude along the LoMA is related to voicing amplitude. The LoMA spectral centre of gravity is an indication of voice spectral tilt. Following these theoretical considerations, experimental results are reported. Comparative evaluation demonstrates that the LoMA is an effective method for the detection of Glottal Closure Instants (GCI). The effectiveness of LoMA analysis for open quotient, amplitude, and spectral tilt estimation is also discussed with the help of some examples.
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
Köcher, S. S.; Heydenreich, T.; Zhang, Y.; Reddy, G. N. M.; Caldarelli, S.; Yuan, H.; Glaser, S. J.
2016-04-01
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the Compute Unified Device Architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirements.
Chiuh Cheng Chyu
2012-06-01
This paper studies the unrelated parallel machine scheduling problem with three minimization objectives – makespan, maximum earliness, and maximum tardiness (MET-UPMSP). The last two objectives combined relate to the just-in-time (JIT) performance of a solution. Three hybrid algorithms are presented to solve the MET-UPMSP: reactive GRASP with path relinking, a dual-archived memetic algorithm (DAMA), and SPEA2. In order to improve solution quality, min-max matching is included in the decoding scheme of each algorithm. An experiment is conducted to evaluate the performance of the three algorithms, using 100 (jobs) × 3 (machines) and 200 × 5 problem instances with three combinations of two due-date factors (tightness and range). The numerical results indicate that DAMA performs best and GRASP second-best for most problem instances on three performance metrics: HVR, GD, and Spread. The experimental results also show that incorporating min-max matching into the decoding scheme significantly improves solution quality for the two population-based algorithms. It is worth noting that the solutions produced by DAMA with matching decoding can be used as a benchmark to evaluate the performance of other algorithms.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
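The filter-sum idea (add the white-noise and power-law filters and then form the covariance, rather than adding the two covariances in quadrature) can be sketched as follows. The Hosking fractional-differencing recursion is a standard way to build the power-law filter; the amplitudes are placeholders:

```python
import numpy as np

def powerlaw_filter(alpha, n):
    # Hosking recursion: filtering white noise with h yields a 1/f^alpha process.
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1.0 + alpha / 2.0) / k
    return h

def filter_sum_covariance(alpha, n, sig_pl, sig_wh):
    h = powerlaw_filter(alpha, n)
    F = np.zeros((n, n))
    for i in range(n):                     # lower-triangular Toeplitz filter matrix
        F[i, :i + 1] = h[i::-1]
    G = sig_pl * F + sig_wh * np.eye(n)    # add the filters, not the covariances
    return G @ G.T                         # data covariance C = G G^T
```

Because C factors as G G^T by construction, its inversion reduces to triangular solves with G, without requiring C itself to be Toeplitz.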
The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare
Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland)]; Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]; Beer, J. [EAWAG, Duebendorf (Switzerland)]
1997-09-01
Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. ¹⁰Be and ²⁶Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000±1800 years ago. (author) 1 fig., 5 refs.
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water by chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
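The difference between river and Euclidean distances that motivates river BME can be illustrated on a toy Y-shaped network (coordinates and node names invented here), with Dijkstra's algorithm giving the along-network distance:

```python
import heapq
import math

def dijkstra(graph, src):
    """Shortest along-network distances from src.
    graph: {node: [(neighbor, edge_length), ...]}"""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical Y-shaped river: tributaries A and B join at confluence C.
coords = {"A": (0.0, 1.0), "B": (2.0, 1.0), "C": (1.0, 0.0)}
def seg(p, q):
    return math.dist(coords[p], coords[q])

net = {"A": [("C", seg("A", "C"))],
       "B": [("C", seg("B", "C"))],
       "C": [("A", seg("A", "C")), ("B", seg("B", "C"))]}

river_AB = dijkstra(net, "A")["B"]               # distance along the river
euclid_AB = math.dist(coords["A"], coords["B"])  # straight-line distance
```

Two monitoring sites on different branches are close in Euclidean space but far apart along the water flow path, which is why a river-distance covariance changes the estimation.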
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
2016-12-01
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
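A minimal sketch of the argmax-consistency idea, assuming a real-valued matched-filter magnitude sequence and a simple majority-agreement threshold (both invented here, not the paper's statistic): the peak of a true correlation sits at the same offset in every code period, whereas noise peaks wander.

```python
import numpy as np
from collections import Counter

def acquire(mf_out, period, min_agreement=0.8):
    """Declare acquisition when the argmax position of the matched-filter
    output is consistent across successive spreading-code periods.
    Returns (acquired, timing_offset)."""
    n_blocks = len(mf_out) // period
    peaks = [int(np.argmax(mf_out[i * period:(i + 1) * period]))
             for i in range(n_blocks)]
    offset, count = Counter(peaks).most_common(1)[0]
    return count / n_blocks >= min_agreement, offset

period = 64
good = np.zeros(8 * period)
good[13::period] = 10.0            # correlation peak at offset 13 every period
bad = np.zeros(8 * period)
for i in range(8):
    bad[i * period + i] = 10.0     # peak position drifts each period (noise-like)

acq_good = acquire(good, period)   # consistent -> acquired at offset 13
acq_bad = acquire(bad, period)     # inconsistent -> no acquisition
```

Note that no SNR estimate enters the decision; only the agreement of peak locations does, which is the method's selling point.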
Guo-Jheng Yang
2013-08-01
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be embedded in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with maximum entropy 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold's cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used to not only make encryption more stringent but also to enhance the security of the digital media.
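A toy illustration of using the logistic map at u = 4 as a keystream for hiding data; the threshold-to-bits rule, the seed value, and the XOR scheme are illustrative assumptions, not the paper's algorithm:

```python
def logistic_keystream(x0, n, burn=100):
    """Bit stream from the logistic map x -> 4x(1-x) (parameter u = 4,
    the fully chaotic regime). Thresholding at 0.5 yields roughly
    unbiased bits."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    bits = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def xor_bytes(data, key_bits):
    """Pack keystream bits into bytes and XOR: a toy stream cipher."""
    out = bytearray()
    for i, b in enumerate(data):
        k = 0
        for j in range(8):
            k = (k << 1) | key_bits[8 * i + j]
        out.append(b ^ k)
    return bytes(out)

msg = b"secret totem"
ks = logistic_keystream(x0=0.3141592653, n=8 * len(msg))
cipher = xor_bytes(msg, ks)
plain = xor_bytes(cipher, ks)          # same keystream recovers the message
```

A tiny change to x0 produces a completely different keystream; this sensitivity is exactly what the authors exploit by perturbing the initial value (e.g. with the current time) to generate different chaos sequences.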
Ertas, Gokhan; Gulcur, H Ozcan; Tunaci, Mehtap
2008-05-01
The effectiveness of morphological descriptors based on normalized maximum intensity-time ratio (nMITR) maps, generated using a 3 x 3 pixel moving mask on dynamic contrast-enhanced magnetic resonance (MR) mammograms, is studied for assessment of malignancy. After a rough indication of the volume of interest on the nMITR maps, lesions are automatically segmented. Two-dimensional (2D) convexity, normalized complexity, extent, and eccentricity, as well as three-dimensional (3D) versions of these descriptors and contact surface area ratio, are computed. On a data set of dynamic contrast-enhanced (DCE) MR mammograms from 51 women containing 26 benign and 32 malignant lesions, 3D convexity, complexity, and extent are found to reflect aggressiveness of malignancy better than the 2D descriptors. The contact surface area ratio, which is easily adaptable to different imaging resolutions, is found to be the most significant and accurate descriptor (75% sensitivity, 88% specificity, 89% positive predictive value, and 74% negative predictive value).
The Maximum Coping Time Analysis of the ELAP for the OPR1400
Shin, Sung Hyun; Hah, Chang Joo [KINGS, Ulsan (Korea, Republic of); Jung, Si Chae; Lee, Chang Gyun [KEPCO E and C, Daejeon (Korea, Republic of)
2014-05-15
There have been many evaluations of and recommendations for the extended Station Blackout (SBO) condition of nuclear power plants; for example, 'SECY-11-0093/0137' is a recommendation of the NRC and 'WCAP-17601-P' is an evaluation by the PWROG. The extended loss of AC power (ELAP) can be defined as an extended (or prolonged) SBO: a Loss of Offsite Power (LOOP) combined with loss of all Emergency Diesel Generators (EDG) and the Alternative Alternating Current (AAC) source, while a Direct Current (DC) source remains available. This evaluation provides NSSS responses to an ELAP for the OPR1000 unit, the phenomena which occur during the ELAP, and the maximum coping time until a core uncovery condition. It is assumed for this case that sufficient SG secondary makeup inventory exists or can be attained, so that the duration of the ELAP prior to core damage depends solely upon the loss of inventory from the RCS. Even with a limited RCS cooldown and depressurization, and conservatively high assumed RCP seal leakage, the plant can be sustained for over 65 hours prior to core uncovery.
Johann A. Briffa
2014-06-01
In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code
Adel Ahmadi
2015-01-01
Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity. More specifically, it was possible to verify that application of the new algorithm with 1024-QAM would decrease the computational complexity compared to the state-of-the-art solution with 16-QAM.
Susanne Wegener
After recanalization, cerebral blood flow (CBF) can increase above baseline in cerebral ischemia. However, the significance of post-ischemic hyperperfusion for tissue recovery remains unclear. To analyze the course of post-ischemic hyperperfusion and its impact on vascular function, we used magnetic resonance imaging (MRI) with pulsed arterial spin labeling (pASL) and measured CBF quantitatively during and after a 60 minute transient middle cerebral artery occlusion (MCAO) in adult rats. We added a 5% CO2 challenge to analyze vasoreactivity in the same animals. Results from MRI were compared to histological correlates of angiogenesis. We found that CBF in the ischemic area recovered within one day and reached values significantly above contralateral thereafter. The extent of hyperperfusion changed over time, which was related to final infarct size: early (day 1) maximal hyperperfusion was associated with smaller lesions, whereas a later (day 4) maximum indicated large lesions. Furthermore, after initial vasoparalysis within the ischemic area, vasoreactivity on day 14 was above baseline in a fraction of animals, along with a higher density of blood vessels in the ischemic border zone. These data provide further evidence that late post-ischemic hyperperfusion is a sequel of ischemic damage in regions that are likely to undergo infarction. However, it is transient and its resolution coincides with re-gaining of vascular structure and function.
Effects of preload 4 repetition maximum on 100-m sprint times in collegiate women.
Linder, Elizabeth E; Prins, Jan H; Murata, Nathan M; Derenne, Coop; Morgan, Charles F; Solomon, John R
2010-05-01
The purpose of this study was to determine the effects of postactivation potentiation (PAP) on track-sprint performance after a preload set of 4 repetition maximum (4RM) parallel back half-squat exercises in collegiate women. All subjects (n = 12) participated in 2 testing sessions over a 3-week period. During the first testing session, subjects performed the Controlled protocol, consisting of a 4-minute standardized warm-up, followed by a 4-minute active rest, a 100-m track sprint, a second 4-minute active rest, finalized with a second 100-m sprint. The second testing session, the Treatment protocol, consisted of a 4-minute standardized warm-up, followed by a 4-minute active rest, a sprint, a second 4-minute active rest, a warm-up of 4RM parallel back half-squats, a third 9-minute active rest, finalized with a second sprint. The results indicated a significant improvement of 0.19 seconds (p < 0.05) when the sprint was preceded by the 4RM back-squat protocol during Treatment. The standardized effect size, d, was 0.82, indicating a large effect. Additionally, the results indicated that mean sprint times would be expected to improve by 0.04-0.34 seconds (p < 0.05). The findings suggest that performing a 4RM parallel back half-squat warm-up before a track sprint will have a positive PAP effect, decreasing track-sprint times. Track coaches looking for the "competitive edge" (PAP effect) may re-warm up their sprinters during meets.
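One plausible way the reported d of 0.82 could arise in a within-subject design (the abstract does not state the exact formula used) is the mean change in sprint time divided by the standard deviation of the changes:

```python
import math

def cohens_d_paired(diffs):
    """Standardized effect size for paired data: mean of the per-subject
    changes divided by the sample standard deviation of those changes.
    This is one common convention; the study's own computation may differ."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    return mean / sd

# Toy per-subject sprint-time improvements (seconds), invented values:
d = cohens_d_paired([1.0, 2.0, 3.0])
```

By the usual rule of thumb (0.2 small, 0.5 medium, 0.8 large), a d of 0.82 is indeed a large effect.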
Jiang Zhu
2014-01-01
Some delta-nabla type maximum principles for second-order dynamic equations on time scales are proved. By using these maximum principles, the uniqueness theorems of the solutions, the approximation theorems of the solutions, the existence theorem, and construction techniques of the lower and upper solutions for second-order linear and nonlinear initial value problems and boundary value problems on time scales are proved; the oscillation of second-order mixed delta-nabla differential equations is discussed; and some maximum principles for second-order mixed forward and backward difference dynamic systems are proved.
FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.
Maxim Nikolaievich Shokhirev
The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.
The relativity of time perception produced by facial emotion stimuli.
Lee, Kwang-Hyuk; Seelam, Kalyan; O'Brien, Tom
2011-12-01
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm where participants reproduced the duration of a facial emotion stimulus using an oval-shape stimulus or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend of under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to a facial emotion produces the relativity of time perception.
Becker, L. W. M.; Sejrup, H. P.; Hjelstuen, B. O. B.; Haflidason, H.
2016-12-01
The extent of the NW European ice sheet during the Last Glacial Maximum is fairly well constrained to, at least in periods, the shelf edge. However, the exact timing and varying activity of the largest ice stream, the Norwegian Channel Ice Stream (NCIS), remains uncertain. We here present three sediment records, recovered proximal and distal to the upper NW European continental slope. All age models for the cores are constructed in the same way and based solely on 14C dating of planktonic foraminifera. The sand-sized sediments in the discussed cores are believed to have been transported primarily by ice rafting. All records suggest ice streaming activity between 25.8 and 18.5 ka BP. However, the core proximal to the mouth of the Norwegian Channel (NC) shows distinct periods of activity and periods of very little coarse sediment input. From this there appear to be at least three well-defined periods of ice streaming activity, which each lasted 1.5 to 2 ka, with "pauses" of several hundred years in between. The same core shows a conspicuous variation in several proxies and sediment colour within the first peak of ice stream activity, compared to the second and third peaks. The light grey colour of the sediment was earlier attributed to Triassic chalk grains, yet all "chalk" grains are in fact mollusc fragments. The low magnetic susceptibility values and the high Ca, high Sr and low Fe content compared to the other peaks suggest a different provenance for the material of the first peak. We suggest, therefore, that the origin of this material is rather the British Irish Ice Sheet (BIIS) and not the Fennoscandian Ice Sheet (FIS). Earlier studies have shown an extent of the BIIS at least to the NC, whereas ice from the FIS likely stayed within the boundaries of the NC. A possible scenario for the different provenance could therefore be the build-up of the BIIS into the NC until it merged with the FIS. At this point the BIIS calved off the shelf edge southwest of the mouth of
Djeison Cesar Batista
2011-09-01
Thermal rectification of wood was developed in the 1940s and has been widely studied and applied in Europe. In Brazil, research on this technique is still scarce, but it has gained attention recently. The aim of this study was to evaluate the influence of the time and temperature of rectification on the reduction of maximum swelling of Eucalyptus grandis wood. According to the results obtained, it is possible to achieve reductions of about 50% in the maximum volumetric swelling of Eucalyptus grandis wood. Better results were obtained at 230°C of thermal rectification than at 200°C. The factor temperature was more significant than time, since there was no significant difference between the times used (1, 2 and 3 hours). There was no significant interaction between the factors time and temperature.
Kozlowski, Dawid; Worthington, Dave
2015-01-01
Many public healthcare systems struggle with excessive waiting lists for elective patient treatment. Different countries address this problem in different ways, and one interesting method entails a maximum waiting time guarantee. Introduced in Denmark in 2002, it entitles patients to treatment at a private hospital in Denmark or at a hospital abroad if the public healthcare system is unable to provide treatment within the stated maximum waiting time guarantee. Although clearly very attractive in some respects, many stakeholders have been very concerned about the negative consequences of the policy on the utilization of public hospital resources. This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop continuous-time Markov and simulation models for use by hospital planners and strategic decision makers.
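The continuous-time Markov models mentioned above can be illustrated with the classical M/M/c queue, for which the probability that a patient's wait exceeds a guarantee t has a closed form via the Erlang-C formula; this is a generic textbook sketch, not the authors' model:

```python
import math

def erlang_c(c, rho):
    """Probability an arriving patient must wait in an M/M/c queue
    (Erlang-C), with per-server utilisation rho = lambda/(c*mu) < 1."""
    a = c * rho                      # offered load in Erlangs
    s = sum(a ** k / math.factorial(k) for k in range(c))
    top = a ** c / (math.factorial(c) * (1.0 - rho))
    return top / (s + top)

def prob_wait_exceeds(t, c, lam, mu):
    """P(waiting time > t) for M/M/c: e.g. the chance a maximum-waiting-time
    guarantee of t time units is breached."""
    rho = lam / (c * mu)
    return erlang_c(c, rho) * math.exp(-(c * mu - lam) * t)

# Hypothetical unit: 2 surgical teams, 1.8 arrivals/week, 1 treatment/week
# per team, 4-week guarantee (all numbers invented):
p_breach = prob_wait_exceeds(4.0, 2, 1.8, 1.0)
```

Such a formula lets a planner trade capacity (c) against the breach probability directly, before resorting to simulation for the non-Markovian details.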
MIN Htwe, Y. M.
2016-12-01
According to historical data, Myanmar has suffered many earthquake disasters and four tsunamis. The purpose of this study is to estimate the tsunami arrival time and maximum tsunami wave amplitude for the Rakhine coast of Myanmar using the TUNAMI F1 model, based on a tsunamigenic earthquake source of moment magnitude 8.5 in the Arakan subduction zone off the west coast of Myanmar, with eight selected points on the Rakhine coast. The model result indicates that the tsunami waves would first hit Kyaukpyu on the Rakhine coast about 0.05 minutes after the onset of the earthquake, and that the maximum tsunami wave amplitude would be 2.37 meters.
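Tsunami travel times in long-wave models such as TUNAMI F1 are governed by the shallow-water phase speed c = sqrt(g·h). A back-of-the-envelope sketch with invented bathymetry (not the actual Rakhine coast data):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def long_wave_speed(depth_m):
    """Shallow-water (long-wave) phase speed c = sqrt(g*h)."""
    return math.sqrt(G * depth_m)

def travel_time_minutes(segments):
    """Total travel time over (distance_km, depth_m) segments of the
    path from source to coast."""
    seconds = sum(1000.0 * d_km / long_wave_speed(h) for d_km, h in segments)
    return seconds / 60.0

# Illustrative path: 100 km of 2000 m deep water, then 50 km over a
# 200 m deep shelf (both numbers invented for the sketch):
t = travel_time_minutes([(100.0, 2000.0), (50.0, 200.0)])
```

The wave slows markedly over the shelf (from roughly 140 m/s to roughly 44 m/s here), which is why shallow coastal bathymetry dominates arrival-time estimates.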
Nezhel'skaya, L. A.
2016-09-01
A flow of physical events (photons, electrons, and other elementary particles) is studied. One mathematical model of such flows is the modulated MAP flow of events operating under conditions of an unextendable dead-time period. It is assumed that the dead-time period is an unknown fixed value. The problem of estimating the dead-time period from observations of the arrival times of events is solved by the method of maximum likelihood.
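For the simpler special case of a Poisson flow with a fixed unextendable dead time (not the modulated MAP flow treated in the paper), the maximum-likelihood estimate of the dead time has a closed form: every observed gap equals the dead time plus an exponential variable, so the likelihood increases in the dead time up to the smallest observed gap.

```python
def estimate_dead_time(arrivals):
    """MLE sketch for a fixed, unextendable dead time in a Poisson flow.

    Observed inter-arrival gaps are dead_time + Exp(rate), hence the MLE
    of the dead time is the minimum gap, and the rate follows from the
    mean of the remaining exponential part. Illustrative only; the
    modulated MAP case requires the full likelihood machinery.
    """
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    tau = min(gaps)                            # MLE of the dead time
    mean_excess = sum(g - tau for g in gaps) / len(gaps)
    rate = 1.0 / mean_excess if mean_excess > 0 else float("inf")
    return tau, rate

# Synthetic arrival times (seconds):
tau_hat, rate_hat = estimate_dead_time([0.0, 1.5, 2.5, 4.5])
```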
A. E. Santoro
2013-03-01
Nitrite (NO2–) is a substrate for both oxidative and reductive microbial metabolism. NO2– accumulates at the base of the euphotic zone in oxygenated, stratified open ocean water columns, forming a feature known as the primary nitrite maximum (PNM). Potential pathways of NO2– production include the oxidation of ammonia (NH3) by ammonia-oxidizing bacteria or archaea and assimilatory nitrate (NO3–) reduction by phytoplankton or heterotrophic bacteria. Measurements of NH3 oxidation and NO3– reduction to NO2– were conducted at two stations in the central California Current in the eastern North Pacific to determine the relative contributions of these processes to NO2– production in the PNM. Sensitive, high-resolution measurements of [NH4+] and [NO2–] indicated a persistent NH4+ maximum overlying the PNM at every station, with concentrations as high as 1.5 μmol L−1. Within and just below the PNM, NH3 oxidation was the dominant NO2–-producing process, with rates of NH3 oxidation of up to 50 nmol L−1 d−1, coinciding with high abundances of ammonia-oxidizing archaea. Though little NO2– production from NO3– was detected, potentially nitrate-reducing phytoplankton (photosynthetic picoeukaryotes, Synechococcus, and Prochlorococcus) were present at the depth of the PNM. Rates of NO2– production from NO3– were highest within the upper mixed layer (4.6 nmol L−1 d−1) but were either below detection limits or 10 times lower than NH3 oxidation rates around the PNM. One-dimensional modeling of water column NO2– profiles supported direct rate measurements of a net biological sink for NO2– just below the PNM. Residence time estimates of NO2– within the PNM were similar at the mesotrophic and oligotrophic stations and ranged from 150–205 d. Our results suggest the PNM is a dynamic, rather than relict, feature with a source term dominated by ammonia oxidation.
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce this sufficient condition; the limiter is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
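A generic minmod limiter, a common way to enforce such a sufficient condition (the paper's specific limiter may differ), keeps reconstructed cell-edge values within the range of neighbouring cell averages:

```python
def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope when the signs
    agree, zero otherwise (flattening at extrema)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell slopes limited so that reconstructed edge values u[i] +/- s/2
    stay within the range of the neighbouring averages: a discrete
    maximum principle. Boundary cells get zero slope for simplicity."""
    return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                    for i in range(1, len(u) - 1)] + [0.0]

u = [0.0, 1.0, 4.0, 2.0, 2.0]   # cell averages with a local maximum
s = limited_slopes(u)           # slope is zeroed at the extremum
```

Note the slope at the local maximum (third cell) is limited to zero, which is exactly what prevents the reconstruction from overshooting the neighbouring averages.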
Vuillemin, Aurele; Ariztegui, Daniel; Leavitt, Peter R.; Bunting, Lynda
2014-05-01
subsaline conditions producing methane with a high potential of organic matter degradation. In contrast, sediments rich in volcanic detritus from the Last Glacial Maximum showed a substantial presence of lithotrophic microorganisms and sulphate-reducing bacteria mediating authigenic minerals. Together, these features suggested that microbial communities developed in response to climatic control of lake and catchment productivity at the time of sediment deposition. Prevailing climatic conditions exerted a hierarchical control on the microbial composition of lake sediments by regulating the influx of organic and inorganic material to the lake basin, which in turn determined water column chemistry, production and sedimentation of particulate material, resulting in the different niches sheltering these microbial assemblages. Moreover, it demonstrated that environmental DNA can constitute sedimentary archives of phylogenetic diversity and diagenetic processes over tens of millennia.
Henning Grosse Ruse-Khan
2009-07-01
International intellectual property (IP) protection is at the heart of controversies over the impact of economic interests on social or environmental concerns. Some see IP rights as unduly encroaching upon human rights and societal interests; others argue for stronger enforcement and additional exclusivity to incentivize new innovations and creations. Underlying these debates is the perception that international IP treaties set out minimum standards of protection - which presumably allow for additional protection with only the sky being the limit. This article challenges this view and explores the idea of maximum standards or ceilings within the existing body of international IP law. It looks at the relation between IP treaties and subsequent agreements or national laws which offer stronger protection. In particular, within the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), an important qualification may serve as a door opener for ceilings: while additional IP protection may not go beyond mandatory limits within TRIPS, the qualification not to "contravene" TRIPS is unlikely to safeguard TRIPS flexibilities against TRIPS-plus norms. The article further identifies and examines the rationales for maximum standards in international IP protection as: (1) legal security and predictability about the boundaries of protection; (2) the global protection of users' rights; and (3) the free movement of goods, services and information. Mandatory limits in existing IP treaties and in ongoing initiatives provide examples of how these can be implemented. However, most of the relevant treaty norms are optional. The article concludes with some observations on the need for more comprehensive and precise maximum standards.
Wu, Feilong; He, Jizhou; Ma, Yongli; Wang, Jianhui
2014-12-01
We consider the efficiency at maximum power of a quantum Otto engine, which uses a spin or a harmonic system as its working substance and works between two heat reservoirs at constant temperatures T(h) and T(c). We find that the efficiencies at maximum power based on these two different kinds of quantum systems are bounded from above by the same expression η(mp) ≤ η(+) ≡ η(C)²/[η(C) − (1 − η(C))ln(1 − η(C))], with η(C) = 1 − T(c)/T(h) the Carnot efficiency. This expression possesses the same universality as the CA efficiency η(CA) = 1 − √(1 − η(C)) at small relative temperature difference. Within the context of irreversible thermodynamics, we calculate the Onsager coefficients and show that the value of η(CA) is indeed the upper bound of EMP for an Otto engine working in the linear-response regime.
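The stated universality can be checked by a series expansion in the Carnot efficiency (written here as $\eta_C$): expanding the logarithm in the bound gives

```latex
\eta_C-(1-\eta_C)\ln(1-\eta_C)
  = 2\eta_C-\tfrac{1}{2}\eta_C^{2}-\tfrac{1}{6}\eta_C^{3}-\cdots,
\qquad
\eta_{+} = \frac{\eta_C^{2}}{2\eta_C-\tfrac{1}{2}\eta_C^{2}-\cdots}
         = \frac{\eta_C}{2}+\frac{\eta_C^{2}}{8}+O(\eta_C^{3}),
```

which coincides with $\eta_{CA} = 1-\sqrt{1-\eta_C} = \eta_C/2+\eta_C^{2}/8+O(\eta_C^{3})$ through second order, exactly the universality at small relative temperature difference that the abstract claims.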
Oudyn, Frederik W; Lyons, David J; Pringle, M J
2012-01-01
Many scientific laboratories follow, as standard practice, a relatively short maximum holding time (within 7 days) for the analysis of total suspended solids (TSS) in environmental water samples. In this study we subsampled from bulk water samples stored at ∼4 °C in the dark, then analysed for TSS at time intervals up to 105 days after collection. The nonsignificant differences in TSS results observed over time demonstrate that storage at ∼4 °C in the dark is an effective method of preserving samples for TSS analysis, far past the 7-day standard practice. Extending the maximum holding time will ease the pressure on sample collectors and laboratory staff, who until now have had to determine TSS within an impractically short period.
The effects of disjunct sampling and averaging time on maximum mean wind speeds
Larsén, Xiaoli Guo; Mann, J.
2006-01-01
Conventionally, the 50-year wind is calculated on the basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours; we call this disjunct sampling. It may also happen that the wind speeds are averaged over a longer time period before being saved. In either case, the extreme wind will be underestimated. This paper investigates the effects of the disjunct sampling interval and the averaging time on the attenuation of the extreme wind estimation by means of a simple theoretical approach as well as measurements.
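The direction of the bias is easy to demonstrate: the maximum of a disjunctly sampled or block-averaged series can never exceed the maximum of the full 10-min series. A toy demonstration on a synthetic correlated series (a random walk standing in for a year of 10-min wind means; not realistic wind, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# One "year" of 10-min values: 52560 samples of a toy correlated series.
w = 10.0 + np.cumsum(rng.normal(0.0, 0.1, 52560))

full_max = float(w.max())                 # conventional annual maximum
disjunct_max = float(w[::18].max())       # one sample kept every 3 h
# 3-hour block averages (52560 = 2920 blocks of 18 samples):
avg3h_max = float(w.reshape(-1, 18).mean(axis=1).max())
```

Both truncated statistics sit at or below the true maximum, which is the attenuation the paper quantifies (and then corrects for) in extreme-wind estimation.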
ANALYTICAL ESTIMATION OF MINIMUM AND MAXIMUM TIME EXPENDITURES OF PASSENGERS AT AN URBAN ROUTE STOP
Gorbachov, P.
2013-01-01
This scientific paper deals with the definition of the average time spent by passengers waiting for transport vehicles at urban stops, and presents the results of analytical modeling of this value when the traffic schedule is unknown to the passengers, for two options of vehicle traffic management on the given route.
The Round-Robin Mock Interview: Maximum Learning in Minimum Time
Marks, Melanie; O'Connor, Abigail H.
2006-01-01
Interview skills are critical to a job seeker's success in obtaining employment. However, learning interview skills takes time. This article offers an activity that provides students with interview practice while sacrificing only a single classroom period. The authors begin by reviewing relevant literature. Then, they outline the process of…
Lee, C.-H.; Herget, C. J.
1976-01-01
This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.
Murray, M P; Baldwin, J M; Gardner, G M; Sepic, S B; Downs, W J
1977-06-01
Isometric torque of the knee flexor and extensor muscles was recorded for 5 seconds at three knee joint positions. The subjects included healthy men in age groups from 20 to 35 and 45 to 65 years of age. The amplitude and duration of peak torque and the time to peak torque were measured for each contraction. Peak torque was usually maintained for less than 0.1 second and never longer than 0.9 second. At each of the three angles, the mean extensor muscle torque was higher than the mean flexor muscle torque in both age groups, and the mean torque for both muscle groups was higher among the younger than among the older men. The highest average torque was recorded at the knee angle of 60 degrees for the extensor muscles and 45 degrees for the flexor muscles, but this was not always a stereotyped response, either for a given individual or among individuals.
Timing A Pulsed Thin Film Pyroelectric Generator For Maximum Power Density
Smith, A. N.; Hanrahan, B. M.; Neville, C. J.; Jankowski, N. R.
2016-11-01
Pyroelectric thermal-to-electric energy conversion is accomplished by a cyclic process of thermally-inducing polarization changes in the material under an applied electric field. The pyroelectric MEMS device investigated consisted of a thin film PZT capacitor with platinum bottom and iridium oxide top electrodes. Electric fields between 1 and 20 kV/cm with a 30% duty cycle and frequencies from 0.1 to 100 Hz were tested with a modulated continuous wave IR laser with a duty cycle of 20%, creating temperature swings from 0.15 to 26 °C on the pyroelectric receiver. The net output power of the device was highly sensitive to the phase delay between the laser power and the applied electric field. A thermal model was developed to predict and explain the power loss associated with finite charge and discharge times. Excellent agreement was achieved between the theoretical model and the experimental results for the measured power density versus phase delay. Limitations on the charging and discharging rates result in reduced power and lower efficiency due to a reduced net work per cycle.
Time-Correlated Particles Produced by Cosmic Rays
Chapline, George F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glenn, Andrew M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nakae, Les F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pawelczak, Iwona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Snyderman, Neal J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sheets, Steven A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wurtz, Ron E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-05-06
This report describes the NA-22 supported cosmic ray experimental and analysis activities carried out at LLNL since the last report, dated October 1, 2013. In particular we report on an analysis of the origin of the plastic scintillator signals resembling the signals produced by minimum ionizing particles (MIPs). Our most notable result is that when measured in coincidence with a liquid scintillator neutron signal the MIP-like signals in the plastic scintillators are mainly due to high energy tertiary neutrons.
Biomechanical events in the time to exhaustion at maximum aerobic speed.
Gazeau, F; Koralsztein, J P; Billat, V
1997-10-01
Recent studies reported good intra-individual reproducibility, but great inter-individual variation in a sample of elite athletes, in time to exhaustion (tlim) at the maximal aerobic speed (MAS: the lowest speed that elicits VO2max in an incremental treadmill test). The purpose of the present study was, on the one hand, to detect modifications of kinematic variables at the end of the tlim of the VO2max test and, on the other hand, to evaluate the possibility that such modifications were factors responsible for the inter-individual variability in tlim. Eleven sub-elite male runners (age = 24 +/- 6 years; VO2max = 69.2 +/- 6.8 ml kg-1 min-1; MAS = 19.2 +/- 1.45 km h-1; tlim = 301.9 +/- 82.7 s) performed two exercise tests on a treadmill (0% slope): an incremental test to determine VO2max and MAS, and an exhaustive constant-velocity test to determine tlim at MAS. Statistically significant modifications were noted in several kinematic variables. The maximal angular velocity of the knee during flexion was the only variable that both changed over the tlim test and influenced the exercise duration. A multiple correlation analysis showed that tlim was predicted by the modifications of four variables (R = 0.995, P < 0.01). These variables are directly or indirectly related to the energetic cost of running. It was concluded that runners who demonstrated stable running styles were able to run longer during the MAS test because of optimal motor efficiency.
Leus, G.; Petré, F.; Moonen, M.
2004-01-01
In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input
Coordination in coteaching: Producing alignment in real time
Roth, Wolff-Michael; Tobin, Kenneth; Carambo, Cristobal; Dalland, Chris
2005-07-01
In coteaching, two or more teachers take collective responsibility for enacting a curriculum together with their students. Past research provided some indication that in the course of coteaching, not only the teaching practices of the partners become increasingly alike but also do unconsciously produced ways of moving about the classroom, hand gestures, and body movements. In this study, we investigate the possible sources of occurrence for the coordination of social and physical practices and provide exemplary episodes at a fine-grained level from one coteaching pair. Drawing on key concepts from cultural sociology, we show how participants continuously create material and social resources that allow for new forms of agency in subsequent moments. Such resources include physical, temporal, and social spaces and meaning-making entities (language, inscriptions). We show how in productive coteaching, participants deploy and take advantage of these resources in synchronized and coordinated ways. The synchronization operates both at the temporal level, where coteachers work in concert like experienced jazz musicians in a jam session, and at a substantive level, where the practices of one look like those of the other. As coteachers generally are not aware that they adopt the ways of their partners as we articulate them here, there are considerable consequences, for better or worse, that arise from teaching with another person.
Santos W. N. dos
2003-01-01
The hot wire technique is considered to be an effective and accurate means of determining the thermal conductivity of ceramic materials. However, specifically for materials of high thermal diffusivity, the appropriate time interval to be considered in calculations is a decisive factor for getting accurate and consistent results. In this work, a numerical simulation model is proposed with the aim of determining the minimum and maximum measuring time for the hot wire parallel technique. The temperature profile generated by this model is in excellent agreement with the one experimentally obtained by this technique, where thermal conductivity, thermal diffusivity and specific heat are simultaneously determined from the same experimental temperature transient. Eighteen different specimens of refractory materials and polymers, with thermal diffusivities ranging from 1x10-7 to 70x10-7 m²/s, in the shape of rectangular parallelepipeds of different dimensions, were employed in the experimental programme. An empirical equation relating minimum and maximum measuring times to the thermal diffusivity of the sample is also obtained.
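The measuring-window question can be illustrated with the idealized hot-wire relation (an assumption here, not the authors' simulation model): inside the valid window the wire temperature rise grows linearly in ln(t), so the slope of that line recovers the thermal conductivity. The power input q and conductivity k below are illustrative values:

```python
import numpy as np

# Idealized hot-wire transient: dT(t) ≈ (q / (4*pi*k)) * ln(t) + C
# inside the valid measuring window, so k follows from the slope
# of dT versus ln(t).
q = 20.0          # assumed linear power input, W/m
k_true = 1.5      # assumed thermal conductivity, W/(m K)
t = np.linspace(5.0, 60.0, 200)                  # measuring window, s
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.3  # synthetic transient

slope, _ = np.polyfit(np.log(t), dT, 1)          # linear fit in ln(t)
k_est = q / (4 * np.pi * slope)                  # recovers k on clean data
```

Choosing the window boundaries badly (too early, before the logarithmic regime; too late, after boundary effects) biases the slope, which is exactly why the paper derives minimum and maximum measuring times.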
Distinct timing mechanisms produce discrete and continuous movements.
Raoul Huys
2008-04-01
The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate over whether this classification implies different control processes; to date, this debate has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical systems theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.
Almog, Assaf
2014-01-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of time series of activity of their fundamental elements (such as stocks or neurons respectively). While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relationships between binary and non-binary properties of financial time series. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to replicate the observed binary/non-binary relations very well, and to mathematically...
Zhaoyong Mao
2016-01-01
This paper addresses the power generation control system of a new drag-type Vertical Axis Turbine with several retractable blades. The returning blades can be entirely hidden in the drum, and negative torques can then be considerably reduced as the drum shields the blades; thus, the power efficiency increases. Regarding the control, a Linear Quadratic Tracking (LQT) optimal control algorithm for Maximum Power Point Tracking (MPPT) is proposed to ensure that the wave energy conversion system can operate highly effectively under fluctuating conditions and that the tracking process accelerates over time. Two-dimensional Computational Fluid Dynamics (CFD) simulations are performed to obtain the maximum power points of the turbine's output. To plot the tip speed ratio curve, the least squares method is employed. The efficacy of the steady and dynamic performance of the control strategy was verified using Matlab/Simulink software. These validation results show that the proposed system can compensate for power fluctuations and is effective in terms of power regulation.
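The MPPT idea itself can be illustrated with the simpler perturb-and-observe scheme, used here as a stand-in for the paper's LQT controller, on an assumed toy power curve:

```python
def perturb_and_observe(power_at, omega0=1.0, step=0.05, iters=200):
    """Perturb-and-observe MPPT (a common baseline, not the paper's
    LQT controller): nudge the rotor speed and keep the perturbation
    direction that raised the output power."""
    omega, direction = omega0, 1.0
    p_prev = power_at(omega)
    for _ in range(iters):
        omega += direction * step
        p = power_at(omega)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return omega

# Assumed toy power curve with its maximum power point at omega = 3.0
peak = perturb_and_observe(lambda w: -(w - 3.0) ** 2 + 9.0)
```

The tracker converges to a small oscillation around the maximum power point; the LQT formulation in the paper is designed to make this tracking faster and smoother under fluctuating inflow.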
Nechitailo, Galina S.; Yurov, S.; Cojocaru, A.; Revin, A.
The analysis of lycopene and other carotenoids in tomatoes produced from seeds exposed to space flight conditions at the orbital station MIR for six years is presented in this work. Our previous experiments with tomato plants showed the germination of such seeds to be 32%. Genetic investigations revealed 18% in the experiment and 8% in the control. Experiments were conducted to study the capacity of various stimulating factors to increase germination of seeds exposed for a long time to the action of space flight factors. An increase of 20% was achieved, but at the same time mutants having no analogues in the control variants were detected. For the present investigations of the third generation of plants produced from seeds stored for a long time under space flight conditions, 80 tomatoes from forty plants were selected. The concentration of lycopene in the experimental specimens was 2.5-3 times higher than in the control variants. The spectrophotometric analysis of ripe tomatoes revealed typical three-peaked carotenoid spectra with a high maximum of lycopene (a medium maximum at 474 nm), a moderate maximum of its precursor, phytoene (a medium maximum at 267 nm), and a low maximum of carotenes. In green tomatoes, on the contrary, a high maximum of phytoene, a moderate maximum of lycopene and a low maximum of carotenes were observed. The results of the spectral analysis point to the retardation of biosynthesis of carotenes while the production of lycopene is increased, and to the synthesis of lycopene from phytoene. Electric conduction of tomato juice in the experimental samples is increased, suggesting higher amounts of carotenoids, including lycopene, and electrolytes. The higher the electric conduction of a specimen, the higher the spectral maxima of lycopene. The hydrogen ion exponent of the juice of ripe tomatoes increases, due to which the efficiency of ATP biosynthesis in cell mitochondria is likely to increase as well. The results demonstrating an increase in the content
Almog, Assaf; Garlaschelli, Diego
2014-09-01
The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
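The reported binary/non-binary relation, i.e. that extreme aggregate price moves coincide with many stocks sharing the same sign, can be illustrated on synthetic data with a common "market mode" (all numbers below are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 50
market = rng.normal(size=(T, 1))                   # common "market mode"
returns = 0.5 * market + rng.normal(size=(T, N))   # N stocks over T steps

signs = np.sign(returns)                  # binary projection of increments
agreement = np.abs(signs.mean(axis=1))    # fraction of stocks moving together
magnitude = np.abs(returns.mean(axis=1))  # size of the aggregate move

# Large aggregate moves should coincide with high sign agreement
corr = np.corrcoef(agreement, magnitude)[0, 1]
```

On this synthetic ensemble the correlation between sign agreement and aggregate magnitude is strongly positive, mirroring the nonlinear binary/non-binary relation the paper quantifies with maximum-entropy ensembles of binary matrices.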
2010-10-01
49 CFR 375.703 — What is the maximum collect-on-delivery amount I may demand at the time of delivery? (a) On a binding estimate, the maximum amount is the exact...
"Just-in-Time" Battery Charge Depletion Control for PHEVs and E-REVs for Maximum Battery Life
DeVault, Robert C [ORNL
2009-01-01
Conventional methods of vehicle operation for Plug-in Hybrid Vehicles first discharge the battery to a minimum State of Charge (SOC) before switching to charge-sustaining operation. This is very demanding on the battery, maximizing the number of trips ending with a depleted battery and maximizing the distance driven on a depleted battery over the vehicle's life. Several methods have been proposed to reduce the number of trips ending with a deeply discharged battery and also eliminate the need for extended driving on a depleted battery. An optimum SOC can be maintained for long battery life before discharging the battery so that the vehicle reaches an electric plug-in destination just as the battery reaches the minimum operating SOC. These Just-in-Time methods provide maximum effective battery life while getting virtually the same electricity from the grid.
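The Just-in-Time idea can be sketched as a simple energy budget (an assumed simplification of the report's strategy; all numbers are illustrative):

```python
def jit_discharge_rate(soc_now, soc_min, miles_to_plug, wh_per_mile_ev, battery_wh):
    """Spread the usable charge evenly over the remaining distance so the
    SOC reaches soc_min exactly at the plug-in destination. Returns the
    electric energy budget per mile, capped at the full-EV draw."""
    usable_wh = (soc_now - soc_min) * battery_wh
    budget = usable_wh / miles_to_plug
    return min(budget, wh_per_mile_ev)

# E.g. 90% -> 30% SOC over 40 miles on a 10 kWh pack: a 150 Wh/mi
# electric budget, with the engine blending in the rest of the demand.
rate = jit_discharge_rate(0.9, 0.3, 40, 250, 10_000)
```

Compared with charge-depleting-then-sustaining operation, this blended budget avoids both a deep discharge early in the trip and extended driving on an empty pack.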
Ivy-Ochs, Susan; Braakhekke, Jochem; Monegato, Giovanni; Gianotti, Franco; Forno, Gabriella; Hippe, Kristina; Christl, Marcus; Akçar, Naki; Schluechter, Christian
2017-04-01
The Last Glacial Maximum (LGM) in the Alps saw much of the mountains inundated by ice. Several main accumulation areas comprising local ice caps and plateau icefields fit into a picture of transection glaciers flowing into huge valley glaciers. In the north the valley glaciers covered long distances (hundreds of kilometers) to reach the forelands where they spread out in fan-shaped piedmont lobes tens of kilometers across, e.g. the Rhine glacier. In the south travel distances to the mountain front were often shorter, the pathway steeper. Nevertheless, not all glaciers even reached beyond the front, as the temperatures were notably warmer in the south. For example at Orta the glacier snout remained within the mountains. Where glaciers reached the forelands they stopped abruptly and the moraine amphitheaters were constructed, e.g. at Ivrea and Rivoli-Avigliana. Sets of stacked moraines built-up as glacier advance was directly confined by the older moraines. We may temporally and spatially identify the culmination of the last glacial cycle by pinpointing the outermost moraines that date to the LGM (generally about 26-24 ka). On the other hand, the timing of abandonment of foreland positions is given by ages of the innermost, often lake-bounding, moraines (about 19-18 ka). Between the two, glacier fluctuations left the stadial moraines. In the Linth-Rhine system three stadials have been recognized: Killwangen, Schlieren and Zurich. Nevertheless, already in the Swiss sector correlation of the LGM stadials among the several foreland lobes is not unambiguous. Across the Alps, not only north to south but also west to east, how do the timing and extent of glaciers during the LGM vary? Recent glacier modelling by Seguinot et al. (2017) informs and suggests the possibility of differences in timing for reaching of the maximum extent and for the number of oscillations of individual lobes during the LGM. At present few sites in the Alps have detailed enough geomorphological
Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex
2015-01-01
In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive closed-form expressions for the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cramér-Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
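A minimal data-aided sketch over a constant channel shows the basic DA ML idea (a deliberate simplification: the paper instead tracks a polynomial-in-time channel per antenna and handles the NDA case via EM):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
pilots = rng.choice([-1.0, 1.0], size=n)   # known BPSK pilot symbols
h_true, snr_true = 0.8, 4.0                # assumed channel gain, linear SNR
noise_var = h_true**2 / snr_true
y = h_true * pilots + rng.normal(scale=np.sqrt(noise_var), size=n)

# DA ML over a constant channel: least-squares channel estimate,
# then residual noise power, then their ratio as the SNR estimate.
h_hat = np.dot(pilots, y) / np.dot(pilots, pilots)
sigma2_hat = np.mean((y - h_hat * pilots) ** 2)
snr_hat = h_hat**2 / sigma2_hat
```

Replacing the single coefficient `h_hat` with a short polynomial in time is what lets the paper's estimator follow fast channel variations within the observation window.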
Kremser, S.; Bodeker, G. E.; Lewis, J.
2014-01-01
A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) - referred to as the "training" of the CPSM. This training uses a regression model to derive fit coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed - referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere-ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). The methodological aspects of the CPSM explored in this study include (1) investigation of the advantage
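The "training" and "application" phases can be sketched with a plain linear regression on synthetic data (illustrative coefficients; the real CPSM fits monthly fields of Tmax/Tmin, not a single site):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2000, 2100)
# Predictor time series: assumed global mean temperature anomaly, °C
t_global = 0.02 * (years - 2000) + rng.normal(scale=0.05, size=years.size)

# Assumed "true" local response: Tmax warms 1.5 °C per °C of global mean
t_max_local = 30.0 + 1.5 * t_global + rng.normal(scale=0.2, size=years.size)

# "Training": fit the pattern-scaling coefficients by regression
slope, intercept = np.polyfit(t_global, t_max_local, 1)

# "Application": reconstruct local Tmax from a new scenario's
# global mean series (as produced by a simple climate model)
t_global_new = np.array([1.0, 2.0, 3.0])
t_max_pred = intercept + slope * t_global_new
```

This is the core economy of pattern scaling: once `slope` and `intercept` are trained, any emissions scenario only requires the cheap predictor series, not a full AOGCM run.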
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-06-16
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
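For contrast, the conventional reference-based TDE that the proposed joint-ML schemes are compared against can be sketched as a cross-correlation peak search (synthetic broadband waveform; the delay and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_delay = 256, 7
template = rng.normal(size=n)              # assumed reference waveform
trial = np.roll(template, true_delay) + 0.1 * rng.normal(size=n)

# Circular cross-correlation via FFT; the lag of the correlation
# peak is the estimated trial-to-reference delay.
corr = np.fft.ifft(np.fft.fft(trial) * np.conj(np.fft.fft(template))).real
delay_hat = int(np.argmax(corr))           # recovers the 7-sample shift
```

In ERP averaging the reference itself is an average of noisy trials, which is why such correlation-based schemes degrade at low SNR and why the paper's joint ML formulation gains up to 4 dB.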
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying the DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for the resistant P. argentipes, estimating susceptibility by exposing sand flies to insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of laboratory-reared DDT-resistant P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of the tolerance limit with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
An absorption maximum was observed at 4.9 microns in infrared spectra of human parotid saliva. The factor causing this absorbance was found to be a...nitrate, and heat stability. Thiocyanate was then determined in 16 parotid saliva samples by a spectrophotometric method, which involved formation of
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Favre, Mario; Wyndham, Edmund; Veloso, Felipe; Bhuyan, Heman; Reyes, Sebastian; Ruiz, Hugo Marcelo; Caballero-Bendixsen, Luis Sebastian
2016-10-01
We present further detailed studies of the dynamics and plasma properties of a laser-produced carbon plasma expanding in a static axial magnetic field. The laser plasmas are produced in vacuum (1×10⁻⁶ Torr) from a graphite target with a Nd:YAG laser (3.5 ns, 340 mJ at 1.06 μm, focused at 2×10⁹ W/cm²), and propagate in static magnetic fields of maximum value 0.2 T. Time- and space-resolved OES with 15 ns resolution is used to investigate plasma composition, and plasma imaging with 50 ns time resolution is used to visualize the plasma dynamics. A mm-size B-dot probe is used, in combination with a Faraday cup, to characterize the interaction between the expanding plasma and the magnetic field. As a result of time- and space-correlated measurements, unique features of the laser plasma dynamics in the presence of the magnetic field are identified, which highlight the confinement effects of the static magnetic field. Funded by project FONDECYT 1141119.
Chen Poyu
2013-01-01
Products made overseas but sold in Taiwan are very common. Regarding the cross-border or interregional production and marketing of goods, inventory decision-makers often have to think about how to determine the amount of purchases per cycle, the number of transport vehicles, the working hours of each transport vehicle, and the delivery by ground or air transport to sales offices in order to minimize the total cost of the inventory in unit time. This model assumes that the amount of purchases for each order cycle should allow all rented vehicles to be fully loaded and the transport times to reach the upper limit within the time period. The main research findings of this study included the search for the optimal solution of the integer planning of the model and the results of sensitivity analysis.
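The integer search described above can be sketched as a brute-force enumeration over vehicle counts and trips per cycle, with fully loaded vehicles fixing the purchase quantity. All parameter values (demand, capacity, costs, trip cap) are illustrative assumptions, not the paper's data.

```python
# Illustrative parameters (assumptions, not the paper's model data)
D = 1200.0      # demand per unit time
CAP = 100.0     # load per fully loaded vehicle trip
K = 50.0        # fixed ordering cost per cycle
h = 0.2         # holding cost per unit per unit time
c_trip = 30.0   # cost per vehicle trip
MAX_TRIPS = 6   # working-hour cap: trips per vehicle per cycle

best = None
for vehicles in range(1, 11):
    for trips in range(1, MAX_TRIPS + 1):
        Q = vehicles * trips * CAP      # fully loaded purchase quantity
        cycles = D / Q                  # cycles per unit time
        cost = K * cycles + h * Q / 2 + c_trip * vehicles * trips * cycles
        if best is None or cost < best[0]:
            best = (cost, vehicles, trips)

print(best)   # (minimum cost per unit time, vehicles, trips)
```

Because the quantity must be an integer multiple of a full vehicle load, the optimum generally sits near, but not exactly at, the continuous EOQ minimum.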
Green, Cynthia L; Brownie, Cavell; Boos, Dennis D; Lu, Jye-Chyi; Krucoff, Mitchell W
2016-04-01
We propose a novel likelihood method for analyzing time-to-event data when multiple events and multiple missing data intervals are possible prior to the first observed event for a given subject. This research is motivated by data obtained from a heart monitor used to track the recovery process of subjects experiencing an acute myocardial infarction. The time to first recovery, T1, is defined as the time when the ST-segment deviation first falls below 50% of the previous peak level. Estimation of T1 is complicated by data gaps during monitoring and the possibility that subjects can experience more than one recovery. If gaps occur prior to the first observed event, T, the first observed recovery may not be the subject's first recovery. We propose a parametric gap likelihood function conditional on the gap locations to estimate T1. Standard failure time methods that do not fully utilize the data are compared to the gap likelihood method by analyzing data from an actual study and by simulation. The proposed gap likelihood method is shown to be more efficient and less biased than interval censoring and more efficient than right censoring if data gaps occur early in the monitoring process or are short in duration.
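The interval-censoring building block used in such likelihoods, a contribution of F(b) - F(a) when the event is only known to lie in a gap (a, b), can be sketched for an exponential model. The data and the crude grid search below are illustrative stand-ins, not the study's heart-monitor data or its gap likelihood.

```python
import math

# Hypothetical recovery-time data (hours): exactly observed events plus
# interval-censored ones where a monitoring gap hid the exact event time.
exact = [2.1, 3.5, 1.2, 4.0]
intervals = [(1.0, 3.0), (2.0, 6.0)]   # event known only to lie in (a, b)

def neg_log_lik(lam):
    ll = sum(math.log(lam) - lam * t for t in exact)          # density terms
    ll += sum(math.log(math.exp(-lam * a) - math.exp(-lam * b))
              for a, b in intervals)                          # F(b) - F(a)
    return -ll

# Crude grid search; a real analysis would use a proper optimizer.
grid = [0.01 * k for k in range(1, 300)]
lam_hat = min(grid, key=neg_log_lik)
print(round(lam_hat, 2))
```

The interval terms pull the estimate away from the exact-data MLE, which is the mechanism the gap likelihood exploits more fully.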
Energy spectra and fluence of the neutrons produced in deformed space-time conditions
Cardone, F.; Rosada, A.
2016-10-01
In this work, spectra of energy and fluence of neutrons produced in the conditions of deformed space-time (DST), due to the violation of the local Lorentz invariance (LLI) in the nuclear interactions are shown for the first time. DST-neutrons are produced by a mechanical process in which AISI 304 steel bars undergo a sonication using ultrasounds with 20 kHz and 330 W. The energy spectrum of the DST-neutrons has been investigated both at low (less than 0.4 MeV) and at high (up to 4 MeV) energy. We could conclude that the DST-neutrons have different spectra for different energy intervals. It is therefore possible to hypothesize that the DST-neutrons production presents peculiar features not only with respect to the time (asynchrony) and space (asymmetry) but also in the neutron energy spectra.
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional ... time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution ... of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short ...
FENG Guolin; DONG Wenjie; GAO Hongxing
2005-01-01
The time-dependent solution of a reduced air-sea coupling stochastic-dynamic model is obtained accurately by using the Fokker-Planck equation and the quantum mechanical method. The analysis of the time-dependent solution suggests that when the climate system is in the ground state, the behavior of the system appears to be a Brownian movement, supporting the foundation of Hasselmann's stochastic climate model; when the system is in the first excitation state, the motion of the system exhibits a form of time decay, or under certain conditions a periodic oscillation with a main period of 2.3 yr. Finally, the results are used to discuss the impact of the doubling of carbon dioxide on climate.
2013-08-01
[Flattened table of medication dosing data: codeine and morphine maximum hours, half-lives, and CAMI medication intervals.] ...return-to-duty time, even for individuals on the extreme metabolic margins of the general population. The variation in t½ (calculated by the CAMI
Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
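The maximum-entropy claim above (exponential velocities and travel times under a mean constraint) can be checked numerically: among positive densities with the same mean, the exponential should have the largest differential entropy. The sketch below compares it against a gamma(k=2) density with the same mean by midpoint integration; the mean value and grid are illustrative assumptions.

```python
import math

MEAN = 2.0
dx = 0.001
xs = [dx * (i + 0.5) for i in range(40000)]   # integrate on (0, 40)

def entropy(pdf):
    """Differential entropy -integral f ln f by the midpoint rule."""
    return -sum(pdf(x) * math.log(pdf(x)) * dx for x in xs if pdf(x) > 0)

exp_pdf = lambda x: math.exp(-x / MEAN) / MEAN
theta = MEAN / 2.0                            # gamma scale for k=2, same mean
gamma_pdf = lambda x: x * math.exp(-x / theta) / theta ** 2

ent_exp = entropy(exp_pdf)
ent_gamma = entropy(gamma_pdf)
print(ent_exp > ent_gamma)
```

The analytic values (1 + ln(mean) for the exponential) confirm the numerical ordering, which is why an unbiased choice under a mean constraint lands on the exponential form.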
Shaolin Ji
2012-01-01
We study the optimal control problem of a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition of the stochastic optimal control, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.
Liu, Weidong
2009-01-01
In this paper, Cramér-type moderate deviations for the maximum of the periodogram and its studentized version are derived. The results are then applied to a simultaneous testing problem in gene expression time series. It is shown that the level of the simultaneous tests is accurate provided that the number of genes G and the sample size n satisfy G = exp(o(n^{1/3})).
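A minimal periodogram-maximum sketch, using a naive O(n^2) DFT on a deterministic toy series rather than gene expression data; the "studentization" here is simply the peak divided by the average ordinate, a crude illustrative stand-in for the paper's statistic.

```python
import math, cmath

def periodogram(x):
    """Naive DFT periodogram I(w_j) = |sum_t x_t e^{-i w_j t}|^2 / n."""
    n = len(x)
    out = []
    for j in range(1, n // 2 + 1):
        w = 2 * math.pi * j / n
        s = sum(x[t] * cmath.exp(-1j * w * t) for t in range(n))
        out.append(abs(s) ** 2 / n)
    return out

# Toy series: a strong sinusoid at frequency index 8 plus a small
# off-bin component (illustrative assumption, not expression data).
n = 64
x = [math.sin(2 * math.pi * 8 * t / n) + 0.1 * math.cos(t) for t in range(n)]
I = periodogram(x)
stat = max(I) / (sum(I) / len(I))   # crude normalized peak height
print(I.index(max(I)) + 1)          # frequency index of the peak ordinate
```

A large normalized peak flags a frequency worth testing; the paper's moderate-deviation results control how accurately tail probabilities of such maxima are approximated across many series.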
Svendsen, Morten B S; Domenici, Paolo; Marras, Stefano;
2016-01-01
, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s(-1)), followed by barracuda (6.2±1.0 m s(-1)), little tunny (5.6±0.2 m s(-1)) and dorado (4...
Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.
2015-06-01
In this paper, a proportional-integral-derivative controller in series with a lead-lag filter is designed for control of the open-loop unstable processes with time delay based on direct synthesis method. Study of the performance of the designed controllers has been carried out on various unstable processes. Set-point weighting is considered to reduce the undesirable overshoot. The proposed scheme consists of only one tuning parameter, and systematic guidelines are provided for selection of the tuning parameter based on the peak value of the sensitivity function (Ms). Robustness analysis has been carried out based on sensitivity and complementary sensitivity functions. Nominal and robust control performances are achieved with the proposed method and improved closed-loop performances are obtained when compared to the recently reported methods in the literature.
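The effect of set-point weighting on overshoot can be illustrated with a toy PI controller on a stable first-order plant. Note this is deliberately not the paper's design (PID plus lead-lag on unstable delayed processes): the plant, gains, and weighting values below are all assumptions chosen only to show why weighting the proportional term reduces the set-point kick.

```python
def simulate(beta, kp=4.0, ki=10.0, tau=1.0, dt=0.001, T=5.0):
    """Euler simulation of a PI loop around dy/dt = (-y + u)/tau.

    beta weights the set-point in the proportional term only:
        u = kp*(beta*r - y) + ki * integral(r - y)
    Returns the peak output for a unit step set-point.
    """
    y, integ, peak = 0.0, 0.0, 0.0
    r = 1.0
    for _ in range(int(T / dt)):
        u = kp * (beta * r - y) + ki * integ
        integ += (r - y) * dt       # integral still acts on the full error
        y += (-y + u) / tau * dt
        peak = max(peak, y)
    return peak

print(simulate(1.0), simulate(0.2))   # the weighted version overshoots less
```

With beta < 1 the closed-loop zero introduced by the proportional path moves far into the left half-plane, so the step response loses most of its derivative "kick" while the integral action still removes steady-state error.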
Kodner Robin B
2010-10-01
Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.
Electric Potential in a Dielectric Sphere Head Produced by a Time-Harmonic Equivalent Current Dipole
[Anonymous]
2007-01-01
A time-harmonic equivalent current dipole model is proposed to simulate the EEG source, addressing the capacitance effect. The expressions for the potentials in both a homogeneous infinite dielectric medium and a dielectric sphere under the electroquasistatic condition are presented. The potential in a 3-layer inhomogeneous spherical head is computed using this model. The influences on the potential produced by the time-harmonic character and the permittivity are discussed. The results show that potentials in the dielectric sphere are affected by frequency and permittivity.
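The infinite-medium building block has the familiar closed form V = p cos(theta) / (4 pi eps r^2) for a charge dipole in a dielectric. The sketch below evaluates it with illustrative numbers; the sphere-boundary corrections of the 3-layer head model are deliberately omitted, and the dipole moment and distances are assumptions.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def dipole_potential(p, r, theta, eps_r=1.0):
    """Potential of a charge dipole p (C*m) at distance r, angle theta,
    in a homogeneous infinite dielectric of relative permittivity eps_r."""
    return p * math.cos(theta) / (4 * math.pi * eps_r * EPS0 * r ** 2)

# A hypothetical 10 nC*m dipole evaluated on-axis at 5 cm
v = dipole_potential(10e-9, 0.05, 0.0)
print(v)
```

The 1/r^2 fall-off (doubling r quarters the potential) is the behavior the sphere solution perturbs through its layer boundaries.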
Grove, R. D.; Bowles, R. L.; Mayhew, S. C.
1972-01-01
A maximum likelihood parameter estimation procedure and program were developed for the extraction of the stability and control derivatives of aircraft from flight test data. Nonlinear six-degree-of-freedom equations describing aircraft dynamics were used to derive sensitivity equations for quasilinearization. The maximum likelihood function with quasilinearization was used to derive the parameter change equations, the covariance matrices for the parameters and measurement noise, and the performance index function. The maximum likelihood estimator was mechanized into an iterative estimation procedure utilizing a real time digital computer and graphic display system. This program was developed for 8 measured state variables and 40 parameters. Test cases were conducted with simulated data for validation of the estimation procedure and program. The program was applied to a V/STOL tilt wing aircraft, a military fighter airplane, and a light single engine airplane. The particular nonlinear equations of motion, derivation of the sensitivity equations, addition of accelerations into the algorithm, operational features of the real time digital system, and test cases are described.
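The iterative estimation idea above, linearize the model around the current parameter guess via sensitivities, then update, can be sketched in one dimension: Gauss-Newton estimation of a single decay parameter from a sampled trajectory. This is a toy stand-in for the six-degree-of-freedom aircraft problem; the model, data, and starting guess are assumptions, and the data are noise-free for clarity.

```python
import math

# Model dy/dt = -a*y with y(0) = 1, so y(t) = exp(-a*t).
ts = [0.1 * k for k in range(1, 21)]
A_TRUE = 2.0
data = [math.exp(-A_TRUE * t) for t in ts]

a = 0.5                                  # deliberately poor initial guess
for _ in range(20):
    num = den = 0.0
    for t, z in zip(ts, data):
        y = math.exp(-a * t)
        s = -t * y                       # sensitivity dy/da
        num += s * (z - y)
        den += s * s
    a += num / den                       # Gauss-Newton update

print(round(a, 6))
```

The sensitivity `s` plays the role of the linearized model in quasilinearization; with many parameters the scalar `num/den` becomes the normal-equations solve over the full sensitivity matrix.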
Ronzhin, A., E-mail: ronzhin@fnal.gov [Fermilab, Batavia, Il 60510 (United States); Los, S.; Ramberg, E. [Fermilab, Batavia, Il 60510 (United States); Apresyan, A.; Xie, S.; Spiropulu, M. [California Institute of Technology, Pasadena, CA 91126 (United States); Kim, H. [University of Chicago, Chicago, Il 60637 (United States)
2015-09-21
We continue the study of the micro-channel plate photomultiplier (MCP-PMT) as the active element of a shower maximum (SM) detector. We present test beam results obtained with Photek 240 and Photonis XP85011 MCP-PMT devices. For proton beams, we obtained a time resolution of 9.6 ps, representing a significant improvement over past results using the same time-of-flight system. For electron beams, the time resolution obtained for this new type of SM detector is measured to be at the level of 13 ps when we use the Photek 240 as the active element of the SM. Using the Photonis XP85011 MCP-PMT as the active element of the SM, we performed time resolution measurements with pixel readout and achieved a time resolution better than 30 ps. The pixel readout was observed to improve the time resolution compared to the case where the individual channels were summed.
Morten B. S. Svendsen
2016-10-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated the maximum speed of sailfish and black marlin at around 35 m s−1, but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated the maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1), although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
Svendsen, Morten B. S.; Domenici, Paolo; Marras, Stefano; Krause, Jens; Boswell, Kevin M.; Rodriguez-Pinto, Ivan; Wilson, Alexander D. M.; Kurvers, Ralf H. J. M.; Viblanc, Paul E.; Finger, Jean S.; Steffensen, John F.
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues. PMID:27543056
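The twitch-time method reduces to simple arithmetic under the common assumption that one full tail-beat requires two muscle twitches (one per side), capping tail-beat frequency at 1/(2 * twitch time). The function and all numbers below are illustrative assumptions, not the paper's measurements or its exact model.

```python
def max_speed(twitch_s, body_length_m, stride_fraction=0.65):
    """Upper-bound swim speed from muscle twitch contraction time.

    Assumes: one tail-beat = two twitches, and distance per beat
    (stride) is a fixed fraction of body length (assumption).
    """
    f_max = 1.0 / (2.0 * twitch_s)          # max tail-beat frequency, Hz
    stride = stride_fraction * body_length_m
    return f_max * stride                   # m/s

print(max_speed(0.08, 2.0))   # ~8.1 m/s for an 80 ms twitch, 2 m fish
```

Faster twitches or longer strides raise the bound, which is why the method separates muscle physiology from hydrodynamic limits such as cavitation.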
Larger Neural Responses Produce BOLD Signals That Begin Earlier in Time
Serena eThompson
2014-06-01
Functional MRI analyses commonly rely on the assumption that the temporal dynamics of hemodynamic response functions (HRFs) are independent of the amplitude of the neural signals that give rise to them. The validity of this assumption is particularly important for techniques that use fMRI to resolve sub-second timing distinctions between responses, in order to make inferences about the ordering of neural processes. Whether or not the detailed shape of the HRF is independent of neural response amplitude remains an open question, however. We performed experiments in which we measured responses in primary visual cortex (V1) to large, contrast-reversing checkerboards at a range of contrast levels, which should produce varying amounts of neural activity. Ten subjects (ages 22-52) were studied in each of two experiments using 3 Tesla scanners. We used rapid, 250 msec, temporal sampling (repetition time, or TR) and both short and long inter-stimulus interval (ISI) stimulus presentations. We tested for a systematic relationship between the onset of the HRF and its amplitude across conditions, and found a strong negative correlation between the two measures when stimuli were separated in time (long- and medium-ISI experiments), but not in the short-ISI experiment. Thus, stimuli that produce larger neural responses, as indexed by HRF amplitude, also produced HRFs with shorter onsets. The relationship between amplitude and latency was strongest in voxels with the lowest mean-normalized variance (i.e., parenchymal voxels). The onset differences observed in the longer-ISI experiments are likely attributable to mechanisms of neurovascular coupling, since they are substantially larger than reported differences in the onset of action potentials in V1 as a function of response amplitude.
Esposito, Rosario; Mensitieri, Giuseppe; de Nicola, Sergio
2015-12-21
A new algorithm based on the Maximum Entropy Method (MEM) is proposed for recovering both the lifetime distribution and the zero-time shift from time-resolved fluorescence decay intensities. The developed algorithm allows the analysis of complex time decays through an iterative scheme based on entropy maximization and the Brent method to determine the minimum of the reduced chi-squared value as a function of the zero-time shift. The accuracy of this algorithm has been assessed through comparisons with simulated fluorescence decays both of multi-exponential and broad lifetime distributions for different values of the zero-time shift. The method is capable of recovering the zero-time shift with an accuracy greater than 0.2% over a time range of 2000 ps. The center and the width of the lifetime distributions are retrieved with relative discrepancies that are lower than 0.1% and 1% for the multi-exponential and continuous lifetime distributions, respectively. The MEM algorithm is experimentally validated by applying the method to fluorescence measurements of the time decays of the flavin adenine dinucleotide (FAD).
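The zero-time-shift recovery can be sketched as a one-dimensional chi-squared grid search: slide the model decay along the time axis and keep the shift with the smallest residual. The MEM lifetime-distribution part is omitted here, and the lifetime, true shift, and grids are all illustrative assumptions.

```python
import math

TAU = 500.0          # assumed single lifetime, ps
TRUE_SHIFT = 30.0    # ps; built into the synthetic "data"
ts = [10.0 * k for k in range(1, 101)]
data = [math.exp(-max(t - TRUE_SHIFT, 0.0) / TAU) for t in ts]

def chi2(shift):
    """Sum of squared residuals between data and the shifted model."""
    return sum((d - math.exp(-max(t - shift, 0.0) / TAU)) ** 2
               for t, d in zip(ts, data))

shifts = [0.5 * k for k in range(0, 121)]   # candidate shifts, 0..60 ps
best = min(shifts, key=chi2)
print(best)
```

In the paper's algorithm this minimization is interleaved with entropy maximization (via Brent's method on the shift), so the lifetime distribution and the shift are recovered jointly.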
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
Beretta, Gian P.
2008-09-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as a part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
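One simple realization of such a rate equation, with only the normalization constraint, is dp_i/dt = p_i(sum_j p_j ln p_j - ln p_i): it conserves total probability exactly and increases entropy monotonically until the uniform (maximum entropy) distribution is reached. This is a minimal sketch, not the paper's general constrained formulation; the initial distribution and step sizes are assumptions.

```python
import math

p = [0.7, 0.2, 0.1]        # illustrative initial distribution
dt, steps = 0.01, 3000

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

s0 = entropy(p)
for _ in range(steps):
    avg = sum(q * math.log(q) for q in p)                 # <ln p>
    p = [q + dt * q * (avg - math.log(q)) for q in p]     # rate equation

print([round(q, 3) for q in p], entropy(p) > s0)
```

The entropy production rate equals the variance of ln p under p, which is nonnegative and vanishes exactly at the uniform fixed point; adding an energy constraint would steer the same dynamics toward a Gibbs distribution instead.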
Reduction of time for producing and acclimatizing two bamboo species in a greenhouse
Giovanni Aquino Gasparetto
2013-03-01
China has been investing in bamboo cultivation on Brazilian lands. However, there is a significant deficit in seedling production for civil construction and the charcoal and cellulose sectors, which compromises part of the forestry sector. In order to help the bamboo production chain solve this problem, this study aimed to check whether the application of indole acetic acid (IAA) could promote plant growth in a shorter cultivation time. In the study, Bambusa vulgaris and B. vulgaris var. vitatta stakes underwent two treatments (0.25% and 5.0% IAA) and were grown on washed sand in a greenhouse. Number of leaves, stem growth, rooting, and chlorophyll content were investigated. There was no difference with regard to stem growth, root length, and number of leaves for both species in the two treatments (0.25% and 5% IAA). The chlorophyll content variation between the two species may constitute a quality parameter of forest seedlings when compared to other bamboo species. After 43 days, the seedlings are ready for planting in areas of full sun. For the species studied here, the average time to seedling sale is 4 to 6 months with no addition of auxin. Using this simple and low-cost technique, nurserymen will be able to produce bamboo seedlings with reduced time, costs, and manpower.
Real-time measurement of materials properties at high temperatures by laser produced plasmas
Kim, Yong W.
1990-01-01
Determination of elemental composition and thermophysical properties of materials at high temperatures, as visualized in the context of containerless materials processing in a microgravity environment, presents a variety of unusual requirements owing to the thermal hazards and interferences from electromagnetic control fields. In addition, such information is intended for process control applications and thus the measurements must be real time in nature. A new technique is described which was developed for real time, in-situ determination of the elemental composition of molten metallic alloys such as specialty steel. The technique is based on time-resolved spectroscopy of a laser produced plasma (LPP) plume resulting from the interaction of a giant laser pulse with a material target. The sensitivity and precision were demonstrated to be comparable to, or better than, the conventional methods of analysis which are applicable only to post-mortem specimens sampled from a molten metal pool. The LPP technique can be applied widely to other materials composition analysis applications. The LPP technique is extremely information rich and therefore provides opportunities for extracting other physical properties in addition to the materials composition. The case in point is that it is possible to determine thermophysical properties of the target materials at high temperatures by monitoring generation and transport of acoustic pulses as well as a number of other fluid-dynamic processes triggered by the LPP event. By manipulation of the scaling properties of the laser-matter interaction, many different kinds of flow events, ranging from shock waves to surface waves to flow induced instabilities, can be generated in a controllable manner. Time-resolved detection of these events can lead to such thermophysical quantities as volume and shear viscosities, thermal conductivity, specific heat, mass density, and others.
Yaqin eQiao
2015-10-01
For the large-scale cultivation of microalgae for biodiesel production, one of the key problems is the determination of the optimum time for algal harvest, when algae cells are saturated with neutral lipids. In this study, a method to determine the optimum harvest time in oil-producing microalgal cultivations by measuring the maximum photochemical efficiency of photosystem II (PSII), also called Fv/Fm, was established. When oil-producing Chlorella strains were cultivated and then treated with nitrogen starvation, it not only stimulated neutral lipid accumulation, but also affected the photosynthesis system, with the neutral lipid contents in all four algae strains – Chlorella sorokiniana C1, Chlorella sp. C2, C. sorokiniana C3, C. sorokiniana C7 – correlating negatively with the Fv/Fm values. Thus, for a given oil-producing alga in which a significant relationship between neutral lipid content and Fv/Fm value under nutrient stress can be established, the optimum harvest time can be determined by measuring the value of Fv/Fm. It is hoped that this method can provide an efficient way to determine the harvest time rapidly and expediently in large-scale oil-producing microalgae cultivations for biodiesel production.
Yamamoto Y
2013-10-01
Background: The objective of this study is to provide certain data on clinical outcomes and their predictors of traditional maximum androgen blockade (MAB) in prostate cancer with bone metastasis. Methods: Subjects were patients with prostate adenocarcinoma with bone metastasis whose treatment was initiated with MAB as a primary treatment, without any local therapy, at our hospital between January 2003 and December 2010. Time to prostate specific antigen (PSA) progression, overall survival (OS) time, and associations of clinical factors and outcomes were retrospectively evaluated. Results: A total of 57 patients were evaluable. The median age was 70 years. The median primary PSA was 203 ng/ml. Luteinizing hormone-releasing hormone agonists had been administered in 96.5% of the patients. Bicalutamide had been chosen in 89.4% of the patients as the initial antiandrogen. The median time to PSA progression with MAB was 11.3 months (95% confidence interval [CI], 10.4 to 13.0). The median OS was 47.3 months (95% CI, 30.7 to 81.0). Gleason score 9 or greater, decline of PSA level equal to or higher than 1.0 ng/ml with MAB, and time to PSA nadir equal to or shorter than six months after initiation of MAB were independent risk factors for time to PSA progression (P=0.010, P=0.005, and P=0.001, respectively). Time to PSA nadir longer than six months was the only independent predictor for longer OS (HR, 0.255 [95% CI, 0.109 to 0.597]; P=0.002). Conclusions: Initial time to PSA nadir should be emphasized for clinical outcome analyses in future studies on prostate cancer with bone metastasis.
Anderson, D. M.; Snowden, D. P.; Bochenek, R.; Bickel, A.
2015-12-01
In the U.S. coastal waters, a network of eleven regional coastal ocean observing systems support real-time coastal and ocean observing. The platforms supported and variables acquired are diverse, ranging from current sensing high frequency (HF) radar to autonomous gliders. The system incorporates data produced by other networks and experimental systems, further increasing the breadth of the collection. Strategies promoted by the U.S. Integrated Ocean Observing System (IOOS) ensure these data are not lost at sea. Every data set deserves a description. ISO and FGDC compliant metadata enables catalog interoperability and record-sharing. Extensive use of netCDF with the Climate and Forecast convention (identifying both metadata and a structured format) is shown to be a powerful strategy to promote discovery, interoperability, and re-use of the data. To integrate specialized data which are often obscure, quality control protocols are being developed to homogenize the QC and make these data more integrate-able. Data Assembly Centers have been established to integrate some specialized streams including gliders, animal telemetry, and HF radar. Subsets of data that are ingested into the National Data Buoy Center are also routed to the Global Telecommunications System (GTS) of the World Meteorological Organization to assure wide international distribution. From the GTS, data are assimilated into now-cast and forecast models, fed to other observing systems, and used to support observation-based decision making such as forecasts, warnings, and alerts. For a few years apps were a popular way to deliver these real-time data streams to phones and tablets. Responsive and adaptive web sites are an emerging flexible strategy to provide access to the regional coastal ocean observations.
Chiba Shigeru
2007-09-01
Background Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, a virtual environment including unstable visual images presented on a wide-field screen or a head-mounted display tends to induce motion sickness. The motion sickness induced by using a rehabilitation system not only inhibits effective training but also may harm patients' health. There are few studies that have objectively evaluated the effects of repetitive exposures to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness by a subjective score and the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. Results The results showed adaptation to visually induced motion sickness by the repetitive presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. Thus, it was possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax with time. Conclusion The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
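The ρmax index described above reduces to a lagged Pearson correlation maximized over a range of lags. The sketch below uses synthetic stand-in signals rather than heart-rate or pulse-wave data; the second signal is a 3-sample delayed copy of the first, so the true lag is known by construction.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

x = [math.sin(0.3 * t) for t in range(110)]   # stand-in "heart rate"
y = [0.0, 0.0, 0.0] + x[:-3]                  # stand-in signal lagging x by 3

lags = range(8)
best_k = max(lags, key=lambda k: pearson(x[:100], y[k:k + 100]))
rho_max = pearson(x[:100], y[best_k:best_k + 100])
print(best_k, round(rho_max, 3))
```

Tracking how ρmax changes over successive exposures is what lets the study localize the video segments associated with sickness.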
Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s(-1) but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish...
Daming Li; Fanxiang Kong; Xiaoli Shi; Linlin Ye; Yang Yu; Zhen Yang
2012-01-01
Lake Taihu, a large, shallow hypertrophic freshwater lake in eastern China, has experienced lake-wide toxic cyanobacterial blooms annually during the summer season in the past decades. Spatial changes in the abundance of hepatotoxin microcystin-producing and non-microcystin-producing Microcystis populations were investigated in the lake in August of 2009 and 2010. To monitor the densities of the total Microcystis population and the potential microcystin-producing subpopulation, we used a quantitative real-time PCR assay targeting the phycocyanin intergenic spacer (PC-IGS) and the microcystin synthetase gene (mcyD), respectively. On the basis of quantification by real-time PCR analysis, the abundance of potential toxic Microcystis genotypes and the ratio of the mcyD subpopulation to the total Microcystis varied significantly, from 4.08×10⁴ to 5.22×10⁷ copies/mL and from 5.7% to 65.8%, respectively. Correlation analysis showed a strong positive relationship between chlorophyll-a, toxic Microcystis and total Microcystis; the abundance of toxic Microcystis correlated positively with total phosphorus and ortho-phosphate concentrations, but negatively with the TN:TP ratio and nitrate concentrations. Meanwhile, the proportion of potential toxic genotypes within the Microcystis population showed positive correlation with total phosphorus and ortho-phosphate concentrations. Our data suggest that increased phosphorus loading may be a significant factor promoting the occurrence of toxic Microcystis blooms in Lake Taihu.
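Absolute quantification by real-time PCR of the kind used above typically relies on a standard curve, with the threshold cycle Ct linear in log10 of the starting copy number. The sketch below fits such a line by least squares and inverts it for an unknown; the standards and Ct values are illustrative assumptions, not the study's calibration.

```python
import math

# Illustrative standards: (starting copies, measured Ct)
standards = [(1e3, 28.04), (1e4, 24.72), (1e5, 21.40), (1e6, 18.08)]

xs = [math.log10(n) for n, _ in standards]
ys = [ct for _, ct in standards]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))       # ~ -3.32 at 100% efficiency
intercept = my - slope * mx

def copies(ct):
    """Invert the standard curve: starting copy number for a measured Ct."""
    return 10 ** ((ct - intercept) / slope)

print(round(copies(23.0)))   # copy number for an unknown with Ct = 23
```

A slope near -3.32 corresponds to a doubling per cycle (100% amplification efficiency), which is the usual sanity check on such a curve.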
Bockmann, Michelle R.; Harris, Abbe V.; Bennett, Corinna N.; Ruba Odeh; Hughes, Toby E.; Townsend, Grant C
2011-01-01
Findings are presented from a prospective cohort study of the timing of primary tooth emergence and the timing of oral colonization by Streptococcus mutans (S. mutans) in Australian twins. The paper focuses on differences in colonization timing in genetically identical monozygotic (MZ) twins. Timing of tooth emergence was based on parental report. Colonization timing of S. mutans was established by plating samples of plaque and saliva on selective media at 3-monthly intervals and assessing colony mo...
High emergence of ESBL-producing E. coli cystitis: Time to get smarter in Cyprus
Leon eCantas
2016-01-01
Background: Widespread prevalence of extended-spectrum β-lactamase-producing Escherichia coli (ESBL-producing E. coli) limits the therapeutic options for infection and is a growing global health problem. In this study our aim was to investigate the antimicrobial resistance profile of E. coli in hospitalized and out-patients in Cyprus. Results: During the period 2010-2014, 389 strains of E. coli were isolated from urine samples of hospitalized and out-patients in Cyprus. ESBL-producing E. coli was observed in 53% of hospitalized and 44% of out-patients, the latest being in 2014. All ESBL-producing E. coli remained susceptible to amikacin and to carbapenems, except ertapenem (in-patients = 6%, out-patients = 11%). Conclusions: The high emergence of ESBL-producing E. coli in urine samples from hospitalized and out-patients is an extremely worrisome sign of the development of untreatable infections on the island in the near future. We therefore emphasize the immediate need to establish optimal therapy guidelines based on country-specific surveillance programs. The need for urgent prescription-habit changes and a ban on over-the-counter sale of antimicrobials at each segment of healthcare services is also discussed in this research.
High Emergence of ESBL-Producing E. coli Cystitis: Time to Get Smarter in Cyprus.
Cantas, Leon; Suer, Kaya; Guler, Emrah; Imir, Turgut
2015-01-01
Widespread prevalence of extended-spectrum β-lactamase-producing Escherichia coli (ESBL-producing E. coli) limits the therapeutic options for infection and is a growing global health problem. In this study our aim was to investigate the antimicrobial resistance profile of E. coli in hospitalized and out-patients in Cyprus. During the period 2010-2014, 389 strains of E. coli were isolated from urine samples of hospitalized and out-patients in Cyprus. ESBL-producing E. coli was observed in 53% of hospitalized and 44% of out-patients, the latest being in 2014. All ESBL-producing E. coli remained susceptible to amikacin and to carbapenems, except ertapenem (in-patients = 6%, out-patients = 11%). The high emergence of ESBL-producing E. coli in urine samples from hospitalized and out-patients is an extremely worrisome sign of the development of untreatable infections on the island in the near future. We therefore emphasize the immediate need to establish optimal therapy guidelines based on country-specific surveillance programs. The need for new treatment strategies, urgent prescription-habit changes and a ban on over-the-counter sale of antimicrobials at each segment of healthcare services is also discussed in this research.
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrates, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal-theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data.
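As a hedged, minimal sketch of the underlying idea (not the authors' SEM formulation), maximum-likelihood fitting of ARMA-type models can be illustrated in the simplest case, an AR(1), where the conditional ML estimate of the autoregressive coefficient reduces to least squares on the lagged series. All names and values below are illustrative.

```python
import random

def simulate_ar1(phi, n, seed=0):
    """Draw n observations from a zero-mean AR(1): x_t = phi * x_{t-1} + e_t."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def fit_ar1(xs):
    """Conditional maximum-likelihood estimate of phi (least squares on the lag)."""
    num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
    den = sum(xs[t - 1] ** 2 for t in range(1, len(xs)))
    return num / den

phi_hat = fit_ar1(simulate_ar1(phi=0.7, n=5000))
```

With a long simulated series the estimate lands close to the true coefficient; the interesting case in the abstract is the short-panel setting (T>N but both modest), which a full ML/SEM treatment handles more carefully.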
Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.
2012-01-01
This presentation reviews the development, integration, and testing of Near Real Time (NRT) MODIS forest % maximum NDVI change products resident to the USDA Forest Service (USFS) ForWarn system. ForWarn is an Early Warning System (EWS) tool for detecting and tracking regionally evident forest change, and includes the U.S. Forest Change Assessment Viewer (FCAV), a publicly available on-line geospatial data viewer for visualizing and assessing the context of this apparent forest change. NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI, derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed against three different historical baselines: 1) the previous year; 2) the previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum-value NDVI composites built over a 24-day interval and refreshed every 8 days, so that the resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands, 231.66 meters, which equates to approximately 5.4 hectares or 13.3 acres per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, noise-reduce, interpolate data voids in, and re-aggregate the historical NDVI into 24-day composites; custom MATLAB scripts are then used to temporally process the eMODIS NDVIs so that they are in sync with the historical NDVI products. Prior to posting, an in-house snow mask classification product
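The baseline comparison described above can be hedged into a toy calculation: the percent change of a current NDVI value against the maximum NDVI over each historical baseline window. The function and the pixel values below are illustrative assumptions, not ForWarn's actual code or data.

```python
def pct_ndvi_change(current, history):
    """Percent change of the current NDVI relative to the maximum NDVI
    observed over a historical baseline window."""
    baseline = max(history)
    return 100.0 * (current - baseline) / baseline

# Illustrative values for one pixel; years and NDVI numbers are made up.
ndvi_by_year = {2008: 0.80, 2009: 0.78, 2010: 0.82, 2011: 0.79}
current = 0.66
prev_year  = pct_ndvi_change(current, [ndvi_by_year[2011]])
prev_three = pct_ndvi_change(current, [ndvi_by_year[y] for y in (2009, 2010, 2011)])
all_years  = pct_ndvi_change(current, list(ndvi_by_year.values()))
```

Longer baselines use a larger historical maximum, so the same current value can register a stronger apparent decline against the all-years baseline than against the previous year alone.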
The times they are a-changin': carbapenems for extended-spectrum-β-lactamase-producing bacteria.
Rodríguez-Baño, Jesús
2015-09-01
Several antimicrobial agents are being investigated as alternatives to carbapenems in the treatment of infections caused by ESBL-producing Enterobacteriaceae, which may be useful in avoiding overuse of carbapenems in the context of recent global spread of carbapenem-resistant Enterobacteriaceae. The most promising candidates for invasive infections so far are β-lactam/β-lactamase inhibitor combinations and cephamycins.
Shiga toxin (Stx) producing E. coli (STEC) are a major family of foodborne pathogens of immense public health, zoonotic and economic significance in the US and worldwide. To date, there are no published reports on use of recombinase polymerase amplification (RPA) for STEC detection. The primary goal...
Aranha Junior, Ayrton Alves; Arend, Lavinia Nery; Ribeiro, Vanessa; Zavascki, Alexandre Prehn; Tuon, Felipe Francisco
2015-01-01
This study evaluated the efficacy of tigecycline (TIG), polymyxin B (PMB), and meropenem (MER) in 80 rats challenged with Klebsiella pneumoniae carbapenemase (KPC)-producing K. pneumoniae infection. A time-kill assay was performed with the same strain. Triple therapy and PMB+TIG were synergistic, promoted 100% survival, and produced negative peritoneal cultures, while MER+TIG showed lower survival and higher culture positivity than other regimens (P = 0.018) and was antagonistic. In vivo and in vitro studies showed that combined regimens, except MER+TIG, were more effective than monotherapies for this KPC-producing strain. PMID:25896686
Burn-up and Operation Time of Fuel Elements Produced in IPEN
Tondin, Julio Benedito Marin; Filho, Tufic Madi
2011-08-01
The aim of this paper is to present the work carried out during the operational and reliability tests of fuel elements produced at the Institute of Energetic and Nuclear Research (IPEN-CNEN/SP) since the 1980s. The study analyzed the evolution of U-235 burn-up and the time each element remained in the IEA-R1 research reactor. The fuel elements are of the MTR (Material Testing Reactor) type, the standard with 18 plates and the control with 12 plates, with a nominal mean enrichment of 20%.
Elizabeth A Osterndorff-Kahanek
Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain-region-dependent changes in gene coexpression networks in the amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points, with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes.
Using Student-Produced Time-Lapse Plant Movies to Communicate Concepts in Plant Biology
Marcia Harrison-Pitaniello
2013-01-01
Why do students think plants are “boring”? One factor may be that they do not see plant movement in real (i.e., their) time. This attitude may negatively impact their understanding of plant biology. Time-lapse movies of plants allow students to see the sophistication of movements involved in both organ development and orientation. The objective of this project was to develop simple methods to capture image sequences for lab analysis and for conversion into movies. The technology for making time-lapse movies is now easily attainable and fairly inexpensive, allowing its use at skill levels from grade school through college undergraduates. Presented are example time-lapse movie exercises from both an undergraduate plant physiology course and outreach activities. The time-lapse plant exercises are adaptable to explore numerous topics that incorporate science-standards core concepts, competencies, and disciplinary practices, as well as to integrate higher-order thinking skills and build skills in hypothesis development and communicating results to various audiences.
Brasted, P J; Döbrössy, M D; Robbins, T W; Dunnett, S B
1998-08-01
The dorsal striatum plays a crucial role in mediating voluntary movement. Excitotoxic striatal lesions in rats have previously been shown to impair the initiation but not the execution of movement in a choice reaction time task in an automated lateralised nose-poke apparatus (the "nine-hole box"). Conversely, when a conceptually similar reaction time task has been applied in a conventional operant chamber (or "Skinner box"), striatal lesions have been seen to impair the execution rather than the initiation of the lateralised movement. The present study was undertaken to compare directly these two results by training the same group of rats to perform a choice reaction time task in the two chambers and then comparing the effects of a unilateral excitotoxic striatal lesion in both chambers in parallel. Particular attention was paid to adopting similar parameters and contingencies in the control of the task in the two test chambers. After striatal lesions, the rats showed predominantly contralateral impairments in both tasks. However, they showed a deficit in reaction time in the nine-hole box but an apparent deficit in response execution in the Skinner box. This finding confirms the previous studies and indicates that differences in outcome are not simply attributable to procedural differences in the lesions, training conditions or tasks parameters. Rather, the pattern of reaction time deficit after striatal lesions depends critically on the apparatus used and the precise response requirements for each task.
Borhan, M. Z.; Ahmad, R.; Rusop, M.; Abdullah, S.
2012-11-01
Centella asiatica (C. asiatica) contains asiaticoside as a bioactive constituent that can potentially be used in the skin-healing process. Unfortunately, the normal powders are difficult for the body to absorb effectively. In order to improve their value in use, nano C. asiatica powder was prepared. The influence of milling time was investigated at 0.5, 2, 4, 6, 8 and 10 hours. The effect of ball milling at different times was characterized using particle size analysis and FTIR spectroscopy. The fineness of the ground product was evaluated by recording the z-average (nm), undersize distribution and polydispersity index (PdI). The results show that the smallest mean particle size is 233 nm, while the FTIR spectra show that there is no change in the major components of the C. asiatica powders with milling time.
Since it was first described in the mid-1990s, quantitative real time PCR (Q-PCR) has been widely used in many fields of biomedical research and molecular diagnostics. This method is routinely used to validate whole transcriptome analyses such as DNA microarrays, suppressive subtractive hybridizati...
Producing near-real-time intelligence: predicting the world of tomorrow
Barros, A.I.; Broek, A.C. van den; Dalen, J.A. van; Vecht, B. van der; Wevers, J.
2014-01-01
The complexity and dynamics of current military operations demand reliable and up-to-date intelligence and in particular near-real-time threat assessment. This paper explores the potential of operational analysis techniques in supporting military personnel in processing information from different so
Review of current GPS methodologies for producing accurate time series and their error sources
He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping
2017-05-01
The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors, and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and those due to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step by step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e
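As a hedged sketch of the functional model mentioned in that review, the tectonic rate of a synthetic daily position series (linear trend plus an annual signal plus white noise) can be recovered by ordinary least squares. A realistic analysis would also model colored noise, which changes the rate uncertainty; this toy omits that and every value here is invented.

```python
import math
import random

def linear_rate(times, positions):
    """Ordinary least-squares slope of a position time series (the 'tectonic rate')."""
    n = len(times)
    tbar = sum(times) / n
    pbar = sum(positions) / n
    num = sum((t - tbar) * (p - pbar) for t, p in zip(times, positions))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Synthetic 10-year daily series: 5 mm/yr trend + 2 mm annual signal + white noise.
rng = random.Random(1)
t = [d / 365.25 for d in range(3653)]   # time in years
x = [5.0 * ti + 2.0 * math.sin(2.0 * math.pi * ti) + rng.gauss(0.0, 1.5) for ti in t]
rate = linear_rate(t, x)                # close to 5 mm/yr over a near-integer span of years
```

Over a near-integer number of annual cycles the seasonal term contributes little bias to the slope, which is one reason long series are preferred for rate estimation.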
Konishi, Kazuyuki; Yonai, Miharu; Kaneyama, Kanako; Ito, Satoshi; Matsuda, Hideo; Yoshioka, Hajime; Nagai, Takashi; Imai, Kei
2011-10-01
The reproductive ability, milk-producing capacity, survival time and relationships of these parameters with telomere length were investigated in 4 groups of cows produced by somatic cell nuclear transfer (SCNT). Each group was produced using the same donor cells (6 Holstein (1H), 3 Holstein (2H), 4 Jersey (1J) and 5 Japanese Black (1B) cows). As controls, 47 Holstein cows produced by artificial insemination were used. The SCNT cows were artificially inseminated, and multiple deliveries were performed after successive rounds of breeding and conception. No correlation was observed between the telomere length and survival time in the SCNT cows. Causes of death of SCNT cows included accidents, accident-associated infections, inappropriate management, acute mastitis and hypocalcemia. The lifetime productivity of SCNT cows was superior to those of the controls and cell donor cows. All SCNT beef cows with a relatively light burden of lactation remained alive and showed significantly prolonged survival time compared with the cows in the SCNT dairy breeds. These results suggest that the lifetime productivity of SCNT cows was favorable, and their survival time was more strongly influenced by environmental burdens, such as pregnancy, delivery, lactation and feeding management, than by the telomere length.
Harri, Liliya
2009-10-01
The digital thermal technology for producing flexographic printing plates from photopolymer plates is one of the newest technologies. It makes it possible to develop flexographic plates without the use of any solvent. The process of producing flexographic printing plates by the digital thermal method consists of several main stages: back exposure, laser exposure, main exposure, thermal development, post exposure, and light finishing. Studies carried out using optical stereoscopic microscopy made it possible to determine the effect of main-exposure time to ultraviolet radiation on the dot area, diameter, and edge factor of halftone dots reproduced on flexographic printing plates produced by the digital thermal method, as well as on the quality of surface reproduction and on the profiles of free-standing printing microelements. The results of the microscopic studies have made it possible to define criteria for establishing the optimum main-exposure time of photopolymer plates used in the digital thermal technology for producing flexographic printing plates. A precise definition of these criteria will make it possible to reduce time-consuming control tests and to eliminate errors both in the process of manufacturing flexographic printing plates and in the printing process carried out with such plates.
Santillan, Arturo Orozco
2013-01-01
Results of numerical simulations of the sound field produced by a circular piston in a rigid baffle are presented. The aim was to calculate the acoustic streaming and the flow of mass generated by the sound field. For this purpose, the classical finite-difference time-domain method was implemented...
Maximum phonation time in pre-school children
Carla Aparecida Cielo
2008-08-01
Past studies on maximum phonation time (MPT) in children have obtained different results, showing that this measure can reflect the neuromuscular and aerodynamic control of voice production and can be used as an indicator for other forms of assessment, both qualitative and objective. AIM: To verify the MPT measures of 23 pre-school children aged four years to six years and eight months. MATERIAL AND METHOD: The sampling process comprised a questionnaire sent to parents, auditory screening and a perceptual-auditory voice assessment using the RASAT scale. Data collection consisted of the MPTs. STUDY DESIGN: Prospective cross-sectional. RESULTS: The mean MPTs for /a/, /s/ and /z/ were 7.42 s, 6.35 s and 7.19 s; MPT /a/ at six years was significantly longer than at four years; all MPTs increased as age increased; and the s/z ratio was close to one at all ages. CONCLUSIONS: The MPT values were higher than those reported in national (Brazilian) studies and lower than those reported in international studies. In addition, the age groups analyzed in the present study are in a period of nervous and muscular maturation, the immaturity being most evident at four years of age.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Use of Real Time Satellite Infrared and Ocean Color to Produce Ocean Products
Roffer, M. A.; Muller-Karger, F. E.; Westhaver, D.; Gawlikowski, G.; Upton, M.; Hall, C.
2014-12-01
Real-time data products derived from infrared and ocean color satellites are useful for several types of users around the world. Highly relevant applications include recreational and commercial fisheries, commercial towing vessel and other maritime and navigation operations, and other scientific and applied marine research. Uses of the data include developing sampling strategies for research programs, tracking of water masses and ocean fronts, optimizing ship routes, evaluating water quality conditions (coastal, estuarine, oceanic), and developing fisheries and essential fish habitat indices. Important considerations for users are data access and delivery mechanisms, and data formats. At this time, the data are being generated in formats increasingly available on mobile computing platforms, and are delivered through popular interfaces including social media (Facebook, Linkedin, Twitter and others), Google Earth and other online Geographical Information Systems, or are simply distributed via subscription by email. We review 30 years of applications and describe how we develop customized products and delivery mechanisms working directly with users. We review benefits and issues of access to government databases (NOAA, NASA, ESA), standard data products, and the conversion to tailored products for our users. We discuss advantages of different product formats and of the platforms used to display and to manipulate the data.
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
Factors affecting magnitude and time course of neuromuscular block produced by suxamethonium.
Vanlinthout, L E; van Egmond, J; de Boo, T; Lerou, J G; Wevers, R A; Booij, L H
1992-07-01
This study was designed to identify factors that significantly alter the magnitude and duration of suxamethonium-induced neuromuscular block in patients with an apparently normal genotype for pseudocholinesterase. One hundred and fifty-six adults (ages 18-65 yr) were allocated to 13 subgroups. Patients in each subgroup received suxamethonium 50-2000 micrograms kg-1. The mechanographic response of the adductor pollicis brevis muscle to ulnar nerve stimulation was recorded. The ED50 was found to be 167 micrograms kg-1, ED90 was 316 micrograms kg-1 and ED95 was 392 micrograms kg-1. The duration of action (delta t) was in agreement with earlier published results. The magnitude of block was dose-related and decreased with increasing onset time (ton) and pseudocholinesterase activity (PChA). Neither age nor gender affected the degree of suxamethonium-induced block. Delta t was dose-related, decreased with increasing PChA, and was shorter for women. Age and ton had no effect on delta t.
The Influence of Variation in Time and HCl Concentration to the Glucose Produced from Kepok Banana
Widodo M, Rohman; Noviyanto, Denny; RM, Faisal
2016-01-01
Kepok banana (Musa paradisiaca) is a plant that has many advantages, from its fruit, stems, leaves and flowers to its cob. However, we tend to take benefit only from the fruit. We grow and harvest the fruit without taking advantage of the other parts, so they become waste, or a nest for pest animals, if not used. The idea of taking benefit from banana crop residues, especially the cob, has rarely been explored. This study is an introduction to the use of the banana cob, especially the glucose it contains. The study uses hydrolysis with HCl as a catalyst, with concentration variations of 0.4 N, 0.6 N and 0.8 N and hydrolysis times of 20 minutes, 25 minutes and 30 minutes. The stages of the hydrolysis include preparation of materials, the hydrolysis process itself, and analysis of the results using Fehling's solution and titration against a standard glucose solution. HCl is used as a catalyst because it is cheaper than an enzyme with the same function. NaOH 60% is used to neutralize the pH of the filtrate resulting from hydrolysis. From the results of the analysis, the largest glucose yield is at a concentration of 0.8 N and a reaction time of 30 minutes: 6.25 gram glucose per 20 gram dry sample, a conversion of 27.22%.
Oviaño, M; Fernández, B; Fernández, A; Barba, M J; Mouriño, C; Bou, G
2014-11-01
Bacteria that produce extended-spectrum β-lactamases (ESBLs) are an increasing healthcare problem, and their rapid detection is a challenge that must be overcome in order to optimize antimicrobial treatment and patient care. Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry (MS) has been used to determine resistance to β-lactams, including carbapenems in Enterobacteriaceae, but the methodology has not been fully validated as it remains time-consuming. We aimed to assess whether MALDI-TOF can be used to detect ESBL-producing Enterobacteriaceae from positive blood culture bottles in clinical practice. In the assay, 141 blood cultures were tested; 13 of them were real bacteraemias and 128 corresponded to blood culture bottles seeded with bacterial clinical isolates. Bacteraemias were analysed by MALDI-TOF after a positive growth result, and the 128 remaining blood cultures 24 h after the bacterial seeding. β-lactamase activity was determined through the profile of peaks associated with the antibiotics cefotaxime and ceftazidime and their hydrolyzed forms. Clavulanic acid was added to rule out the presence of non-ESBL mechanisms. Overall, the data show a 99% (103 out of 104) sensitivity in detecting ESBL in positive blood cultures. Data were obtained in 90 min (maximum 150 min). The proposed methodology has a great impact on the early detection of ESBL-producing Enterobacteriaceae from positive blood cultures, being a rapid and efficient method and allowing early administration of an appropriate antibiotic therapy.
Price-Maker Wind Power Producer Participating in a Joint Day-Ahead and Real-Time Market
Delikaraoglou, Stefanos; Papakonstantinou, Athanasios; Ordoudis, Christos;
2015-01-01
-stage stochastic problem, co-optimizing day-ahead and real-time dispatch. In this framework, we introduce a bilevel model to derive the optimal bid of a strategic wind power producer acting as price-maker both in day-ahead and real-time stages. The proposed model is a Mathematical Program with Equilibrium Constraints (MPEC) that is reformulated as a single-level Mixed-Integer Linear Program (MILP), which can be readily solved. Our analysis shows that adopting strategic behaviour may improve the producer’s expected profit as the share of wind power increases. However, this incentive diminishes in power systems...
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W
2016-03-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
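A minimal, hedged sketch of the ML approach that the abstract compares against residual minimization: maximize the Poisson log-likelihood of binned decay counts over a grid of candidate lifetimes, profiling out the amplitude. The bin width, amplitude, grid and simulation are illustrative assumptions, and convolution with an instrument response function (included in the paper's fits) is omitted here.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for sampling a Poisson variate (adequate for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def fit_lifetime(counts, dt, taus):
    """Grid-search maximum-likelihood estimate of a single-exponential lifetime
    from Poisson-distributed bin counts; the amplitude is profiled out."""
    total = sum(counts)
    best_tau, best_ll = None, -float("inf")
    for tau in taus:
        shape = [math.exp(-(i + 0.5) * dt / tau) for i in range(len(counts))]
        amp = total / sum(shape)                  # amplitude matching the total counts
        ll = sum(n * math.log(amp * s) - amp * s  # Poisson log-likelihood (up to a constant)
                 for n, s in zip(counts, shape))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Simulate sparse counts from a 530 ps decay in 10 ps bins (illustrative numbers).
rng = random.Random(42)
true_tau, dt, nbins = 530.0, 10.0, 400
counts = [poisson(rng, 3.0 * math.exp(-(i + 0.5) * dt / true_tau)) for i in range(nbins)]
tau_hat = fit_lifetime(counts, dt, [float(t) for t in range(300, 801, 5)])
```

Even with only a few counts at the decay maximum, the Poisson likelihood stays well defined, which is the robustness property the paper reports for sparse STED-type data sets.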
Fei Lin
2016-03-01
With its large capacity, the total urban rail transit energy consumption is very high; thus, energy saving operations are quite meaningful. The effective use of regenerative braking energy is the mainstream method for improving the efficiency of energy saving. This paper examines the optimization of train dwell time and builds a multiple train operation model for energy conservation of a power supply system. By changing the dwell time, the braking energy can be absorbed and utilized by other traction trains as efficiently as possible. The application of genetic algorithms is proposed for the optimization, based on the current schedule. Next, to validate the correctness and effectiveness of the optimization, a real case is studied. Actual data from the Beijing subway Yizhuang Line are employed to perform the simulation, and the results indicate that the optimization method of the dwell time is effective.
Time-Resolved Optical Emission Spectroscopy Diagnosis of CO2 Laser-Produced SnO2 Plasma
Lan, Hui; Wang, Xinbing; Zuo, Duluo
2016-09-01
The spectral emission and plasma parameters of SnO2 plasmas have been investigated. A planar ceramic SnO2 target was irradiated by a CO2 laser with a full width at half maximum of 80 ns. The temporal behavior of the specific emission lines from the SnO2 plasma was characterized. The intensities of Sn I and Sn II lines first increased, and then decreased with the delay time. The results also showed a faster decay of Sn I atoms than that of Sn II ionic species. The temporal evolutions of the SnO2 plasma parameters (electron temperature and density) were deduced. The measured temperature and density of the SnO2 plasma are 4.38 eV to 0.5 eV and 11.38×10¹⁷ cm⁻³ to 1.1×10¹⁷ cm⁻³, for delay times between 0.1 μs and 2.2 μs. We also investigated the effect of the laser pulse energy on the SnO2 plasma. Supported by the National Natural Science Foundation of China (No. 11304235) and the Director Fund of WNLO.
Hoeij, F.B. van; Stadhouders, P.H.G.M.; Weusten, B.L.A.M. [St Antonius Ziekenhuis, Department of Gastroenterology, Nieuwegein (Netherlands); Keijsers, R.G.M. [St Antonius Ziekenhuis, Department of Nuclear Medicine, Nieuwegein (Netherlands); Loffeld, B.C.A.J. [Zuwe Hofpoort Ziekenhuis, Department of Internal Medicine, Woerden (Netherlands); Dun, G. [Ziekenhuis Rivierenland, Department of Internal Medicine, Tiel (Netherlands)
2015-01-15
In patients undergoing ¹⁸F-FDG PET/CT, incidental colonic focal lesions can be indicative of inflammatory, premalignant or malignant lesions. The maximum standardized uptake value (SUVmax) of these lesions, representing the FDG uptake intensity, might be helpful in differentiating malignant from benign lesions, and thereby in determining the urgency of colonoscopy. The aim of our study was to assess the incidence and underlying pathology of incidental PET-positive colonic lesions in a large cohort of patients, and to determine the usefulness of the SUVmax in differentiating benign from malignant pathology. The electronic records of all patients who underwent FDG PET/CT from January 2010 to March 2013 in our hospital were retrospectively reviewed. The main indications for PET/CT were: characterization of an indeterminate mass on radiological imaging, suspicion or staging of malignancy, and suspicion of inflammation. In patients with incidental focal FDG uptake in the large bowel, data regarding subsequent colonoscopy were retrieved, if performed within 120 days. The final diagnosis was defined using colonoscopy findings, combined with additional histopathological assessment of the lesion, if applicable. Of 7,318 patients analysed, 359 (5 %) had 404 foci of unexpected colonic FDG uptake. In 242 of these 404 lesions (60 %), colonoscopy follow-up data were available. Final diagnoses were: adenocarcinoma in 25 (10 %), adenoma in 90 (37 %), and benign in 127 (53 %). The median [IQR] SUVmax was significantly higher in adenocarcinoma (16.6 [12 - 20.8]) than in benign lesions (8.2 [5.9 - 10.1]; p < 0.0001), non-advanced adenoma (8.3 [6.1 - 10.5]; p < 0.0001) and advanced adenoma (9.7 [7.2 - 12.6]; p < 0.001). The receiver operating characteristic curve of SUVmax for malignant versus nonmalignant lesions had an area under the curve of 0.868 (SD ± 0.038), the optimal cut-off value being 11.4 (sensitivity 80 %, specificity 82
The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map
Rosselot, Donald; Hall, Ernest L.
2005-10-01
This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation utilizes the Intel Performance Primitives library and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an intelligent robot to "see" for path planning and obstacle avoidance.
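The histogram-reduction idea behind the XH-map transform can be sketched as follows: each image column's disparity histogram is thresholded, so a tall vertical surface shows up as a populated (column, disparity) cell in the top-down map. This is an illustrative reconstruction of the idea, not the authors' calibrated Intel IPP/OpenCV implementation:

```python
import numpy as np

def xh_obstacle_map(disparity, num_disp=64, min_count=20):
    """Collapse a disparity image (H x W) into a top-down obstacle map
    (num_disp x W): for every image column, histogram the disparity
    values; a bin with many pixels suggests a vertical surface (an
    obstacle) at that column/depth."""
    h, w = disparity.shape
    obstacle = np.zeros((num_disp, w), dtype=bool)
    for col in range(w):
        hist = np.bincount(disparity[:, col], minlength=num_disp)[:num_disp]
        obstacle[hist >= min_count, col] = True
    return obstacle

# toy scene: flat background at disparity 5, an obstacle patch at disparity 30
disp = np.full((480, 640), 5, dtype=np.int64)
disp[100:300, 200:240] = 30
omap = xh_obstacle_map(disp)
```

In a real pipeline the floor-ground rows would first be suppressed via the calibration step the abstract mentions; here the flat background simply appears as its own populated disparity row.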
Auffret, Marc; Pilote, Alexandre; Proulx, Emilie; Proulx, Daniel; Vandenberg, Grant; Villemur, Richard
2011-12-15
Geosmin and 2-methylisoborneol (MIB) have been associated with off-flavour problems in fish and seafood products, generating a strong negative impact for aquaculture industries. Although most of the producers of geosmin and MIB have been identified as Streptomyces species or cyanobacteria, Streptomyces spp. are thought to be responsible for the synthesis of these compounds in indoor recirculating aquaculture systems (RAS). The detection of genes involved in the synthesis of geosmin and MIB can be a relevant indicator of the beginning of off-flavour events in RAS. Here, we report a real-time polymerase chain reaction (qPCR) protocol targeting geoA sequences that encode a germacradienol synthase involved in geosmin synthesis. New geoA-related sequences were retrieved from eleven geosmin-producing Actinomycete strains, among them two Streptomyces strains isolated from two RAS. Combined with geoA-related sequences available in gene databases, we designed primers and standards suitable for qPCR assays targeting mainly Streptomyces geoA. Using our qPCR protocol, we succeeded in measuring the level of geoA copies in sand filter and biofilters in two RAS. This study is the first to apply qPCR assays to detect and quantify the geosmin synthesis gene (geoA) in RAS. Quantification of geoA in RAS could permit the monitoring of the level of geosmin producers prior to the occurrence of geosmin production. This information will be most valuable for fish producers to manage further development of off-flavour events.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Francisco Estrada
In this paper evidence of anthropogenic influence over the warming of the 20th century is presented and the debate regarding the time-series properties of global temperatures is addressed in depth. The 20th century global temperature simulations produced for the Intergovernmental Panel on Climate Change's Fourth Assessment Report and a set of the radiative forcing series used to drive them are analyzed using modern econometric techniques. Results show that both temperatures and radiative forcing series share similar time-series properties and a common nonlinear secular movement. This long-term co-movement is characterized by the existence of time-ordered breaks in the slope of their trend functions. The evidence presented in this paper suggests that while natural forcing factors may help explain the warming of the first part of the century, anthropogenic forcing has been its main driver since the 1970s. In terms of Article 2 of the United Nations Framework Convention on Climate Change, significant anthropogenic interference with the climate system has already occurred, and the current climate models are capable of accurately simulating the response of the climate system, even if it consists in a rapid or abrupt change, to changes in external forcing factors. This paper presents a new methodological approach for conducting time-series based attribution studies.
Rosangela Alves de Mendonça
2012-12-01
PURPOSE: to measure the limits of maximum phonation time before and after application of the Stemple and Gerdeman Vocal Function Exercises Program in elementary school teachers, with and without voice alterations, working in the municipality of Niterói, RJ, Brazil. METHOD: 17 female teachers who spontaneously agreed to take part performed the exercise program: sustained vowel /i/, ascending and descending glissando on the word /nol/, and the musical scale Do-Re-Mi-Fa-Sol emitted on /ol/, each for the maximum phonation time. The time measure was collected before and after the program using the vowel /ε/, after each participant had undergone videolaryngostroboscopy. RESULTS: there was an expressive gain in maximum phonation time from pre- to post-exercise, confirming the value of a program that prioritizes performing the exercises for the longest possible phonation time. CONCLUSION: the Stemple and Gerdeman Vocal Function Exercises Program favored an increase in within-subject maximum phonation time, allowing better vocal health conditions in professional and social performance.
Time-of-Flight Measurement of a 355-nm Nd:YAG Laser-Produced Aluminum Plasma
M. F. Baclayon
2003-06-01
An aluminum target in air was irradiated by a 355-nm Nd:YAG laser with a pulse width of 10 ns and a repetition rate of 10 Hz. The emission spectra of the laser-produced aluminum plasma were investigated at varying distances from the target surface. The results show the presence of a strong continuum very close to the target surface, but as the plasma evolves in space, the continuum gradually disappears and the emitted spectra are dominated by stronger line emissions. The observed plasma species are the neutral and singly ionized aluminum, and their speeds were investigated using an optical time-of-flight measurement technique. Results show that the speeds of the plasma species decrease gradually with distance from the target surface. Comparison of the computed speeds of the plasma species shows that the singly ionized species have relatively greater kinetic energy than the neutral species.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
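The differentiation step described above can be sketched numerically: maximize P(V) = V·I(V) over a model I-V curve and read off the voltage and current of maximum power. The single-diode parameters below are purely illustrative, not from the cited project:

```python
import numpy as np

# Hypothetical single-diode panel model: I(V) = I_sc - I_0*(exp(V/V_t) - 1)
I_sc, I_0, V_t = 5.0, 1e-8, 1.0  # short-circuit current, saturation current, thermal-voltage scale

V = np.linspace(0.0, 25.0, 200001)          # sweep the operating voltage
I = I_sc - I_0 * (np.exp(V / V_t) - 1.0)    # current at each voltage
P = V * I                                    # power curve P(V)

# the maximum of P(V) is where dP/dV = 0; on a fine grid, argmax suffices
k = np.argmax(P)
V_mp, I_mp, P_mp = V[k], I[k], P[k]
```

Analytically the same condition reads I(V) + V·dI/dV = 0; the grid search above just locates that stationary point without symbolic differentiation.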
Stimpert, Alison K; Wiley, David N; Au, Whitlow W L; Johnson, Mark P; Arsenault, Roland
2007-10-22
Humpback whales (Megaptera novaeangliae) exhibit a variety of foraging behaviours, but neither they nor any baleen whale are known to produce broadband clicks in association with feeding, as do many odontocetes. We recorded underwater behaviour of humpback whales in a northwest Atlantic feeding area using suction-cup attached, multi-sensor, acoustic tags (DTAGs). Here we describe the first recordings of click production associated with underwater lunges from baleen whales. Recordings of over 34000 'megapclicks' from two whales indicated relatively low received levels at the tag (between 143 and 154 dB re 1 μPa pp), most energy below 2 kHz, and interclick intervals often decreasing towards the end of click trains to form a buzz. All clicks were recorded during night-time hours. Sharp body rolls also occurred at the end of click bouts containing buzzes, suggesting feeding events. This acoustic behaviour seems to form part of a night-time feeding tactic for humpbacks and also expands the known acoustic repertoire of baleen whales in general.
Smith, Kirsty F; de Salas, Miguel; Adamson, Janet; Rhodes, Lesley L
2014-03-07
The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
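The Toeplitz/Levinson step named in the abstract is the classic Levinson-Durbin recursion for the error-predicting filter. Below is a sketch of that recursion under its usual textbook formulation (not the authors' full receiver-function code); note the reflection coefficient it produces is the quantity the abstract says stays below 1:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for a linear prediction
    (error-predicting) filter from an autocorrelation sequence r.
    Returns prediction coefficients a (with a[0] = 1) and the final
    prediction-error power E."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = r[0]
    for m in range(1, order + 1):
        # reflection coefficient for this recursion step
        k = -(r[m] + np.dot(a[1:m], r[m-1:0:-1])) / E
        a[1:m] = a[1:m] + k * a[m-1:0:-1]  # update interior coefficients
        a[m] = k
        E *= (1.0 - k * k)                 # error power shrinks while |k| < 1
    return a, E

# autocorrelation of an AR(1) process with rho = 0.5: the order-2
# predictor recovers a = [1, -0.5, 0] and error power 0.75
r = np.array([1.0, 0.5, 0.25])
a, E = levinson_durbin(r, order=2)
```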
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions: p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; in the second step, we enforce the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Varas, M. I.; Orteu, E.; Laserna, J. A.
2014-07-01
This paper describes the process followed in preparing the flooding manual of the Cofrentes NPP: identifying the maximum time available to the plant to isolate a moderate- or high-energy pipe break before it affects safety-related (1E) equipment involved in the safe shutdown of the reactor or in spent fuel pool cooling, and determining the recommended isolation mode based on the location of the break, the location of the 1E equipment, and human factors. (Author)
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
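The extreme-value step can be illustrated with the simplest case: for a Gutenberg-Richter source with Poisson occurrence, the probability that no event in a future interval exceeds magnitude m has a closed form. All parameters below are illustrative, not estimates from the study:

```python
import numpy as np

def prob_max_leq(m, T, lam=10.0, b=1.0, m0=4.0):
    """P(largest magnitude observed in T years <= m), assuming events
    above the threshold magnitude m0 arrive as a Poisson process with
    rate lam per year, and each event independently exceeds m with the
    Gutenberg-Richter probability 10**(-b*(m - m0))."""
    p_exceed = 10.0 ** (-b * (m - m0))
    # no exceedance in T years: Poisson thinning of the exceeding events
    return np.exp(-lam * T * p_exceed)
```

For example, with these illustrative parameters, the chance that a decade passes with nothing above magnitude 7 is exp(-0.1): large maxima are rare, which is exactly why the abstract argues such estimates are hard to test in finite time windows.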
Rubin, Stephen P.; Reisenbichler, Reginald R.; Slatton, Stacey L.; Wetzel, Lisa A.; Hayes, Michael C.
2012-01-01
The accuracy of a model that predicts time between fertilization and maximum alevin wet weight (MAWW) from incubation temperature was tested for steelhead Oncorhynchus mykiss from Dworshak National Fish Hatchery on the Clearwater River, Idaho. MAWW corresponds to the button-up fry stage of development. Embryos were incubated at warm (mean=11.6°C) or cold (mean=7.3°C) temperatures and time between fertilization and MAWW was measured for each temperature. Model predictions of time to MAWW were within 1% of measured time to MAWW. Mean egg weight ranged from 0.101-0.136 g among females (mean = 0.116). Time to MAWW was positively related to egg size for each temperature, but the increase in time to MAWW with increasing egg size was greater for embryos reared at the warm than at the cold temperature. We developed equations accounting for the effect of egg size on time to MAWW for each temperature, and also for the mean of those temperatures (9.3°C).
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Any Chance Left for China's Large Engineering Tire Producers?
无
2007-01-01
The service that large engineering tire producers offer is to satisfy the lowest transport cost: moving the maximum volume of minerals at the lowest environmental cost and in the shortest period of time.
Piao, Daqing; Holyoak, G Reed; Patel, Sanjay
2016-01-01
We demonstrate a laparoscopic applicator probe and a method for real-time en-face topographic mapping of near-surface heterogeneity, for potential use in intraoperative margin assessment during minimally invasive oncological procedures. The probe fits in a 12 mm port and houses at its maximum 128 copper-coated 750 μm fibers that form radially alternating illumination (70 fibers) and detection (58 fibers) channels. By simultaneously illuminating the 70 source channels of the probe in contact with a scattering medium and concurrently measuring the light diffusely propagated to the 58 detector channels, the presence of near-surface optical heterogeneities can be resolved in an en-face 9.5 mm field-of-view in real time. Visualization of a subsurface margin of strong attenuation contrast at a depth of up to 3 mm is demonstrated at one wavelength at a frame rate of 1.25 Hz.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
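For the Mean Energy Model mentioned above, the entropy-maximizing distribution under a mean-energy constraint takes the Gibbs form, with the Lagrange multiplier fixed by the constraint. A minimal sketch using a crude one-dimensional grid search for the multiplier (illustrative only, not the paper's game-theoretic machinery):

```python
import numpy as np

def maxent_gibbs(E, target_mean, betas=np.linspace(-20.0, 20.0, 20001)):
    """Maximum-entropy distribution over states with energies E subject
    to a mean-energy constraint: the solution is p_i proportional to
    exp(-beta * E_i); here the multiplier beta is found by grid search."""
    E = np.asarray(E, dtype=float)
    best_gap, best_p = np.inf, None
    for beta in betas:
        w = np.exp(-beta * E)
        p = w / w.sum()
        gap = abs(p @ E - target_mean)  # how far is the mean energy from target?
        if gap < best_gap:
            best_gap, best_p = gap, p
    return best_p

# two states with energies 0 and 1, constrained mean energy 0.25
p = maxent_gibbs([0.0, 1.0], target_mean=0.25)
```

With no binding constraint (target mean equal to the unconstrained average), the search returns the uniform distribution, the global entropy maximizer.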
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
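The correntropy objective at the heart of the MCC framework is the mean of a Gaussian kernel applied to the residuals. A sketch showing why maximizing it is robust to outlying labels (an illustration of the criterion only, not the paper's regularized alternating optimization):

```python
import numpy as np

def correntropy(y_pred, y_true, sigma=1.0):
    """Empirical correntropy: mean Gaussian kernel on the residuals.
    A squared loss grows without bound on an outlier, while the kernel
    saturates near zero, so maximizing correntropy effectively
    down-weights badly mislabeled samples."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return float(np.mean(np.exp(-r**2 / (2.0 * sigma**2))))

clean = correntropy([1.0, -1.0, 0.5], [1.0, -1.0, 0.5])    # perfect fit
noisy = correntropy([1.0, -1.0, 100.0], [1.0, -1.0, 0.5])  # one wild label
```

The single wild label reduces the objective by at most 1/n (here from 1 to about 2/3), whereas it would dominate a mean-squared-error objective entirely.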
刘圣波; 刘贺; 赵燕东
2013-01-01
Solar photovoltaic technology has been widely used in modern agriculture, but because of the volatility of solar power it is hard to maximize the use of solar energy. To improve the conversion rate of photovoltaic solar panels and extend the application of conventional ripple correlation control, this paper proposes a discrete-time ripple control algorithm: by discretizing the ripple correlation technique, the maximum power point tracking problem is converted into a discrete sampling-and-control problem. Taking the solar panel output voltage as the state variable, the system is sampled at the local maxima and minima of the voltage ripple, and the discrete-time ripple control algorithm then drives the system rapidly to the maximum power point. Ripple correlation control (RCC) is a real-time optimal method particularly suitable for power converter control, and its objective in a solar photovoltaic (PV) system is to maximize the harvested energy. The algorithm was simulated in Simulink: under conditions of 1000 and 200 W/cm2 at 25 °C, it tracked the maximum power point of the solar system quickly and accurately, with a tracking accuracy of up to 96%; when the external environment changed from 1000 to 200 W/cm2, the system accurately locked onto the new maximum power point within 0.1 s.
Gaibani, Paolo; Galea, Anna; Fagioni, Marco; Ambretti, Simone; Sambri, Vittorio; Landini, Maria Paola
2016-10-01
We evaluated a real-time single-peak (11.109-Da) detection assay based on matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) for the identification of Klebsiella pneumoniae carbapenemase (KPC)-producing K. pneumoniae. Our results demonstrated that the 11.109-Da peak was detected in 88.2% of the KPC producers. Analysis of blaKPC-producing K. pneumoniae showed that the gene encoding the 11.109-Da protein was commonly (97.8%) associated with the Tn4401a isoform.
Galea, Anna; Fagioni, Marco; Ambretti, Simone; Sambri, Vittorio; Landini, Maria Paola
2016-01-01
We evaluated a real-time single-peak (11.109-Da) detection assay based on matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) for the identification of Klebsiella pneumoniae carbapenemase (KPC)-producing K. pneumoniae. Our results demonstrated that the 11.109-Da peak was detected in 88.2% of the KPC producers. Analysis of blaKPC-producing K. pneumoniae showed that the gene encoding the 11.109-Da protein was commonly (97.8%) associated with the Tn4401a isoform. PMID:27413192
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Ohnaka, K.; Weigelt, G.; Hofmann, K.-H.
2017-01-01
Aims: Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at 2 R⋆. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) as well as high-spectral resolution long-baseline interferometric observations with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). Methods: We observed W Hya with VLT/SPHERE-ZIMPOL at three wavelengths in the continuum (645, 748, and 820 nm), in the Hα line at 656.3 nm, and in the TiO band at 717 nm. The VLTI/AMBER observations were carried out in the wavelength region of the CO first overtone lines near 2.3 μm with a spectral resolution of 12 000. Results: The high-spatial resolution polarimetric images obtained with SPHERE-ZIMPOL have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 R⋆) to the star. We detected the formation of a new dust cloud as well as the disappearance of one of the dust clouds detected at the first epoch. The Hα and TiO emission extends to 150 mas (6 R⋆), and the Hα images obtained at two epochs reveal time variations. The degree of linear polarization measured at minimum light, which ranges from 13 to 18%, is higher than that observed at pre-maximum light. The power-law-type limb-darkened disk fit to the AMBER data in the continuum results in a limb-darkened disk diameter of 49.1 ± 1.5 mas and a limb-darkening parameter of 1.16 ± 0.49, indicating that the atmosphere is more extended with weaker limb-darkening compared to pre-maximum light. Our Monte Carlo radiative transfer modeling shows that the second-epoch SPHERE-ZIMPOL data can be explained by a shell of 0.1 μm grains of Al2O3, Mg2SiO4, and MgSiO3 with a 550 nm optical depth of 0.6 ± 0.2 and inner and outer radii of 1.3 R⋆ and 10 ± 2 R⋆, respectively. Our modeling suggests the predominance of small (0
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
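The classic-MaxEnt construction described above (point probabilities treated as a function of exact constraint values) can be illustrated with a minimal numerical sketch. The six-sided-die example and all numbers below are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Classic MaxEnt sketch: a six-sided die constrained to a mean of 4.5, treated
# as exact. The MaxEnt solution has the exponential-family form
# p_i ∝ exp(lam * x_i); we solve for the Lagrange multiplier lam by root finding.
x = np.arange(1, 7)
target_mean = 4.5

def mean_error(lam):
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x - target_mean

lam = brentq(mean_error, -10.0, 10.0)   # bracket is wide enough for this target
p = np.exp(lam * x)
p /= p.sum()
```

The generalized approach of the paper would then propagate a density over `target_mean` through this map from constraint value to `p`, yielding a density over MaxEnt distributions rather than a single point estimate.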
Mary Hokazono
CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia); and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics.
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
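The time-zero extrapolation idea above can be sketched with synthetic numbers. The per-block decay factor and peak volumes below are invented for illustration; real HSQC(i) attenuation depends on relaxation and transfer efficiency:

```python
import numpy as np

# HSQC(0) extrapolation sketch: peak volumes from spectra with n = 1, 2, 3
# repetitions of the HSQC block decay geometrically, V_n = V_0 * f**n.
# Fitting ln(V_n) against n and extrapolating to n = 0 recovers V_0, which is
# directly proportional to concentration.
n = np.array([1, 2, 3])
volumes = np.array([8.0, 6.4, 5.12])    # synthetic data: V_0 = 10, f = 0.8
slope, intercept = np.polyfit(n, np.log(volumes), 1)
v0 = np.exp(intercept)                  # extrapolated time-zero volume
decay = np.exp(slope)                   # per-block attenuation factor
```

With two internal references of known concentration, as in the paper, the extrapolated volume `v0` of each analyte would then be mapped linearly to an absolute concentration.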
B. Kayisoglu
2006-01-01
This study was conducted to investigate the effect of storage period and conditions on the chemical properties of boiled grape juice (pekmez) produced from the grape variety Kınalı Yapıncak using classical and vacuum methods. Pekmez samples were stored in 250 cc jars. Products obtained using the two different production methods were stored for 10 months at room conditions and at +4 ºC. Starting from the beginning of storage, mineral analyses were repeated every two months. Average copper, manganese, phosphorus and sodium contents in pekmez samples produced by the vacuum method were higher than in those produced by the classical method at the end of the storage period, but the calcium content was higher in samples produced by the classical method. Zinc, iron and potassium contents did not differ significantly between the two methods. In conclusion, mineral contents were generally better in pekmez produced by the vacuum method than by the classical method. Phosphorus, sodium, potassium, calcium, copper, zinc and manganese contents were affected significantly by the storage period, but iron was not. In addition, storage conditions did not affect sodium, zinc and iron contents.
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
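The closing remark (determining the CC length for an SFCL design from the maximum permissible voltage) reduces to a one-line calculation. The 0.72 V/cm value is the SJTU figure quoted above for a 100 ms quench; the 1 kV design voltage is an arbitrary assumption for illustration:

```python
# Sizing sketch: tape length needed so that no element exceeds its maximum
# permissible voltage. v_per_cm = 0.72 V/cm (SJTU CC, 100 ms quench, per the
# abstract); v_required = 1000 V is a hypothetical design voltage.
def required_length_cm(v_required: float, v_per_cm: float) -> float:
    return v_required / v_per_cm

length = required_length_cm(1000.0, 0.72)   # ~1389 cm of tape for a 1 kV drop
```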
G. C. R. Garcia
2011-03-01
Concrete produced with Portland cement is one of the building materials most widely used worldwide. However, due to its highly complex structure, its properties require in-depth study. Concrete is produced from a mortar of cement and sand with the addition of coarse aggregates, and its properties rest basically on that constituent mortar. The aim of this work was to study the change in stiffness of two mortar compositions, cured at 25 ºC, with cement-to-sand ratios of 1:2 and 1:3, as a function of curing time, using the variation of the Young modulus as the measuring parameter. The results showed that the Young modulus increases up to a maximum value on the 8th day, and that this increase is more pronounced during the first three days. An analysis of the results indicates that a large part of the cement hydration process, involving the formation of the chemical bonds responsible for the mortar stiffness, takes place in the early days of curing.
程刘胜
2015-01-01
In mining UWB high-accuracy positioning systems, underground multipath, non-line-of-sight propagation, and limited network time synchronization accuracy lead to large deviations in estimated arrival times. Building on a rational layout of underground wireless base stations, this paper proposes a maximum likelihood TOA (Time of Arrival) estimation algorithm based on multi-carrier time-frequency iteration: fractional delays are iterated to narrow the estimation error and determine a suitable search step, yielding an accurate TOA estimate of the signal. Simulation results show that the time-frequency iterative maximum likelihood TOA estimation algorithm converges faster than the non-iterative algorithm, and that at low signal-to-noise ratios it improves estimation accuracy effectively compared with the classical TOA estimation algorithm.
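As a generic illustration of sub-sample TOA refinement (a plain cross-correlation plus parabolic-interpolation sketch, not the paper's multi-carrier time-frequency iteration; all signal parameters are invented):

```python
import numpy as np

# Generic TOA sketch: coarse estimate from the cross-correlation peak, then a
# fractional (sub-sample) correction by parabolic interpolation around it.
fs = 1000.0                                   # sample rate in Hz (assumed)
t = np.arange(256) / fs
pulse = np.exp(-((t - 0.05) ** 2) / (2 * 0.002 ** 2))    # transmitted pulse
delay_samples = 37                            # true delay: 37 samples = 37 ms
rx = np.concatenate([np.zeros(delay_samples), pulse])[: len(pulse)]

corr = np.correlate(rx, pulse, mode="full")
k = int(np.argmax(corr))
coarse = k - (len(pulse) - 1)                 # integer-sample lag estimate
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2) # sub-sample offset in (-0.5, 0.5)
toa_seconds = (coarse + frac) / fs
```

The iterative scheme in the paper plays a similar role to `frac` here: it refines an initial coarse estimate with successively smaller search steps.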
Ohnaka, Keiichi; Hofmann, Karl-Heinz
2016-01-01
Our recent visible polarimetric images of the well-studied AGB star W Hya taken at pre-maximum light (phase 0.92) with VLT/SPHERE-ZIMPOL have revealed clumpy dust clouds close to the star at ~2 Rstar. We present second-epoch SPHERE-ZIMPOL observations of W Hya at minimum light (phase 0.54) in the continuum (645, 748, and 820 nm), in the Halpha line (656.3 nm), and in the TiO band (717 nm) as well as high-spectral resolution long-baseline interferometric observations in 2.3 micron CO lines with the AMBER instrument at the Very Large Telescope Interferometer (VLTI). The high-spatial resolution polarimetric images have allowed us to detect clear time variations in the clumpy dust clouds as close as 34-50 mas (1.4-2.0 Rstar) to the star. We detected the formation of a new dust cloud and the disappearance of one of the dust clouds detected at the first epoch. The Halpha and TiO emission extends to ~150 mas (~6 Rstar), and the Halpha images reveal time variations. The degree of linear polarization is higher at mi...
Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao
2016-09-01
The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime (human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.
Study on Calculation Model of Time Producing Runoff in Winter Wheat Farmland
刘战东; 高阳; 巩文军; 段爱旺
2012-01-01
In order to study the effects of several influencing factors on the time to runoff production and the corresponding quantitative relationships, the influence of rainfall intensity, canopy cover and initial soil moisture profile on the time to runoff production in winter wheat farmland was studied by simulated rainfall. The results indicated that, under the same initial soil moisture profile conditions, rainfall intensity and the time to runoff production followed a significant power function relationship (P<0.01). The correlation between leaf area index (LAI) and the time to runoff production was linear and reached a significant level, while plant height correlated poorly with it. The initial soil moisture at 0-20 cm and 0-40 cm showed a clear positive linear correlation with the time to runoff production (P<0.01); the effect of initial soil moisture below 40 cm was relatively small. Considering all factors, a power function model for calculating the time to runoff production in winter wheat farmland was established through multiple regression analysis. Upon examination, the model had good simulation results.
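A power-function relationship like the one reported between rainfall intensity and runoff-onset time can be fitted by linear regression in log-log space. The data below are synthetic, purely for illustration:

```python
import numpy as np

# Fit t = a * I**b (runoff-onset time vs rainfall intensity) by linear
# regression of ln(t) on ln(I); slope = b, intercept = ln(a).
I = np.array([0.5, 1.0, 2.0, 4.0])    # rainfall intensity (arbitrary units)
t = 12.0 * I ** -0.7                  # synthetic onset times following a power law
b, log_a = np.polyfit(np.log(I), np.log(t), 1)
a = np.exp(log_a)                     # recovers a = 12, b = -0.7 for this data
```

A multiple-regression version of the same idea, with canopy cover and soil moisture as additional regressors, corresponds to the model built in the paper.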
Tastu, J.; Pinson, P.; Madsen, Henrik
2013-09-01
The emphasis in this work is placed on generating space-time trajectories (also referred to as scenarios) of wind power generation. This calls for prediction of multivariate densities describing wind power generation at a number of distributed locations and for a number of successive lead times. A modelling approach taking advantage of sparsity of precision matrices is introduced for the description of the underlying space-time dependence structure. The proposed parametrization of the dependence structure accounts for such important process characteristics as non-constant conditional precisions and direction-dependent cross-correlations. Accounting for the space-time effects is shown to be crucial for generating high quality scenarios. (Author)
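The precision-matrix idea can be sketched minimally under a Gaussian assumption. The tridiagonal Q below is a toy stand-in for the paper's parametrized space-time precision structure: if Q = L Lᵀ (Cholesky), then solving Lᵀx = z with z ~ N(0, I) yields x ~ N(0, Q⁻¹), so scenarios are drawn without ever forming the dense covariance:

```python
import numpy as np

# Scenario-sampling sketch from a sparse precision matrix Q.
rng = np.random.default_rng(0)
n = 5                                           # e.g. 5 lead times at one site
Q = (np.diag([2.0] * n)
     + np.diag([-0.8] * (n - 1), 1)
     + np.diag([-0.8] * (n - 1), -1))           # toy tridiagonal precision
L = np.linalg.cholesky(Q)                       # Q = L @ L.T
z = rng.standard_normal(n)
scenario = np.linalg.solve(L.T, z)              # one trajectory ~ N(0, inv(Q))
```

Sparsity of Q is what makes this cheap at scale: the Cholesky factor of a banded precision matrix is itself banded, so each scenario costs O(n) rather than O(n²).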
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
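A toy version of Glauber dynamics on matchings can make the idea concrete. This is a simplified sketch of the hard-core-style chain the abstract alludes to, with an ad-hoc removal probability standing in for the fugacity; it is not the paper's algorithm and carries none of its running-time guarantees:

```python
import random

# Glauber-dynamics sketch on matchings: repeatedly pick a random edge; if it is
# in the matching, remove it with small probability; if both endpoints are
# free, add it. We track the largest matching seen along the way.
def glauber_matching(edges, n_vertices, steps=10000, seed=1):
    rng = random.Random(seed)
    matched = [None] * n_vertices      # vertex -> partner, or None if free
    in_matching = set()
    best = set()
    for _ in range(steps):
        u, v = rng.choice(edges)
        if (u, v) in in_matching:
            if rng.random() < 0.1:     # removal probability (toy "fugacity")
                in_matching.discard((u, v))
                matched[u] = matched[v] = None
        elif matched[u] is None and matched[v] is None:
            in_matching.add((u, v))
            matched[u], matched[v] = v, u
        if len(in_matching) > len(best):
            best = set(in_matching)
    return best

cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]  # a 6-cycle
matching = glauber_matching(cycle_edges, 6)
```

On small graphs such a chain quickly reaches near-maximum matchings; the contribution of the paper is showing how a related chain yields a provably fast algorithm.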
Time-dependent H-like and He-like Al lines produced by ultra-short pulse laser
Kato, Takako; Kato, Masatoshi [National Inst. for Fusion Science, Nagoya (Japan); Shepherd, R.; Young, B.; More, R.; Osterheld, Al
1998-03-01
We have performed numerical modeling of time-resolved x-ray spectra from thin foil targets heated by the LLNL Ultra-short pulse (USP) laser. The targets were aluminum foils of thickness ranging from 250 Å to 1250 Å, heated with 120 fsec pulses of 400 nm light from the USP laser. The laser energy was approximately 0.2 Joules, focused to a 3 micron spot size for a peak intensity near 2 × 10^19 W/cm^2. Lyα and Heα lines were recorded using a 900 fsec x-ray streak camera. We calculate the effective ionization, recombination and emission rate coefficients including density effects for H-like and He-like aluminum ions using a collisional radiative model. We calculate time-dependent ion abundances using these effective ionization and recombination rate coefficients. The time-dependent electron temperature and density used in the calculation are based on an analytical model for the hydrodynamic expansion of the target foils. During the laser pulse the target is ionized. After the laser heating stops, the plasma begins to recombine. Using the calculated time-dependent ion abundances and the effective emission rate coefficients, we calculate the time-dependent Lyα and Heα lines. The calculations reproduce the main qualitative features of the experimental spectra. (author)
Lambert, Max R.
2015-01-01
In amphibians, abnormal metamorph sex ratios and sexual development have almost exclusively been considered in response to synthetic compounds like pesticides or pharmaceuticals. However, endocrine-active plant chemicals (i.e. phytoestrogens) are commonly found in agricultural and urban waterways hosting frog populations with deviant sexual development. Yet the effects of these compounds on amphibian development remain predominantly unexplored. Legumes, like clover, are common in agricultural fields and urban yards and exude phytoestrogen mixtures from their roots. These root exudates serve important ecological functions and may also be a source of phytoestrogens in waterways. I show that clover root exudate produces male-biased sex ratios and accelerates male metamorphosis relative to females in low and intermediate doses of root exudate. My results indicate that root exudates are a potential source of contaminants impacting vertebrate development and that humans may be cultivating sexual abnormalities in wildlife by actively managing certain plant species. PMID:27019728
Anonymous
2003-01-01
According to the theory given in paper [1], long-time electrolysis experiments with a titanium cathode in heavy water (D2O) were carried out many times using the open-loop multi-parameter electrolysis calorimetry system established by us. A special feature is that the cathode is a titanium rod and the anode is a platinum wire. The early experimental result [3] was reproduced in our recent experiments. The obvious "excess heat" phenomenon takes place only when the electrolysis lasts more than ten days, and the amount of "excess heat" increases with electrolysis time. The "excess heat" can also be obtained from the "boiling to dry" experiment. In the recent experiment, we obtained an amount of "excess heat" of about 3.6 times the input energy, an "excess heat" power of 76.5 W, and an "excess heat" power density of 121.7 W/cm3. After the electrolysis, the crystal structure of the Ti cathode was measured with an x-ray diffraction apparatus. We found that the crystal structure of the Ti cathode had changed from its hexagonal structure to the face-centered cubic structure of TiD2. This result is in agreement with Gou's theory mentioned in reference [1].
Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu
2016-04-01
National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on the Romanian territory, which is dominated by the intermediate-depth earthquakes (60-200 km) from the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and an advanced seismic acquisition system are crucial to achieving this objective. The software package used to perform the automatic real-time locations is SeisComP3. An accurate choice of the SeisComP3 setting parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding any false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate the earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA, filters applied to the waveforms) on a data set of representative earthquakes of the local seismicity. The results are compared with the locations from the Romanian catalogue ROMPLUS.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
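The quoted core mass can be checked against the standard solar mass value:

```python
# Unit check of the quoted result: 2.69e30 kg expressed in solar masses.
M_core = 2.69e30          # kg, maximum iron core mass from the abstract
M_SUN = 1.989e30          # kg, commonly used solar mass value
ratio = M_core / M_SUN    # ≈ 1.35, matching the quoted 1.35 solar masses
```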
Rodríguez, Alicia; Rodríguez, Mar; Luque, M. Isabel
2011-01-01
Ochratoxin A (OTA) is a mycotoxin synthesized by a variety of different fungi, most of them from the genera Penicillium and Aspergillus. Early detection and quantification of OTA producing species is crucial to improve food safety. In the present work, two protocols of real-time qPCR based on SYBR...
Cattani, Mirko; Maccarana, Laura; Hansen, Hanne Helene
2013-01-01
This experiment compared linear relationships among end-products of rumen fermentation measured at the time (t½) at which a feed produces half of its asymptotic gas production, or at 48 h. Meadow hay and corn grain were incubated for t½ (16 and 9 h, respectively) or for 48 h in glass bottles. E...
The analysis and kinetic energy balance of an upper-level wind maximum during intense convection
Fuelberg, H. E.; Jedlovec, G. J.
1982-01-01
The purpose of this paper is to analyze the formation and maintenance of the upper-level wind maximum which formed between 1800 and 2100 GMT, April 10, 1979, during the AVE-SESAME I period, when intense storms and tornadoes were experienced (the Red River Valley tornado outbreak). Radiosonde stations participating in AVE-SESAME I are plotted (centered on Oklahoma). National Meteorological Center radar summaries near the times of maximum convective activity are mapped, and height and isotach plots are given, where the formation of an upper-level wind maximum over Oklahoma is the most significant feature at 300 mb. The energy balance of the storm region is seen to change dramatically as the wind maximum forms. During much of its lifetime, the upper-level wind maximum is maintained by ageostrophic flow that produces cross-contour generation of kinetic energy and by the upward transport of midtropospheric energy. Two possible mechanisms for the ageostrophic flow are considered.
Effect of milling time on TiB2-Al2O3 composite produced by combustion synthesis
Aminikia, Behzad
2012-09-01
In the present research, TiB2-Al2O3 composite was fabricated by self-propagating high-temperature synthesis (SHS) of mechanically activated powders. H3BO3, TiO2 and Al as starting materials were mechanically activated for 1, 3, 6 and 9 h and then pressed to form pellets. Green compacts were placed in a tube furnace under an argon atmosphere for synthesis. The XRD patterns showed that the TiB2-Al2O3 composite was successfully fabricated by the thermal explosion mode of combustion synthesis. It was also found that 6 h is the optimum time for mechanical activation; increasing the milling time to 9 h had no significant effect other than refining the crystallite sizes of the components, especially TiB2.
Hatfield, M. C.; Webley, P. W.; Saiet, E., II
2015-12-01
Unmanned aerial systems (UAS) provide a unique capability for emergency management and real-time hazard assessment, with access to hazardous environments that may be off limits for manned aircraft, while reducing the risk to personnel and the loss of ground assets. When dealing with hazards such as forest fires and volcanic eruptions, there is a need to assess the location of the fire/flow front and where best to assign ground personnel to reduce the risk to local populations and infrastructure. Thermal infrared cameras provide the ideal tool to detect subtle changes in the developing fire/flow front while providing data 24/7. There are limits to the detecting capabilities of these cameras given the wavelengths used and the image resolution available. Given the large thermal contrast between the hot flow front and the surrounding landscape, the data can be used to map out the location and the changes seen as the front of the flow/fire advances. To map the complete hazard, either the UAS has to be flown at an altitude that captures the event in one image or the data have to be mosaicked together. Higher altitudes lead to coarser resolution imagery, and therefore we will show how thermal infrared data can be mosaicked to provide the highest spatial resolution map of the hazard. We will present results using different UAS and thermal cameras, including adding neutral density filters to detect hotter thermal targets. Timely generation of these mosaicked maps in a real-time environment is critical for those assessing the ongoing event, and we will show how these maps can be generated quickly with the necessary spatial and thermal accuracy, while discussing the requirements needed to generate thermal infrared maps of hazardous events that are both useful for quick real-time assessment and also for further investigation in research projects.
2011-01-01
This paper uses variation created by parental deaths in the amount of time children spend with each parent to examine whether the parent-child correlation in schooling outcomes stems from a causal relationship. Using a large sample of Israeli children who lost one parent during childhood, we find a series of striking patterns which show that the relationship is largely causal. Relative to children who did not lose a parent, the education of the deceased parent is less important in determining...
Vanlinthout, L E; Booij, L H; van Egmond, J; Robertson, E N
1996-03-01
We have compared the ability of equipotent concentrations of isoflurane and sevoflurane to enhance the effect of non-depolarizing neuromuscular blocking drugs. Ninety ASA I and II patients of both sexes, aged 18-50 yr, were stratified into three blocker groups (Vec, Pan and Atr), to undergo neuromuscular block with vecuronium (n = 30), pancuronium (n = 30) or atracurium (n = 30), respectively. Within each group, patients were allocated randomly to one of three anaesthetic subgroups to undergo maintenance of anaesthesia with: (1) alfentanil-nitrous oxide-oxygen (n = 10); (2) alfentanil-nitrous oxide-oxygen-isoflurane (n = 10); or (3) alfentanil-nitrous oxide-oxygen-sevoflurane (n = 10) anaesthesia. During maintenance of anaesthesia, end-tidal concentrations of isoflurane, sevoflurane and nitrous oxide were 0.95, 1.70 and 70%, respectively. Both the evoked integrated electromyogram and mechanomyogram of the adductor pollicis brevis muscle were measured simultaneously. In the Vec and Pan groups, a total dose of 40 micrograms kg-1 of vecuronium or pancuronium, respectively, was given, and in the Atr group a total dose of atracurium 100 micrograms kg-1. Each blocker was given in four equal doses and administered cumulatively. We showed that 0.95% isoflurane and 1.70% sevoflurane (corresponding to 0.8 MAC of each inhalation anaesthetic, omitting the MAC contribution of nitrous oxide) augmented and prolonged the neuromuscular block produced by vecuronium, pancuronium and atracurium to a similar degree.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Jin F
2016-05-01
Feng Jin,1,2 Hui Zhu,2 Zheng Fu,3 Li Kong,2 Jinming Yu2 1School of Medicine and Life Sciences, University of Jinan-Shandong Academy of Medical Sciences, 2Department of Radiation Oncology, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences, 3Department of Nuclear Medicine, Shandong Cancer Hospital Affiliated to Shandong University, Shandong Academy of Medical Sciences, Jinan, People's Republic of China Purpose: The purpose of this study was to investigate the prognostic value of the change in the maximum standardized uptake value (SUVmax) calculated by dual-time-point 18F-fluorodeoxyglucose positron emission tomography (PET) imaging in patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: We conducted a retrospective review of 115 patients with advanced NSCLC who underwent pretreatment dual-time-point 18F-fluorodeoxyglucose PET acquired at 1 and 2 hours after injection. The SUVmax from early images (SUVmax1) and the SUVmax from delayed images (SUVmax2) were recorded and used to calculate the SUVmax changes, including the SUVmax increment (ΔSUVmax) and the percent change of the SUVmax (%ΔSUVmax). Progression-free survival (PFS) and overall survival (OS) were determined by the Kaplan–Meier method and were compared with the studied PET parameters and the clinicopathological prognostic factors in univariate analyses, and multivariate analyses were constructed using Cox proportional hazards regression. Results: One hundred and fifteen consecutive patients were reviewed, and the median follow-up time was 12.5 months. The estimated median PFS and OS were 3.8 and 9.6 months, respectively. In univariate analysis, SUVmax1, SUVmax2, ΔSUVmax, %ΔSUVmax, clinical stage, and Eastern Cooperative Oncology Group (ECOG) scores were significant prognostic factors for PFS. Similar results were significantly correlated with OS, except %ΔSUVmax. In multivariate analysis, ΔSUVmax and %ΔSUVmax were significant
Estimating the maximum potential revenue for grid-connected electricity storage
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
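The arbitrage-revenue idea can be illustrated with a toy brute-force search standing in for the paper's linear program (the device here is lossless with unit capacity, and the price series is hypothetical, not CAISO data):

```python
from itertools import product

def max_arbitrage_revenue(prices, capacity=1.0, step=1.0):
    """Brute-force the charge/discharge schedule that maximizes
    arbitrage revenue for a lossless storage device (a toy stand-in
    for the paper's linear-programming formulation)."""
    best = 0.0
    for actions in product((-1, 0, 1), repeat=len(prices)):
        soc, revenue, feasible = 0.0, 0.0, True
        for p, a in zip(prices, actions):
            soc -= a * step          # discharging (a=+1) lowers state of charge
            if not (0.0 <= soc <= capacity):
                feasible = False
                break
            revenue += p * a * step  # sell when a=+1, buy when a=-1
        if feasible and revenue > best:
            best = revenue
    return best

print(max_arbitrage_revenue([10, 50, 20, 60]))  # buy low, sell high twice → 80
```

An LP solver replaces the exponential enumeration in practice; the brute force only serves to make the state-of-charge constraint and the objective concrete.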
2D Time-lapse Resistivity Monitoring of an Organic Produced Gas Plume in a Landfill using ERT.
Amaral, N. D.; Mendonça, C. A.; Doherty, R.
2014-12-01
The objective of this project is to study a landfill located on the margins of the Tietê River, in São Paulo, Brazil, using the electrical resistivity tomography (ERT) method. Due to the huge organic matter concentrations in the São Paulo Basin quaternary sediments, there is depth-related biogas accumulation (CH4 and CO2) in the subsurface, induced by anaerobic degradation of the organic matter. 2D resistivity sections were obtained from a test area starting in March 2012, for a total of seven datasets, the most recent dated October 2013. The surveyed line is 56 m long, with an electrode spacing of 2 m. In addition, there are two boreholes along the line (one with 3 electrodes and the other with 2) in order to improve data quality and precision. The boreholes also have a multi-level sampling system that indicates the fluid (gas or water) present as a function of depth. Our results made it possible to map the position and extent of the gas plume in the sections, where it appears as a positive resistivity anomaly, with the gas level at approximately 5 m depth. With the time-lapse analysis (Matlab script) of the 2D resistivity sections obtained from the site, it was possible to map how the biogas volume and position in the landfill change over time. Our preliminary results show a preferential gas pathway through the studied subsurface area. A consistent relation between the gas depth and microbiological data on archaea and bacteria populations was also observed.
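The time-lapse step (performed with a Matlab script in the study) amounts to a percent-change computation between inverted resistivity sections; a minimal numpy sketch with made-up values:

```python
import numpy as np

def timelapse_percent_change(rho_base, rho_monitor):
    """Percent resistivity change between a baseline and a monitor
    2D section, cell by cell. Positive values indicate increased
    resistivity (e.g., gas replacing pore water); the inversion
    itself is assumed done upstream."""
    rho_base = np.asarray(rho_base, dtype=float)
    rho_monitor = np.asarray(rho_monitor, dtype=float)
    return 100.0 * (rho_monitor - rho_base) / rho_base

base = np.array([[100.0, 100.0], [100.0, 100.0]])  # ohm·m, baseline section
mon  = np.array([[150.0, 100.0], [100.0,  80.0]])  # ohm·m, later survey
print(timelapse_percent_change(base, mon))  # +50% cell = candidate gas anomaly
```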
Hodgson, Dominic A.; Graham, Alastair G. C.; Roberts, Stephen J.; Bentley, Michael J.; Cofaigh, Colm Ó.; Verleyen, Elie; Vyverman, Wim; Jomelli, Vincent; Favier, Vincent; Brunstein, Daniel; Verfaillie, Deborah; Colhoun, Eric A.; Saunders, Krystyna M.; Selkirk, Patricia M.; Mackintosh, Andrew; Hedding, David W.; Nel, Werner; Hall, Kevin; McGlone, Matt S.; Van der Putten, Nathalie; Dickens, William A.; Smith, James A.
2014-09-01
This paper is the maritime and sub-Antarctic contribution to the Scientific Committee for Antarctic Research (SCAR) Past Antarctic Ice Sheet Dynamics (PAIS) community Antarctic Ice Sheet reconstruction. The overarching aim for all sectors of Antarctica was to reconstruct the Last Glacial Maximum (LGM) ice sheet extent and thickness, and map the subsequent deglaciation in a series of 5000 year time slices. However, our review of the literature found surprisingly few high quality chronological constraints on changing glacier extents on these timescales in the maritime and sub-Antarctic sector. Therefore, in this paper we focus on an assessment of the terrestrial and offshore evidence for the LGM ice extent, establishing minimum ages for the onset of deglaciation, and separating evidence of deglaciation from LGM limits from those associated with later Holocene glacier fluctuations. Evidence included geomorphological descriptions of glacial landscapes, radiocarbon dated basal peat and lake sediment deposits, cosmogenic isotope ages of glacial features and molecular biological data. We propose a classification of the glacial history of the maritime and sub-Antarctic islands based on this assembled evidence. These include: (Type I) islands which accumulated little or no LGM ice; (Type II) islands with a limited LGM ice extent but evidence of extensive earlier continental shelf glaciations; (Type III) seamounts and volcanoes unlikely to have accumulated significant LGM ice cover; (Type IV) islands on shallow shelves with both terrestrial and submarine evidence of LGM (and/or earlier) ice expansion; (Type V) Islands north of the Antarctic Polar Front with terrestrial evidence of LGM ice expansion; and (Type VI) islands with no data. Finally, we review the climatological and geomorphological settings that separate the glaciological history of the islands within this classification scheme.
Time-dependent characteristics of Cu/CuS/n-GaAs/In structure produced by SILAR method
Sağlam, M.; Güzeldir, B., E-mail: msaglam@atauni.edu.tr
2016-09-15
Highlights: • The CuS thin film used in the Cu/n-GaAs structure is grown by the SILAR method. • There has been no report on the ageing characteristics of this junction in the literature. • The properties of the Cu/CuS/n-GaAs/In structure are examined with different methods. • It has been shown that the Cu/CuS/n-GaAs/In structure has a stable interface. - Abstract: The aim of this study is to explain the effects of ageing on the electrical properties of a Cu/n-GaAs Schottky barrier diode with a copper sulphide (CuS) interfacial layer. CuS thin films are deposited on n-type GaAs substrate by the Successive Ionic Layer Adsorption and Reaction (SILAR) method at room temperature. The structural and morphological properties of the films have been investigated by Scanning Electron Microscopy (SEM) and X-Ray Diffraction (XRD) techniques. The XRD analysis of as-grown films showed the single-phase covellite, with hexagonal crystal structure built around two preferred orientations corresponding to (102) and (108) atomic planes. The ageing effects on the electrical properties of the Cu/CuS/n-GaAs/In structure have been investigated. The current–voltage (I–V) measurements at room temperature have been carried out to study the change in electrical characteristics of the devices as a function of ageing time. The main electrical parameters, such as ideality factor (n), barrier height (Φ{sub b}), series resistance (R{sub s}), leakage current (I{sub 0}), and interface states (N{sub ss}) for this structure have been calculated. The results show that the main electrical parameters of the device remained virtually unchanged.
Perez-Bustamante, R.; Estrada-Guel, I.; Miki-Yoshida, M.; Martinez-Sanchez, R., E-mail: roberto.martiez@cimav.edu.mx [Centro de Investigacion en Materiales Avanzados (CIMAV), Laboratorio Nacional de Nanotecnologia, Miguel de Cervantes No.120, C.P. 31109, Chihuahua, Chih. (Mexico)]; Perez-Bustamante, F. [Universidad Autonoma de Chihuahua (UACH), Facultad de Ingenieria, Circuito No. 1 Nuevo Campus Universitario, C.P. 31125, Chihuahua, Chih. (Mexico)]; Licea-Jimenez, L. [Centro de Investigacion en Materiales Avanzados S.C. (CIMAV), Unidad Mty, Autopista Monterrey-Aeropuerto Km 10, A. P. 43, C.P. 66600, Apodaca, N.L. (Mexico)]
2013-01-15
Carbon nanotube/2024 aluminum alloy (CNT/Al{sub 2024}) composites were fabricated with a combination of mechanical alloying (MA) and powder metallurgy routes. Composites were microstructurally and mechanically evaluated in the sintered condition. A homogeneous dispersion of CNTs in the Al matrix was observed by field emission scanning electron microscopy. High-resolution transmission electron microscopy confirmed not only the presence of well-dispersed CNTs but also needle-like aluminum carbide (Al{sub 4}C{sub 3}) crystals in the Al matrix. The formation of Al{sub 4}C{sub 3} was attributed to the interaction between the outer shells of the CNTs and the Al matrix during the MA process, with crystallization taking place after the sintering process. The mechanical behavior of the composites was evaluated by Vickers microhardness measurements, indicating a significant improvement in hardness as a function of the CNT content. This improvement was associated with a homogeneous dispersion of CNTs and the presence of Al{sub 4}C{sub 3} in the aluminum alloy matrix. - Highlights: • The 2024 aluminum alloy was reinforced by CNTs by a mechanical alloying process. • Composites were microstructurally and mechanically evaluated after sintering. • The greater the CNT concentration, the greater the hardness of the composites. • Higher hardness in composites is achieved at 20 h of milling. • The formation of Al{sub 4}C{sub 3} does not present a direct relationship with the milling time.
Pineux, N.; Lisein, J.; Swerts, G.; Bielders, C. L.; Lejeune, P.; Colinet, G.; Degré, A.
2017-03-01
Erosion and deposition modelling should rely on field data. Currently these data are seldom available at large spatial scales and/or at high spatial resolution. In addition, conventional erosion monitoring approaches are labour intensive and costly. This calls for the development of new approaches for field erosion data acquisition. As a result of rapid technological developments and low cost, unmanned aerial vehicles (UAV) have recently become an attractive means of generating high resolution digital elevation models (DEMs). The use of UAV to observe and quantify gully erosion is now widely established. However, in some agro-pedological contexts, soil erosion results from multiple processes, including sheet and rill erosion, tillage erosion and erosion due to harvest of root crops. These diffuse erosion processes often represent a particular challenge because of the limited elevation changes they induce. In this study, we propose to assess the reliability and development perspectives of UAV to locate and quantify erosion and deposition in a context of an agricultural watershed with silt loam soils and a smooth relief. Erosion and deposition rates derived from high resolution DEM time series are compared to field measurements. The UAV technique demonstrates a high level of flexibility and can be used, for instance, after a major erosive event. It delivers a very high resolution DEM (pixel size: 6 cm) which allows us to compute high resolution runoff pathways. This could enable us to precisely locate runoff management practices such as fascines. Furthermore, the DEMs can be used diachronically to extract elevation differences before and after a strongly erosive rainfall and be validated by field measurements. While the analysis for this study was carried out over 2 years, we observed a tendency along the slope from erosion to deposition. Erosion and deposition patterns detected at the watershed scale are also promising. Nevertheless, further development in the
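Using DEMs diachronically, as described above, is a DEM-of-difference computation; a minimal numpy sketch, with the level-of-detection threshold chosen arbitrarily here rather than taken from the study:

```python
import numpy as np

def dem_of_difference(dem_t0, dem_t1, lod=0.05):
    """Elevation change between two UAV-derived DEMs of the same
    grid. Cells whose change is below the level of detection
    (lod, metres — an assumed value) are masked to 0 as noise.
    Negative values = erosion, positive = deposition."""
    dod = np.asarray(dem_t1, dtype=float) - np.asarray(dem_t0, dtype=float)
    return np.where(np.abs(dod) >= lod, dod, 0.0)

before = np.array([[1.00, 1.00], [1.00, 1.00]])  # metres a.s.l.
after  = np.array([[1.02, 0.90], [1.00, 1.10]])  # after an erosive rainfall
print(dem_of_difference(before, after))  # ±2 cm change suppressed as noise
```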
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
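The Wiener index mentioned above is the sum of pairwise shortest-path distances over all vertex pairs; for a tree it can be computed by BFS from every vertex (a generic sketch, not the authors' algorithm):

```python
from collections import deque

def wiener_index(adj):
    """Wiener index of an unweighted graph: sum of shortest-path
    distances over all unordered vertex pairs. adj maps each vertex
    to a list of its neighbours; BFS from every source suffices."""
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted from both endpoints

# Path on 4 vertices: distances 1+2+3+1+2+1 = 10
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(path4))  # → 10
```

Maximizing this quantity over all trees with a prescribed degree sequence is the chemical-graph-theory problem the paper settles.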
Denschlag, Carla; Rieder, Johann; Vogel, Rudi F; Niessen, Ludwig
2014-05-02
Trichothecene mycotoxins such as deoxynivalenol (DON), nivalenol (NIV) and T-2 toxin are produced by a variety of Fusarium spp. on cereals in the field and may be ingested by consumption of commodities and products made thereof. The toxins inhibit eukaryotic protein biosynthesis and may thus impair human and animal health. Aimed at rapid and sensitive detection of the most important trichothecene-producing Fusarium spp. in a single analysis, a real-time duplex loop-mediated isothermal amplification (LAMP) assay was set up. Two sets of LAMP primers were designed independently to amplify a partial sequence of the tri6 gene in Fusarium (F.) graminearum and of the tri5 gene in Fusarium sporotrichioides, respectively. Each of the two sets detected a limited number of the established trichothecene-producing Fusarium species. However, combination of the two sets in one duplex assay enabled detection of F. graminearum, Fusarium culmorum, Fusarium cerealis, F. sporotrichioides, Fusarium langsethiae and Fusarium poae in a group-specific manner. No cross reactions were detected with purified DNA from 127 other fungal species or with cereal DNA. To demonstrate the usefulness of the assay, 100 wheat samples collected from all over the German state of Bavaria were analyzed for the trichothecene mycotoxin DON by HPLC and for the presence of trichothecene producers by the new real-time duplex LAMP assay in parallel analyses. The LAMP assay showed positive results for all samples with a DON concentration exceeding 163 ppb. The major advantage of the duplex LAMP assay is that the presence of six of the major trichothecene-producing Fusarium spp. can be detected in a rapid and user-friendly manner with only one single assay. To our knowledge this is the first report of the use of a multiplex LAMP assay for fungal organisms.
Samanipour, Saer; Langford, Katherine; Reid, Malcolm J; Thomas, Kevin V
2016-09-09
Gas chromatography coupled with high resolution time of flight mass spectrometry (GC-HR-TOFMS) has gained popularity for the target and suspect analysis of complex samples. However, confident detection of target/suspect analytes in complex samples, such as produced water, remains a challenging task. Here we report on the development and validation of a two stage algorithm for the confident target and suspect analysis of produced water extracts. We performed both target and suspect analysis for 48 standards, which were a mixture of 28 aliphatic hydrocarbons and 20 alkylated phenols, in 3 produced water extracts. The two stage algorithm produces a chemical standard database of spectra, in the first stage, which is used for target and suspect analysis during the second stage. The first stage is carried out through five steps via an algorithm here referred to as unique ion extractor (UIE). During the first step the m/z values in the spectrum of a standard that do not belong to that standard are removed in order to produce a clean spectrum and then during the last step the cleaned spectrum is calibrated. The Dot-product algorithm, during the second stage, uses the cleaned and calibrated spectra of the standards for both target and suspect analysis. We performed the target analysis of 48 standards in all 3 samples via conventional methods, in order to validate the two stage algorithm. The two stage algorithm was demonstrated to be more robust, reliable, and less sensitive to the signal-to-noise ratio (S/N), when compared to the conventional method. The Dot-product algorithm showed lower potential in producing false positives compared to the conventional methods, when dealing with complex samples. We also evaluated the effect of the mass accuracy on the performances of Dot-product algorithm. Our results indicated the crucial importance of HR-MS data and the mass accuracy for confident suspect analysis in complex samples.
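A spectral match of the kind applied in the second stage can be sketched as a normalized dot product (cosine similarity) over shared m/z values; the exact weighting used by the paper's Dot-product algorithm may differ:

```python
import math

def dot_product_score(spec_a, spec_b):
    """Cosine (normalized dot-product) similarity between two mass
    spectra represented as {m/z: intensity} dicts. 1.0 = identical
    relative intensity pattern, 0.0 = no shared peaks."""
    mzs = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(m, 0.0) * spec_b.get(m, 0.0) for m in mzs)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy spectra with hypothetical fragment ions at m/z 57 and 71
print(dot_product_score({57: 100, 71: 50}, {57: 100, 71: 50}))  # → 1.0
```

In a target/suspect workflow, the cleaned and calibrated library spectrum is scored against each candidate spectrum and a threshold on this score controls the false-positive rate.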
Design of an IRIG-B Time Format Code Generation and Demodulation Circuit
周国平; 屈少君; 韩亮
2012-01-01
The IRIG-B time format code is a time-code standard for time synchronization between different systems, published by the IRIG organization of the United States. As an internationally used time code, IRIG-B has been widely applied in time-information transmission. This paper introduces the format and specification of the IRIG-B time code, presents the design of an IRIG-B code generation and demodulation circuit for time-setting terminals, and gives the corresponding software program flow.
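IRIG-B carries time of day as BCD fields inside its 100-bit, one-second frame; an illustrative encoding of just the seconds/minutes/hours fields (bit widths follow the IRIG-B convention; position identifiers, index markers and control bits are omitted for brevity):

```python
def bcd_field(value, units_bits, tens_bits):
    """Split a decimal value into IRIG-B style BCD units/tens groups,
    emitted least-significant bit first."""
    units, tens = value % 10, value // 10
    bits = lambda v, n: [(v >> i) & 1 for i in range(n)]
    return bits(units, units_bits) + bits(tens, tens_bits)

def irig_b_time_bits(hours, minutes, seconds):
    """BCD time-of-day bits of an IRIG-B frame: seconds use 4+3 bits,
    minutes 4+3, hours 4+2 — enough to encode 00:00:00 to 23:59:59."""
    return (bcd_field(seconds, 4, 3)
            + bcd_field(minutes, 4, 3)
            + bcd_field(hours, 4, 2))

print(irig_b_time_bits(12, 34, 56))  # 20 BCD bits for 12:34:56
```

A demodulator performs the reverse mapping after recovering the bit values from the pulse widths of the modulated carrier.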
Trimpin, Sarah; Ren, Yue; Wang, Beixi; Lietz, Christopher B; Richards, Alicia L; Marshall, Darrell D; Inutan, Ellen D
2011-07-15
A new matrix compound, 2-nitrophloroglucinol, is reported which not only produces highly charged ions similar to electrospray ionization (ESI) under atmospheric pressure (AP) and intermediate pressure (IP) laserspray ionization (LSI) conditions but also the most highly charged ions so far observed for small proteins in mass spectrometry (MS) under high vacuum (HV) conditions. This new matrix extends the compounds that can successfully be employed as matrixes with LSI, as demonstrated on an LTQ Velos (Thermo) at AP, a matrix-assisted laser desorption/ionization (MALDI)-ion mobility spectrometry (IMS) time-of-flight (TOF) SYNAPT G2 (Waters) at IP, and MALDI-TOF Ultraflex, UltrafleXtreme, and Autoflex Speed (Bruker) mass spectrometers at HV. Measurements show that stable multiply charged molecular ions of proteins are formed under all pressure conditions, indicating softer ionization than MALDI, which suffers a high degree of metastable fragmentation when multiply charged ions are produced. Important analytical advantages of this new LSI matrix are its potential for high sensitivity, equivalent to or better than AP-LSI and vacuum MALDI, and its potential for enhanced mass-selected fragmentation of the abundant highly charged protein ions. A second new LSI matrix, 4,6-dinitropyrogallol, produces abundant multiply charged ions at AP but not under HV conditions. The difference in these similar compounds' ability to produce multiply charged ions under HV conditions is believed to be related to their relative ability to evaporate from charged matrix/analyte clusters.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L that all trees in T, when restricted to X, are consistent with.
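Consistency of a rooted triplet ab|c with a candidate tree reduces to a lowest-common-ancestor test: ab|c holds exactly when the LCA of a and b lies strictly below the LCA of all three leaves. A small sketch (helper names are mine, not from the paper):

```python
def ancestors(parent, v):
    """Path from v up to the root in a tree given as child -> parent."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, a, b):
    """Lowest common ancestor: first node on b's root path that is
    also an ancestor (or self) of a."""
    anc_a = set(ancestors(parent, a))
    for v in ancestors(parent, b):
        if v in anc_a:
            return v

def triplet_consistent(parent, a, b, c):
    """True iff the rooted triplet ab|c is displayed by the tree."""
    lab = lca(parent, a, b)
    return lab != lca(parent, lab, c)

# Tree ((a,b),c) with internal node x and root r
parent = {'a': 'x', 'b': 'x', 'x': 'r', 'c': 'r'}
print(triplet_consistent(parent, 'a', 'b', 'c'))  # → True (ab|c displayed)
print(triplet_consistent(parent, 'a', 'c', 'b'))  # → False (ac|b is not)
```

The supertree algorithms cited above search for a leaf subset and topology maximizing how many input triplets (or quartets) pass this test.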
Bjornsdottir-Butler, Kristin; Jones, Jessica L; Benner, Ronald A; Burkhardt, William
2011-10-01
Quantification of histamine-producing bacteria (HPB) is necessary in order to elucidate the role that HPB play in scombrotoxin (histamine) fish poisoning. We report here the evaluation of a real-time PCR method for the quantification of total and specific Gram-negative HPB species in fish using a most probable number (MPN) format. The species-specific real-time PCR assay was 100% inclusive for independently detecting Morganella morganii, Enterobacter aerogenes, Raoultella planticola/ornithinolytica and Photobacterium damselae and did not cross react with other histamine- or non-histamine-producing bacteria. The efficiency of the reactions in the absence and presence of Spanish mackerel enrichment containing 1 × 10(6) CFU/ml of background microflora were 93-104 and 92-99%, respectively. The MPN-real-time PCR assay accurately quantified total and specific HPB in spiked mahi-mahi (Coryphaena hippurus) and Spanish mackerel (Scomberomorus maculatus) samples. These methods were used to quantify total and specific HPB in naturally contaminated, decomposing mahi-mahi, Spanish mackerel and tuna (Thunnus albacares) samples. The results of this study indicate that MPN-real-time PCR assays can be used to accurately enumerate total and specific HPB in fish samples. These assays can be applied to assess the effectiveness of mitigation strategies and understand the relationship between HPB and histamine production in decomposing fish.
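The MPN statistic behind the MPN-real-time PCR format is a maximum-likelihood estimate computed from the pattern of positive reactions across a dilution series; a generic solver sketch (not the authors' implementation, and standard MPN tables add confidence intervals this omits):

```python
import math

def mpn_ml(volumes, positives, tubes, lo=1e-6, hi=10.0):
    """Maximum-likelihood most-probable-number estimate (organisms per
    unit volume). volumes[i] is the sample volume per tube at dilution
    i, positives[i] the positive tubes, tubes[i] the total tubes."""
    def score(lam):
        # Derivative of the log-likelihood; it is positive below the
        # ML estimate and negative above it, so bisection applies.
        s = 0.0
        for v, g, n in zip(volumes, positives, tubes):
            if g:
                s += g * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - g) * v
        return s
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Classic 3-tube series (10/1/0.1 ml), pattern 3-1-0 → ≈0.43 per ml
print(round(mpn_ml([10, 1, 0.1], [3, 1, 0], [3, 3, 3]), 3))
```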
Murphy, Helen R; Lee, Seulgi; da Silva, Alexandre J
2017-07-01
Cyclospora cayetanensis is a protozoan parasite that causes human diarrheal disease associated with the consumption of fresh produce or water contaminated with C. cayetanensis oocysts. In the United States, foodborne outbreaks of cyclosporiasis have been linked to various types of imported fresh produce, including cilantro and raspberries. An improved method was developed for identification of C. cayetanensis in produce at the U.S. Food and Drug Administration. The method relies on a 0.1% Alconox produce wash solution for efficient recovery of oocysts, a commercial kit for DNA template preparation, and an optimized TaqMan real-time PCR assay with an internal amplification control for molecular detection of the parasite. A single laboratory validation study was performed to assess the method's performance and compare the optimized TaqMan real-time PCR assay and a reference nested PCR assay by examining 128 samples. The samples consisted of 25 g of cilantro or 50 g of raspberries seeded with 0, 5, 10, or 200 C. cayetanensis oocysts. Detection rates for cilantro seeded with 5 and 10 oocysts were 50.0 and 87.5%, respectively, with the real-time PCR assay and 43.7 and 94.8%, respectively, with the nested PCR assay. Detection rates for raspberries seeded with 5 and 10 oocysts were 25.0 and 75.0%, respectively, with the real-time PCR assay and 18.8 and 68.8%, respectively, with the nested PCR assay. All unseeded samples were negative, and all samples seeded with 200 oocysts were positive. Detection rates using the two PCR methods were statistically similar, but the real-time PCR assay is less laborious and less prone to amplicon contamination and allows monitoring of amplification and analysis of results, making it more attractive to diagnostic testing laboratories. The improved sample preparation steps and the TaqMan real-time PCR assay provide a robust, streamlined, and rapid analytical procedure for surveillance, outbreak response, and regulatory testing of foods for
Zhang, Guodong; Brown, Eric W.; González-Escalona, Narjol
2011-01-01
Contamination of foods, especially produce, with Salmonella spp. is a major concern for public health. Several methods are available for the detection of Salmonella in produce, but their relative efficiency for detecting Salmonella in commonly consumed vegetables, often associated with outbreaks of food poisoning, needs to be confirmed. In this study, the effectiveness of three molecular methods for detection of Salmonella in six produce matrices was evaluated and compared to the FDA microbiological detection method. Samples of cilantro (coriander leaves), lettuce, parsley, spinach, tomato, and jalapeno pepper were inoculated with Salmonella serovars at two different levels (up to 10^5 CFU) and analyzed by the conventional culture method (FDA Bacteriological Analytical Manual) and by three molecular methods: quantitative real-time PCR (qPCR), quantitative reverse transcriptase real-time PCR (RT-qPCR), and loop-mediated isothermal amplification (LAMP). Comparable results were obtained by these four methods, which all detected as little as 2 CFU of Salmonella cells/25 g of produce. All control samples (not inoculated) were negative by the four methods. RT-qPCR detects only live Salmonella cells, obviating the danger of false-positive results from nonviable cells. False negatives (inhibition of either qPCR or RT-qPCR) were avoided by the use of either a DNA or an RNA amplification internal control (IAC). Compared to the conventional culture method, the qPCR, RT-qPCR, and LAMP assays allowed faster and equally accurate detection of Salmonella spp. in six high-risk produce commodities. PMID:21803916
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Komprda, T; Sládková, P; Dohnal, V
2009-11-01
Sixteen types of dry fermented sausages were commercially produced as combinations of two producers (designated K and R), two starter cultures (Pediococcus pentosaceus, C; Lactobacillus curvatus+Staphylococcus carnosus, F), two spicing mixtures (H; P) and two casing diameters (4.5cm, T; 7cm, W), and were sampled at days zero, 14, 28 (end of ripening), 49, 70, 91 and 112 (samples were stored at 15°C and relative humidity of 70% between days 28 and 112). Tyramine and putrescine content (Y, mgkg(-1)) increased (Psausages as compared to the W, H and F counterparts, respectively; content of both amines was lower (Psausages than in the R-sausages. Tyramine content in the sausages at the time interval 28days of ripening+21days of storage was in the range from 170 (KHCU sausage combination) to 382 (RHFS) mgkg(-1).
Habermehl M. A.
2012-04-01
Groundwater contains dissolved He, and its concentration increases with the residence time of the groundwater. Thus, if the 4He accumulation rate is constant, the dissolved 4He concentration in groundwater is equivalent to the residence time. Since accumulation mechanisms are not easily separated in the field, we estimate the total He accumulation rate during the half-life of 36Cl (3.01 × 10^5 years). We estimated the 4He accumulation rate, calibrated using both cosmogenic and subsurface-produced 36Cl, in the Great Artesian Basin (GAB), Australia, and the subsurface-produced 36Cl increase at the Äspö Hard Rock Laboratory, Sweden. 4He accumulation rates range from (1.9±0.3) × 10^-11 to (15±6) × 10^-11 ccSTP·cm^-3·y^-1 in the GAB and (1.8±0.7) × 10^-8 ccSTP·cm^-3·y^-1 at Äspö. We confirmed a groundwater flow with a residence time of 0.7-1.06 Ma in the GAB and stagnant groundwater with the long residence time of 4.5 Ma at Äspö. Therefore, the groundwater residence time can be deduced from the dissolved 4He concentration and the 4He accumulation rate calibrated by 36Cl, provided that 4He accumulation, groundwater flow, and other geo-environmental conditions have remained unchanged for the required amount of geological time.
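Once the accumulation rate is calibrated, the residence-time inference is a one-line division; a sketch using the paper's lower GAB rate together with an illustrative dissolved-He concentration (not a value from the paper):

```python
def residence_time_years(he4_conc, he4_rate):
    """Groundwater residence time from dissolved 4He, assuming a
    constant accumulation rate: t = concentration / rate.
    Units: ccSTP·cm^-3 and ccSTP·cm^-3·y^-1."""
    return he4_conc / he4_rate

# Hypothetical 2e-5 ccSTP/cm^3 of dissolved 4He at the calibrated
# GAB rate of 1.9e-11 ccSTP cm^-3 y^-1
print(residence_time_years(2e-5, 1.9e-11))  # ≈ 1.05e6 years
```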
Luchko, Yuri; Povstenko, Yuriy
2012-01-01
In this paper, the one-dimensional time-fractional diffusion-wave equation with the fractional derivative of order $1 \\le \\alpha \\le 2$ is revisited. This equation interpolates between the diffusion and the wave equations that behave quite differently regarding their response to a localized disturbance: whereas the diffusion equation describes a process, where a disturbance spreads infinitely fast, the propagation speed of the disturbance is a constant for the wave equation. For the time fractional diffusion-wave equation, the propagation speed of a disturbance is infinite, but its fundamental solution possesses a maximum that disperses with a finite speed. In this paper, the fundamental solution of the Cauchy problem for the time-fractional diffusion-wave equation, its maximum location, maximum value, and other important characteristics are investigated in detail. To illustrate analytical formulas, results of numerical calculations and plots are presented. Numerical algorithms and programs used to produce pl...
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Saint-Dizier, M; Legendre, A-C; Driancourt, M-A; Chastant-Maillard, S
2014-12-01
Luteolysis before the time of maternal recognition of pregnancy is one cause of low fertility in high-producing dairy cows. The objective of this study was to assess whether induction of a secondary corpus luteum (CL) late in the luteal phase would delay the time of luteolysis. Twenty high-producing Holstein cows were synchronized to ovulation (Day 0) with the Ovsynch protocol and received hCG (1500 IU im) on Day 12. Corpora lutea formation (as evaluated by ultrasonography) and plasma P4 concentrations were monitored from Days 4 to 36. hCG treatment induced the formation of one secondary CL (CL2) in 11 of 20 cows (55%) from the dominant follicle (mean diameter: 14.2 ± 0.9 mm) of two-wave (3/11) and three-wave (8/11) cycles. The maximal diameter of the CL2 (23.3 ± 1.9 mm) was reached approximately 6 days after hCG treatment and was correlated with its structural lifespan (p Day 14 (+4.5 ng/ml) and Day 18 (+3.0 ng/ml) compared with cows without CL2 (p days after that of the CL1, and the median time at which the first drop in circulating P4 levels occurred was later in cows that formed a CL2 than in those that did not (Day 26 vs Day 18; p Day 12 might reduce the risk of premature luteolysis in high-producing dairy cows after insemination.
Carla Aparecida Cielo
2012-06-01
PURPOSE: to determine and to correlate the maximum phonation times (MPT) of vowels, vital capacity (VC) and laryngeal disorders (LD) in women with benign organic lesions resulting from vocal misuse or abuse (BOL). METHOD: retrospective, transverse, exploratory, non-experimental, quantitative study, using a measurement database of MPT [a, i, u], VC and LD of women with BOL; the chi-square and Fisher's exact tests were used to investigate the differences between the variables and their relationships, and a binomial test to check the significance of the proportions in the descriptive analysis, with p < 0.05. RESULTS: the majority (22; 75.86%) showed significantly reduced MPT (p = 0.0053) and seven (24.14%) normal MPT. Normal VC was statistically significant (p = 0.0001) (26; 89.66%), but three women (10.34%) showed reduced VC. There was a significant predominance of vocal nodules (p = 0.0016) (22; 75.86%), followed by Reinke's edema (6; 20.69%) and vocal polyp (1; 3.45%). Among the 22 women (75.86%) who showed reduced MPT, there was a predominance of normal VC (19; 86.36%), although without statistical significance (p = 0.558). All the individuals with normal MPT showed normal VC (7; 100%). The majority with BOL showed normal VC, although not statistically significant (p = 0.199). There was a predominance of vocal nodules and reduced MPT (16; 72
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
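The maximum entropy principle invoked here can be stated compactly (a textbook formulation, not specific to this article): the distribution that maximizes the Shannon entropy subject to known expectation constraints is the exponential family

```latex
\[
p_i = \frac{1}{Z(\boldsymbol{\lambda})}
      \exp\!\Big(-\sum_{k}\lambda_k f_k(x_i)\Big),
\qquad
Z(\boldsymbol{\lambda}) = \sum_i \exp\!\Big(-\sum_{k}\lambda_k f_k(x_i)\Big),
\]
```

where the Lagrange multipliers $\lambda_k$ are fixed by the constraints $\sum_i p_i f_k(x_i) = F_k$. This is the "least biased" distribution consistent with the information in hand.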
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
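The final design step mentioned above can be sketched numerically: given a per-unit-length voltage limit, the minimum tape length follows directly. The rated voltage and safety factor below are hypothetical illustration values, not from the study; only the V/cm limits come from the abstract.

```python
# Sketch: estimating the YBCO tape length needed for an SFCL element from
# the maximum permissible voltage per unit length (values reported in the
# abstract for a 100 ms quench duration). The 1000 V rated voltage and the
# 1.5 safety factor are hypothetical illustration values.

MAX_PERMISSIBLE_V_PER_CM = {   # V/cm at 100 ms quench duration
    "SJTU CC": 0.72,
    "AMSC CC 12 mm": 0.52,
    "AMSC CC 4 mm": 1.20,
}

def required_length_m(rated_voltage_v, v_per_cm, safety_factor=1.5):
    """Minimum conductor length (m) so that the voltage per unit length
    stays below the maximum permissible value, with a safety factor."""
    length_cm = safety_factor * rated_voltage_v / v_per_cm
    return length_cm / 100.0

for cc, v_cm in MAX_PERMISSIBLE_V_PER_CM.items():
    print(cc, round(required_length_m(1000.0, v_cm), 1), "m")
```

A higher permissible voltage per centimetre (the 4 mm AMSC tape here) directly shortens the conductor needed for the same rated voltage.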
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
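The bound stated above (maximum seismic moment limited to injected volume times the modulus of rigidity) is easy to evaluate. In this sketch the 30 GPa shear modulus is a typical crustal value and the injected volume is an illustrative assumption; the magnitude conversion is the standard Hanks-Kanamori relation.

```python
import math

def max_seismic_moment(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Upper bound on seismic moment (N*m): M0_max = G * dV,
    per the volume-based bound described in the abstract.
    The 30 GPa shear modulus is a typical crustal value (assumption)."""
    return shear_modulus_pa * injected_volume_m3

def moment_magnitude(m0_newton_m):
    """Hanks-Kanamori moment magnitude, with M0 in N*m."""
    return (math.log10(m0_newton_m) - 9.1) / 1.5

# Illustrative example: 10,000 m^3 of injected fluid
m0 = max_seismic_moment(1.0e4)
print(m0, round(moment_magnitude(m0), 2))
```

Under these assumptions, reaching a magnitude-5 bound would require on the order of a million cubic metres of injected fluid, consistent with wastewater disposal being the operation associated with the largest induced events.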
María José Ibarrola-Rivas
2016-12-01
Work is one of the main inputs in agriculture. It can be performed by humans, animals, or machinery. Studies have shown strong differences throughout the world in labour required to produce a kilogram of food. We complement this line of research by linking these data to food consumption patterns, which are also strongly different throughout the world. We calculate the hours of farm labour required to produce a person’s annual food consumption for four scenarios. These scenarios are comprised of two extreme cases for production systems and diets, respectively, that illustrate prevailing global differences. Our results show that the farm labour requirements differ by a factor of about 200 among production systems, and by a factor of about two among consumption patterns. The gain in farm labour efficiency with mechanization is enormous: only 2–5 hours of farm labour are needed to produce the food consumed by a person in a year. This value is much lower than the time an average person spends on buying food, cooking, or eating.
Schury, P.; Wada, M.; Ito, Y.; Kaji, D.; Arai, F.; MacCormick, M.; Murray, I.; Haba, H.; Jeong, S.; Kimura, S.; Koura, H.; Miyatake, H.; Morimoto, K.; Morita, K.; Ozawa, A.; Rosenbusch, M.; Reponen, M.; Söderström, P.-A.; Takamine, A.; Tanaka, T.; Wollnik, H.
2017-01-01
Using a multireflection time-of-flight mass spectrograph located after a gas cell coupled with the gas-filled recoil ion separator GARIS-II, the masses of several α-decaying heavy nuclei were directly and precisely measured. The nuclei were produced via fusion-evaporation reactions and separated from projectilelike and targetlike particles using GARIS-II before being stopped in a helium-filled gas cell. Time-of-flight spectra for three isobar chains, 204Fr-204Rn-204At-204Po, 205Fr-205Rn-205At-205Po-205Bi, and 206Fr-206Rn-206At, were observed. Precision atomic mass values were determined for 204-206Fr, 204,205Rn, and 204,205At. Identifications of 205Bi, 204,205Po, 206Rn, and 206At were made with N ≲ 10 detected ions, representing the next step toward use of mass spectrometry to identify exceedingly low-yield species such as superheavy element ions.
Arias, Cesar A; Singh, Kavindra V; Panesso, Diana; Murray, Barbara E
2007-06-01
Ceftobiprole (BAL9141) is an investigational cephalosporin with broad in vitro activity against gram-positive cocci, including enterococci. Ceftobiprole MICs were determined for 93 isolates of Enterococcus faecalis (including 16 beta-lactamase [Bla] producers and 17 vancomycin-resistant isolates) by an agar dilution method following the Clinical and Laboratory Standards Institute recommendations. Ceftobiprole MICs were also determined with a high inoculum concentration (10(7) CFU/ml) for a subset of five Bla producers belonging to different previously characterized clones by a broth dilution method. Time-kill and synergism studies (with either streptomycin or gentamicin) were performed with two beta-lactamase-producing isolates (TX0630 and TX5070) and two vancomycin-resistant isolates (TX2484 [VanB] and TX2784 [VanA]). The MICs of ceftobiprole for 50 and 90% of the isolates tested were 0.25 and 1 microg/ml, respectively. All Bla producers and vancomycin-resistant isolates were inhibited by concentrations of Ceftobiprole MICs at a high inoculum concentration for a subset of five Bla(+) E. faecalis isolates were ceftobiprole (0.5 microg/ml) and streptomycin (25 microg/ml) was synergistic against Bla(+) TX0630 and TX5070. Ceftobiprole (0.5 microg/ml) plus gentamicin (10 microg/ml) was synergistic against VanB isolate TX2484 and showed enhanced killing, but not synergism, against TX2784 (VanA), despite the absence of high-level resistance to gentamicin. In conclusion, ceftobiprole exhibited good in vitro activity against E. faecalis, including Bla(+) and vancomycin-resistant strains, and exhibited synergism with aminoglycosides against selected isolates.
Li, Yang, E-mail: metalytu@163.com [Department of Materials Science and Engineering, Yantai University, Qingquan Road 32, Yantai 264005 (China); Wang, Zhuo [Department of Materials Science and Engineering, Yantai University, Qingquan Road 32, Yantai 264005 (China); Wang, Liang [Department of Materials Science and Engineering, Dalian Maritime University, Linghai Road 1, Dalian 116026 (China)
2014-04-01
Graphical abstract: - Highlights: • An 8 μm nitrided layer was produced on the surface of AISI 316L stainless steel by plasma nitriding at high temperature (540 °C) within 1 h. • The nitrided layer consisted of nitrogen-expanded austenite and possibly a small amount of free CrN and iron nitrides. • It can critically reduce processing time compared with low-temperature nitriding. • High-temperature plasma nitriding can improve the pitting corrosion resistance of the substrate in 3.5% NaCl solution. - Abstract: It has generally been believed that the formation of the S phase, or expanded austenite γN, with sufficient thickness depends on the temperature (lower than 480 °C) and duration of the process. In this work, we attempt to produce a nitrogen-expanded austenite layer at high temperature in a short time. Nitriding of AISI 316L austenitic stainless steel was carried out at high temperatures (>520 °C) for times ranging from 5 to 120 min. The microstructure, chemical composition, thickness and morphology of the nitrided layer, as well as its surface hardness, were investigated using X-ray diffraction, X-ray photoelectron spectroscopy, optical microscopy, scanning electron microscopy, and a microhardness tester. The corrosion properties of the untreated and nitrided samples were evaluated using anodic polarization tests in 3.5% NaCl solution. The results confirmed that the nitrided layer consists of γN and a small amount of free CrN and iron nitrides. High-temperature plasma nitriding not only increased the surface hardness but also improved the corrosion resistance of the austenitic stainless steel, and it can critically reduce processing time compared with low-temperature nitriding.
Ambrose, J D; Drost, M; Monson, R L; Rutledge, J J; Leibfried-Rutledge, M L; Thatcher, M J; Kassa, T; Binelli, M; Hansen, P J; Chenoweth, P J; Thatcher, W W
1999-11-01
Our objective was to determine whether pregnancy rates in heat-stressed dairy cattle could be enhanced by timed embryo transfer of fresh (nonfrozen) or frozen-thawed in vitro-derived embryos compared to timed insemination. Ovulation in Holstein cows was synchronized by a GnRH injection followed 7 d later by PGF2 alpha and a second treatment with GnRH 48 h later. Control cows (n = 129) were inseminated 16 h (d 0) after the second GnRH injection. On d 7, a fresh (n = 133) or frozen-thawed (n = 142) in vitro-derived embryo was transferred to cows assigned for timed embryo transfer after categorizing the corpus luteum by palpation per rectum as 3 (excellent), 2 (good or fair), 1 (poor), and 0 (nonpalpable). Response to the synchronization treatment, determined by a plasma progesterone concentration > or = 2.0 ng/ml on d 7, was 76.2%. Mean plasma progesterone concentration on d 7 increased as the quality of the corpus luteum improved from category 0 to 3. Concentrations of progesterone in plasma were elevated (> or = 2.0 ng/ml) at 21 d in 64.7 (fresh embryo), 40.3 (frozen embryo), and 41.4 +/- 0.1% (timed insemination) of cows, respectively. Cows that received a fresh embryo had a greater pregnancy rate at 45 to 52 d than did cows that received a frozen-thawed embryo or timed insemination (14.3 > 4.8, 4.9 +/- 2.3%). Body condition (d 0) of cows influenced the pregnancy rate and plasma progesterone concentrations. In summary, timed embryo transfer with fresh in vitro-produced embryos in heat-stressed dairy cattle improved pregnancy rate relative to timed insemination.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Al-Katanani, Y M; Drost, M; Monson, R L; Rutledge, J J; Krininger, C E; Block, J; Thatcher, W W; Hanse, P J
2002-07-01
Timed embryo transfer (TET) using in vitro produced (IVP) embryos without estrus detection can be used to reduce adverse effects of heat stress on fertility. One limitation is the poor survival of IVP embryos after cryopreservation. Objectives of this study were to confirm beneficial effects of TET on pregnancy rate during heat stress as compared to timed artificial insemination (TAI), and to determine if cryopreservation by vitrification could improve survival of IVP embryos transferred to dairy cattle under heat stress conditions. For vitrified embryos (TET-V), a three-step pre-equilibration procedure was used to vitrify excellent and good quality Day 7 IVP Holstein blastocysts. For fresh IVP embryos (TET-F), Holstein oocytes were matured and fertilized; resultant embryos were cultured in modified KSOM for 7 days using the same method as for production of vitrified embryos. Excellent and good quality blastocysts on Day 7 were transported to the cooperating dairy in a portable incubator. Nonpregnant, lactating Holsteins (n = 155) were treated with GnRH (100 microg, i.m., Day 0), followed 7 days later by prostaglandin F2alpha (PGF2alpha, 25 mg, i.m.) and GnRH (100 microg) on Day 9. Cows in the TAI treatment (n = 68) were inseminated the next day (Day 10) with semen from a single bull that also was used to produce embryos. Cows in the other treatments (n = 33 for TET-F; n = 54 for TET-V) received an embryo on Day 17 (i.e. Day 7 after anticipated ovulation and Day 8 after second GnRH treatment). The proportion of cows that responded to synchronization based on plasma progesterone concentrations on Day 10 and Day 17 was 67.7%. Pregnancy rate for all cows on Day 45 was higher (P cows responding to synchronization, pregnancy rate was also higher (P cows producing more milk had lower (P cows producing less milk. In conclusion, ET of fresh IVP embryos can improve pregnancy rate under heat stress conditions, but pregnancy rate following transfer of vitrified embryos was no
Rodríguez-Rodríguez, Laura; Jiménez-Sánchez, Montserrat; Domínguez-Cuesta, María José; Rinterknecht, Vincent; Pallàs, Raimon; Aumaître, Georges; Bourlès, Didier L.; Keddadouche, Karim; Aster Team
2017-09-01
The Last Glacial Termination led to major changes in ice sheet coverage that disrupted global patterns of atmosphere and ocean circulation. Paleoclimate records from Iberia suggest that westerly episodes played a key role in driving heterogeneous climate in the North Atlantic Region. We used 10Be Cosmic Ray Exposure (CRE) dating to explore the response of small mountain glaciers (ca. 5 km2) that developed on the northern slope of the Cantabrian Mountains (Iberian Peninsula), an area directly under the influence of the Atlantic westerly winds. We analyzed twenty boulders from three moraines and one rock glacier arranged as a recessional sequence preserved between 1150 and 1540 m above sea level (a.s.l.) in the Monasterio valley (Redes Natural Park). The results complement previous chronologic data based on radiocarbon and optically stimulated luminescence from the Monasterio valley, which suggest a local Glacial Maximum (local GM) prior to 33 ka BP and a long-standing glacier advance at 24 ka, coeval with the global Last Glacial Maximum (LGM). The resultant 10Be CRE ages suggest a progressive retreat and thinning of the Monasterio glacier over the time interval 18.1-16.7 ka. This response is coeval with Heinrich Stadial 1, an extremely cold and dry climate episode initiated by a weakening of the Atlantic Meridional Overturning Circulation (AMOC). Glacier recession continued through the Bølling/Allerød period, as indicated by the minimum exposure ages obtained from a cirque moraine and a rock glacier nested within this moraine, which yielded ages of 14.0 and 13.0 ka, respectively. Together, they suggest that the Monasterio glacier experienced a gradual transition from glacier to rock glacier activity as the AMOC started to strengthen again. Glacial evidence ascribable to the Younger Dryas cooling was not dated in the Monasterio valley, but might have occurred at higher elevations than the evidence dated in this work. The evolution of former glaciers documented in the
Elhussain, O. A.; Abdel-Magid, T. I. M.
2016-08-01
A mono-crystalline solar cell module was tested experimentally in Khartoum, Sudan, to study the difference between the maximum empirical peak-Watt rating and the maximum power produced in the field under highly favourable solar conditions. Field measurements were recorded for incident solar radiation, produced voltage, current and temperature at several time intervals during the sunshine period. The thermal power of the system was calculated using fundamental principles of heat transfer. The study shows that the solar power of the considered module could not attain the empirical peak power, irrespective of the maximum value of direct incident solar radiation and the maximum temperature gained. A loss of about 6% of power can be considered as the difference between the field measurements and the manufacturer's indicated empirical value. The solar cell exhibits 94% efficiency in comparison with the manufacturer's provided data, and is 3% more efficient in thermal energy production than in electrical power extraction for hot-dry climate conditions.
Yuhan Rao
2015-06-01
Due to technical limitations, it is impossible to have high resolution in both spatial and temporal dimensions in current NDVI datasets. Therefore, several methods have been developed to produce high-resolution (spatial and temporal) NDVI time-series datasets, which face limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently blending MODIS NDVI time-series data and multi-temporal Landsat TM/ETM+ images. This method first unmixes the NDVI temporal changes in the MODIS time-series into different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. A test over a forest site shows high accuracy (average difference: −0.0070; average absolute difference: 0.0228; and average absolute relative difference: 4.02%) and computational efficiency of NDVI-LMGM (31 seconds using a personal computer). Experiments over more complex landscapes and longer time-series demonstrated that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods (i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM) and Weighted Linear Model (WLM)) show that NDVI-LMGM is more accurate and efficient than current methods. The proposed method will benefit land surface process research, which requires dense NDVI time-series datasets with high spatial resolution.
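The unmixing step described above can be illustrated with a simplified two-class example (an assumption for illustration; the published method handles many classes and the full MODIS time series). Each coarse pixel's NDVI temporal change is modeled as a fraction-weighted sum of per-class changes, and with known class fractions the per-class changes are recovered by least squares.

```python
# Simplified sketch of linear unmixing of coarse-pixel NDVI temporal
# changes into two land cover classes, solved via the 2x2 normal
# equations of ordinary least squares.

def unmix_two_classes(fractions, coarse_changes):
    """Recover (dNDVI_class1, dNDVI_class2) from coarse-pixel NDVI changes.
    fractions: list of (f1, f2) class fractions per coarse pixel."""
    a11 = sum(f1 * f1 for f1, _ in fractions)
    a12 = sum(f1 * f2 for f1, f2 in fractions)
    a22 = sum(f2 * f2 for _, f2 in fractions)
    b1 = sum(f1 * y for (f1, _), y in zip(fractions, coarse_changes))
    b2 = sum(f2 * y for (_, f2), y in zip(fractions, coarse_changes))
    det = a11 * a22 - a12 * a12
    d1 = (b1 * a22 - b2 * a12) / det
    d2 = (a11 * b2 - a12 * b1) / det
    return d1, d2

# Synthetic check: true per-class NDVI changes of 0.10 and -0.05
fracs = [(0.8, 0.2), (0.5, 0.5), (0.3, 0.7)]
obs = [f1 * 0.10 + f2 * -0.05 for f1, f2 in fracs]
print(unmix_two_classes(fracs, obs))
```

The recovered per-class changes can then be applied to a fine-resolution classification to predict a Landsat-like NDVI image, which is the core idea of the blending step.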
Rucci, A.; Vasco, D.W.; Novali, F.
2010-04-01
Deformation in the overburden proves useful in deducing spatial and temporal changes in the volume of a producing reservoir. Based upon these changes we estimate diffusive travel times associated with the transient flow due to production, and then, as the solution of a linear inverse problem, the effective permeability of the reservoir. An advantage of an approach based upon travel times, as opposed to one based upon the amplitude of surface deformation, is that it is much less sensitive to the exact geomechanical properties of the reservoir and overburden. Inequalities constrain the inversion, under the assumption that fluid production only results in pore volume decreases within the reservoir. We apply the formulation to satellite-based estimates of deformation in the material overlying a thin gas production zone at the Krechba field in Algeria. The peak displacement after three years of gas production is approximately 0.5 cm, overlying the eastern margin of the anticlinal structure defining the gas field. Using data from 15 irregularly spaced images of range change, we calculate the diffusive travel times associated with the startup of a gas production well. The inequality constraints are incorporated into the estimates of model parameter resolution and covariance, improving the resolution by roughly 30 to 40%.
METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS
DRIŞCU Mariana
2014-05-01
With classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic executions using manual means, which consume a lot of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies, with the most unpleasant consequences for the footwear producer. Thus, a customer who buys a footwear product based on the characteristics written on the product (size, width) may notice after a period that the product has flaws because of inadequate design. In order to avoid such situations, the strictest scientific criteria must be followed when designing a footwear product. The decisive step in this direction was made some time ago, as a result of powerful technical development and the massive implementation of electronic computing systems and informatics. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the most economic arrangement of the reference points. For this purpose, the user must test a few arrangement variants, in the translation and the rotation-translation system. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After testing several variants of arrangement in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.
Maximum hydrogen production from genetically modified microalgae biomass
Vargas, Jose; Kava, Vanessa; Ordonez, Juan
A transient mathematical model for managing microalgae-derived H2 production as a source of renewable energy is developed for a well-stirred photobioreactor, PBR. The model allows for the determination of the microalgae and H2 mass fractions produced by the PBR over time. A Michaelis-Menten expression is proposed for modeling the rate of H2 production, which introduces an expression to calculate the resulting effect on the H2 production rate after genetically modifying the microalgae. The indirect biophotolysis process was used. Therefore, an opportunity was found to optimize the aerobic-to-anaerobic stage time ratio of the cycle for maximum H2 production rate, i.e., the process rhythm. A thermodynamic optimization of the system is conducted with the model equations to find accurately the optimal operating rhythm for maximum H2 production rate, and how wild and genetically modified species compare to each other. The maxima found are sharp, showing up to a ~60% variation in hydrogen production rate within 2 days around the optimal rhythm, which highlights the importance of operating the system at that condition. Therefore, the model is expected to be useful for design, control and optimization of H2 production. Brazilian National Council of Scientific and Technological Development, CNPq (project 482336/2012-9).
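The Michaelis-Menten form mentioned above can be sketched in a few lines. The v_max and K_m values below are hypothetical illustration values, not parameters from the study.

```python
# Minimal sketch of a Michaelis-Menten rate law, the form proposed in the
# abstract for the H2 production rate. v_max and k_m here are hypothetical
# illustration values, not fitted parameters from the model.

def h2_production_rate(substrate, v_max=2.0, k_m=0.5):
    """Michaelis-Menten rate: v = v_max * S / (K_m + S)."""
    return v_max * substrate / (k_m + substrate)

# The rate saturates toward v_max as the substrate concentration grows
print(h2_production_rate(0.5), h2_production_rate(50.0))
```

A genetic modification could be represented in this framework as a change in v_max or K_m, which is how such an expression lets wild and modified strains be compared within one model.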
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
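The reported prediction equation can be applied directly; this is a minimal sketch, with the function name and the sample numbers chosen for illustration only.

```python
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predicted maximum biomass from the reported relation
    Xmax - X0 = (0.59 +/- 0.02) * Y_X/P * C, where Y_X/P is the biomass
    yield per unit of lactate produced and C is the MIC of lactate."""
    return x0 + k * y_xp * mic_lactate
```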
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: Any 2-connected graph has a circuit double cover.Conversely, it is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound of the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem is true to planar Halin graph.
Adare, A; Aidala, C; Ajitanand, N N; Akiba, Y; Akimoto, R; Al-Bataineh, H; Al-Ta'ani, H; Alexander, J; Alfred, M; Angerami, A; Aoki, K; Apadula, N; Aphecetche, L; Aramaki, Y; Armendariz, R; Aronson, S H; Asai, J; Asano, H; Aschenauer, E C; Atomssa, E T; Averbeck, R; Awes, T C; Azmoun, B; Babintsev, V; Bai, M; Baksay, G; Baksay, L; Baldisseri, A; Bandara, N S; Bannier, B; Barish, K N; Barnes, P D; Bassalleck, B; Basye, A T; Bathe, S; Batsouli, S; Baublis, V; Baumann, C; Baumgart, S; Bazilevsky, A; Beaumier, M; Beckman, S; Belikov, S; Belmont, R; Bennett, R; Berdnikov, A; Berdnikov, Y; Bickley, A A; Bing, X; Black, D; Blau, D S; Boissevain, J G; Bok, J S; Borel, H; Boyle, K; Brooks, M L; Bryslawskyj, J; Buesching, H; Bumazhnov, V; Bunce, G; Butsyk, S; Camacho, C M; Campbell, S; Castera, P; Chang, B S; Charvet, J -L; Chen, C -H; Chernichenko, S; Chi, C Y; Chiba, J; Chiu, M; Choi, I J; Choi, J B; Choi, S; Choudhury, R K; Christiansen, P; Chujo, T; Chung, P; Churyn, A; Chvala, O; Cianciolo, V; Citron, Z; Cleven, C R; Cole, B A; Comets, M P; Connors, M; Constantin, P; Csanád, M; Csörgő, T; Dahms, T; Dairaku, S; Danchev, I; Das, K; Datta, A; Daugherity, M S; David, G; Deaton, M B; DeBlasio, K; Dehmelt, K; Delagrange, H; Denisov, A; d'Enterria, D; Deshpande, A; Desmond, E J; Dharmawardane, K V; Dietzsch, O; Ding, L; Dion, A; Do, J H; Donadelli, M; Drapier, O; Drees, A; Drees, K A; Dubey, A K; Durham, J M; Durum, A; Dutta, D; Dzhordzhadze, V; D'Orazio, L; Edwards, S; Efremenko, Y V; Egdemir, J; Ellinghaus, F; Emam, W S; Engelmore, T; Enokizono, A; En'yo, H; Esumi, S; Eyser, K O; Fadem, B; Feege, N; Fields, D E; Finger, M; Jr., \\,; Fleuret, F; Fokin, S L; Fraenkel, Z; Frantz, J E; Franz, A; Frawley, A D; Fujiwara, K; Fukao, Y; Fusayasu, T; Gadrat, S; Gainey, K; Gal, C; Gallus, P; Garg, P; Garishvili, A; Garishvili, I; Ge, H; Giordano, F; Glenn, A; Gong, H; Gong, X; Gonin, M; Gosset, J; Goto, Y; de Cassagnac, R Granier; Grau, N; Greene, S V; Perdekamp, M Grosse; Gu, Y; 
Gunji, T; Guo, L; Guragain, H; Gustafsson, H -Å; Hachiya, T; Henni, A Hadj; Haegemann, C; Haggerty, J S; Hahn, K I; Hamagaki, H; Hamblen, J; Han, R; Han, S Y; Hanks, J; Harada, H; Hartouni, E P; Haruna, K; Hasegawa, S; Hashimoto, K; Haslum, E; Hayano, R; He, X; Heffner, M; Hemmick, T K; Hester, T; Hiejima, H; Hill, J C; Hobbs, R; Hohlmann, M; Hollis, R S; Holzmann, W; Homma, K; Hong, B; Horaguchi, T; Hori, Y; Hornback, D; Hoshino, T; Huang, J; Huang, S; Ichihara, T; Ichimiya, R; Ide, J; Iinuma, H; Ikeda, Y; Imai, K; Imazu, Y; Imrek, J; Inaba, M; Inoue, Y; Iordanova, A; Isenhower, D; Isenhower, L; Ishihara, M; Isobe, T; Issah, M; Isupov, A; Ivanischev, D; Ivanishchev, D; Jacak, B V; Javani, M; Jeon, S J; Jezghani, M; Jia, J; Jiang, X; Jin, J; Jinnouchi, O; Johnson, B M; Joo, E; Joo, K S; Jouan, D; Jumper, D S; Kajihara, F; Kametani, S; Kamihara, N; Kamin, J; Kaneta, M; Kaneti, S; Kang, B H; Kang, J H; Kang, J S; Kanou, H; Kapustinsky, J; Karatsu, K; Kasai, M; Kawall, D; Kawashima, M; Kazantsev, A V; Kempel, T; Key, J A; Khachatryan, V; Khanzadeev, A; Kihara, K; Kijima, K M; Kikuchi, J; Kim, B I; Kim, C; Kim, D H; Kim, D J; Kim, E; Kim, E -J; Kim, H -J; Kim, H J; Kim, K -B; Kim, M; Kim, S H; Kim, Y -J; Kim, Y K; Kinney, E; Kiriluk, K; Kiss, Á; Kistenev, E; Kiyomichi, A; Klatsky, J; Klay, J; Klein-Boesing, C; Kleinjan, D; Kline, P; Koblesky, T; Kochenda, L; Kochetkov, V; Kofarago, M; Komatsu, Y; Komkov, B; Konno, M; Koster, J; Kotchetkov, D; Kotov, D; Kozlov, A; Král, A; Kravitz, A; Krizek, F; Kubart, J; Kunde, G J; Kurihara, N; Kurita, K; Kurosawa, M; Kweon, M J; Kwon, Y; Kyle, G S; Lacey, R; Lai, Y S; Lajoie, J G; Lebedev, A; Lee, B; Lee, D M; Lee, J; Lee, K; Lee, K B; Lee, K S; Lee, M K; Lee, S H; Lee, S R; Lee, T; Leitch, M J; Leite, M A L; Leitgab, M; Leitner, E; Lenzi, B; Lewis, B; Li, X; Liebing, P; Lim, S H; Levy, L A Linden; Liška, T; Litvinenko, A; Liu, H; Liu, M X; Love, B; Luechtenborg, R; Lynch, D; Maguire, C F; Makdisi, Y I; Makek, M; Malakhov, A; Malik, 
M D; Manion, A; Manko, V I; Mannel, E; Mao, Y; Mašek, L; Masui, H; Masumoto, S; Matathias, F; McCumber, M; McGaughey, P L; McGlinchey, D; McKinney, C; Means, N; Meles, A; Mendoza, M; Meredith, B; Miake, Y; Mibe, T; Mignerey, A C; Mikeš, P; Miki, K; Miller, A J; Miller, T E; Milov, A; Mioduszewski, S; Mishra, D K; Mishra, M; Mitchell, J T; Mitrovski, M; Miyachi, Y; Miyasaka, S; Mizuno, S; Mohanty, A K; Montuenga, P; Moon, H J; Moon, T; Morino, Y; Morreale, A; Morrison, D P; Motschwiller, S; Moukhanova, T V; Mukhopadhyay, D; Murakami, T; Murata, J; Mwai, A; Nagae, T; Nagamiya, S; Nagata, Y; Nagle, J L; Naglis, M; Nagy, M I; Nakagawa, I; Nakagomi, H; Nakamiya, Y; Nakamura, K R; Nakamura, T; Nakano, K; Nattrass, C; Nederlof, A; Netrakanti, P K; Newby, J; Nguyen, M; Nihashi, M; Niida, T; Norman, B E; Nouicer, R; Novitzky, N; Nyanin, A S; O'Brien, E; Oda, S X; Ogilvie, C A; Ohnishi, H; Oka, M; Okada, K
2014-01-01
Two-pion interferometry measurements are used to extract the Gaussian radii $R_{{\rm out}}$, $R_{{\rm side}}$, and $R_{{\rm long}}$ of the pion emission sources produced in Cu$+$Cu and Au$+$Au collisions at several beam collision energies $\sqrt{s_{_{NN}}}$ at PHENIX. The extracted radii, which are compared to recent STAR and ALICE data, show characteristic scaling patterns as a function of the initial transverse size $\bar{R}$ of the collision systems and the transverse mass $m_T$ of the emitted pion pairs, consistent with hydrodynamic-like expansion. Specific combinations of the three-dimensional radii that are sensitive to the medium expansion velocity and lifetime, and the pion emission time duration, show nonmonotonic $\sqrt{s_{_{NN}}}$ dependencies. The nonmonotonic behaviors exhibited by these quantities point to a softening of the equation of state that may coincide with the critical end point in the phase diagram for nuclear matter.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
F. Yasmeen
2009-10-01
Full Text Available Aqueous-phase oligomer formation from methylglyoxal, a major atmospheric photooxidation product, has been investigated in a simulated cloud matrix under dark conditions. The aim of this study was to explore an additional path producing secondary organic aerosol (SOA) through cloud processes without photochemistry during night-time. Indeed, atmospheric models still underestimate SOA formation, as field measurements have revealed more SOA than predicted. Soluble oligomers (n=1–8) formed in the course of acid-catalyzed aldol condensation and acid-catalyzed hydration followed by acetal formation have been detected and characterized by positive and negative ion electrospray ionization mass spectrometry. Aldol condensation proved to be a favorable mechanism under simulated cloud conditions, while hydration/acetal formation was found to strongly depend on the pH of the system. The aldol oligomer series starts with a β-hydroxy ketone formed via aldol condensation, where oligomers are formed by multiple additions of C_{3}H_{4}O_{2} units (72 Da) to the parent β-hydroxy ketone. Ion trap mass spectrometry experiments were performed to structurally characterize the major oligomer species. A mechanistic pathway for the growth of oligomers under cloud conditions and in the absence of UV light and OH radicals, which could substantially enhance in-cloud SOA yields, is proposed here for the first time.
S.Bollanti; P.Di Lazzaro; F.Flora; L.Mezi; D.Murra; A.Torre
2015-01-01
A discharge-produced plasma (DPP) source emitting in the extreme ultraviolet (EUV) spectral region is running at the ENEA Frascati Research Centre. The plasma is generated in low-pressure xenon gas and efficiently emits 100-ns duration radiation pulses in the 10–20-nm wavelength range, with an energy of 20 mJ/shot/sr at a 10-Hz repetition rate. The complex discharge evolution is constantly examined and controlled with electrical measurements, while a ns-gated CCD camera allowed observation of the discharge development in the visible, detection of time-resolved plasma-column pinching, and optimization of the pre-ionization timing. Accurately calibrated Zr-filtered PIN diodes are used to monitor the temporal behaviour and energy emission of the EUV pulses, while the calibration of a dosimetric film allows quantitative imaging of the emitted radiation. This comprehensive set of plasma diagnostics has demonstrated its effectiveness in suitably adjusting the source configuration for several applications, such as exposures of photonic materials and innovative photoresists.
Ira Desri Rahmi
2013-01-01
Full Text Available PFAD has considerable potential as a biodiesel feedstock because it is cheap and always available. However, the FFA content of PFAD is very high, so an esterification process is necessary to lower the FFA level. The high FFA level of PFAD, and its solid form at room temperature, make it necessary to determine the amount of methanol required as a reactant and the esterification reaction time that give a high yield and acceptable biodiesel characteristics. Esterification of PFAD was carried out to study the effect of molar ratios of methanol to PFAD from 6:1 to 10:1 and reaction times of 60–120 min. The optimum condition for the esterification process was a methanol-to-PFAD molar ratio of 10:1 with 2 wt% H2SO4 at 60 °C for 60 min. The FFA content was reduced from 97.17 wt% to less than 10 wt% at the end of the esterification process. The biodiesel produced had an acid value of 9.36 mg KOH/g, 5.01% FFA, a saponification value of 244 mg KOH/g, an iodine value of 47.94 mg I/g, a density of 890 kg/m3, and a negative result for water and sediment content.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, and 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randi\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
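The Kirchhoff index defined above can be computed for any connected graph from its Laplacian spectrum, via the identity $Kf(G) = n \sum_i 1/\mu_i$ over the nonzero Laplacian eigenvalues $\mu_i$. A minimal NumPy sketch (not code from the paper; function name illustrative):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected graph: sum of resistance distances
    over all vertex pairs, computed as n times the sum of reciprocals of
    the nonzero Laplacian eigenvalues."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.linalg.eigvalsh(laplacian)  # sorted, first is ~0
    nonzero = eigvals[eigvals > 1e-9]
    return n * float(np.sum(1.0 / nonzero))
```

For the triangle $C_3$ every pair of vertices is at resistance distance 2/3, giving $Kf = 2$, which matches the eigenvalue computation (spectrum 0, 3, 3).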
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus ... is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
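Of the movement models listed, the Ornstein-Uhlenbeck position process is the simplest to simulate. The following exact-discretization sketch uses only the standard library; the parameter names (tau, sigma) are illustrative and not tied to the paper's notation.

```python
import math
import random

def simulate_ou(n_steps, dt, tau, sigma, x0=0.0, seed=1):
    """Sample path of an Ornstein-Uhlenbeck process with autocorrelation
    timescale tau and stationary standard deviation sigma, using the
    exact Gaussian transition density at step size dt."""
    rng = random.Random(seed)
    rho = math.exp(-dt / tau)                 # lag-dt autocorrelation
    step_sd = sigma * math.sqrt(1.0 - rho * rho)
    path = [x0]
    for _ in range(n_steps):
        path.append(rho * path[-1] + rng.gauss(0.0, step_sd))
    return path
```

Because the transition density is exact, the discretization is stable for any step size, unlike a naive Euler scheme.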
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
Molenaar, P.C.M.; Nesselroade, J.R.
1998-01-01
The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM), w
Singh, Prashant; Mustapha, Azlin
2015-12-23
Shiga toxin-producing Escherichia coli (STEC) are pathogenic strains of E. coli that can cause bloody diarrhea and kidney failure. Seven STEC serogroups, O157, O26, O45, O103, O111, O121 and O145, are responsible for more than 71% of the total infections caused by this group of pathogens. All seven serogroups are currently considered as adulterants in non-intact beef products in the U.S. In this study, two multiplex melt curve real-time PCR assays with internal amplification controls (IACs) were standardized for the detection of eight STEC serogroups. The first multiplex assay targeted E. coli serogroups O145, O121, O104, and O157, while the second set detected E. coli serogroups O26, O45, O103 and O111. The applicability of the assays was tested using 11 different meat and produce samples. For food samples spiked with a cocktail of four STEC serogroups with a combined count of 10 CFU/25 g food, all targets of the multiplex assays were detected after an enrichment period of 6 h. The assays also worked efficiently when 325 g of food samples were spiked with 10 CFU of STECs. The assays are not dependent on fluorescent-labeled probes or immunomagnetic beads, and can be used for the detection of eight STEC serogroups in less than 11 h. Routine preliminary screening of STECs in food samples is performed by testing for the presence of STEC virulence genes. The assays developed in this study can be useful as a first- or second-tier test for the identification of the eight O serogroup-specific genes in suspected food samples.
Hara-Kudo, Yukiko; Konishi, Noriko; Ohtsuka, Kayoko; Iwabuchi, Kaori; Kikuchi, Rie; Isobe, Junko; Yamazaki, Takumiko; Suzuki, Fumie; Nagai, Yuhki; Yamada, Hiroko; Tanouchi, Atsuko; Mori, Tetsuya; Nakagawa, Hiroshi; Ueda, Yasufumi; Terajima, Jun
2016-08-02
To establish an efficient detection method for Shiga toxin (Stx)-producing Escherichia coli (STEC) O26, O103, O111, O121, O145, and O157 in food, an interlaboratory study using all the serogroups of detection targets was conducted for the first time. We employed a series of tests including enrichment, real-time PCR assays, and concentration by immunomagnetic separation, followed by plating onto selective agar media (IMS-plating methods). This study was particularly focused on the efficiencies of real-time PCR assays in detecting stx and O-antigen genes of the six serogroups and of IMS-plating methods onto selective agar media including chromogenic agar. Ground beef and radish sprouts samples were inoculated with the six STEC serogroups either at 4-6 CFU/25 g (low levels) or at 22-29 CFU/25 g (high levels). The sensitivity of stx detection in ground beef at both levels of inoculation with all six STEC serogroups was 100%. The sensitivity of stx detection was also 100% in radish sprouts at high levels of inoculation with all six STEC serogroups, and 66.7%-91.7% at low levels of inoculation. The sensitivity of detection of O-antigen genes was 100% in both ground beef and radish sprouts at high inoculation levels, while at low inoculation levels, it was 95.8%-100% in ground beef and 66.7%-91.7% in radish sprouts. The sensitivity of detection with IMS-plating was either the same as or lower than those of the real-time PCR assays targeting stx and O-antigen genes. The relationship between the results of IMS-plating methods and Ct values of real-time PCR assays was analyzed in detail for the first time. Ct values in most samples that tested negative in the IMS-plating method were higher than the maximum Ct values in samples that tested positive in the IMS-plating method. This study indicates that all six STEC serogroups in food contaminated with more than 29 CFU/25 g were detected by real-time PCR assays targeting stx and O-antigen genes and IMS-plating onto selective agar media. Therefore, screening
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Westhoff, M.; Erpicum, S.; Archambeau, P.; Pirotton, M.; Zehe, E.; Dewals, B.
2015-12-01
Power can be produced by a system driven by a potential difference. From a given potential difference, the power that can be extracted is constrained by the Carnot limit, which follows from the first and second laws of thermodynamics. If the system is such that the flux producing power (with power being the flux times its driving potential difference) also influences the potential difference, a maximum in power can be obtained as a result of the trade-off between the flux and the potential difference. This is referred to as the maximum power principle. It has already been shown that the atmosphere operates close to this maximum power limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state of maximum power, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells. The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts in such a way that it produces maximum power. However, the soil's hydraulic conductivity adapts differently, for example by the creation of preferential flow paths. Here, this process is simulated in a lab experiment, which focuses on preferential flow paths created by piping. In the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between the Equator and the poles, with the aim to test whether the effective hydraulic conductivity of the sand bed can be predicted with the maximum power principle. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. The results will indicate whether the maximum power principle applies to groundwater flow and how it should be applied. Because of the different way of adaptation of flow conductivity, the results differ from those of the
Maximum Velocities in Flexion and Extension Actions for Sport
Jessop David M.
2016-04-01
Full Text Available Speed of movement is fundamental to the outcome of many human actions. A variety of techniques can be implemented in order to maximise movement speed depending on the goal of the movement, constraints, and the time available. Knowing maximum movement velocities is therefore useful for developing movement strategies but also as input into muscle models. The aim of this study was to determine maximum flexion and extension velocities about the major joints in the upper and lower limbs. Seven male competitors of university to international level performed flexion/extension at each of the major joints in the upper and lower limbs under three conditions: isolated; isolated with a countermovement; and with involvement of proximal segments. 500 Hz planar high-speed video was used to calculate velocities. The highest angular velocities in the upper and lower limb were 50.0 rad·s-1 and 28.4 rad·s-1, at the wrist and knee, respectively. As was true for most joints, these were achieved with the involvement of proximal segments; however, ANOVA analysis showed few significant differences (p<0.05) between conditions. Different segment masses, structures and locations produced differing results in the upper and lower limbs, highlighting the requirement of segment-specific strategies for maximal movements.
MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM
I. Elzein
2015-01-01
Full Text Available The purpose of this paper is to present an alternative maximum power point tracking, MPPT, algorithm for a photovoltaic module, PVM, to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, different from the traditional MPPT algorithms, which are more based on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
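For the boost-converter case mentioned above, the load-matching duty ratio has a well-known closed form: an ideal boost stage presents an input resistance of R_load*(1-D)^2, so matching it to the module's maximum-power-point resistance V_mpp/I_mpp gives D directly. A minimal sketch under that idealization (names and numbers illustrative, not from the paper):

```python
import math

def boost_duty_for_mpp(v_mpp, i_mpp, r_load):
    """Duty ratio D of an ideal boost converter such that the input
    resistance seen by the PV module, r_load * (1 - D)**2, equals the
    module's maximum-power-point resistance v_mpp / i_mpp.
    Requires r_load > r_mpp, since a boost stage can only step down
    the apparent input resistance."""
    r_mpp = v_mpp / i_mpp
    if r_load <= r_mpp:
        raise ValueError("boost stage cannot match: r_load must exceed r_mpp")
    return 1.0 - math.sqrt(r_mpp / r_load)
```

In practice a tracker perturbs D around this value, since V_mpp and I_mpp drift with irradiance and temperature.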
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
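The regularizer above is the mutual information between classification responses and true labels. A minimal plug-in estimate of that quantity from paired samples can be written as follows; this is illustrative only, since the paper embeds an entropy-based estimate inside a gradient-descent learner rather than a simple counting estimator:

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Plug-in estimate of I(response; label) in nats from paired samples,
    using empirical joint and marginal frequencies."""
    n = len(responses)
    p_r = Counter(responses)
    p_l = Counter(labels)
    p_rl = Counter(zip(responses, labels))
    mi = 0.0
    for (r, l), c in p_rl.items():
        p_joint = c / n
        # p_joint * log( p_joint / (p(r) * p(l)) )
        mi += p_joint * math.log(p_joint * n * n / (p_r[r] * p_l[l]))
    return mi

# Perfectly informative responses: MI equals the label entropy, log 2 here.
mi = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```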
Taghvaei, A.H. [Department of Materials Science and Engineering, Shiraz University of Technology, Shiraz (Iran, Islamic Republic of); Ghajari, F., E-mail: fati.ghajari@gmail.com [Department of Materials Science and Engineering, Shiraz University, Shiraz (Iran, Islamic Republic of); Markó, D. [IFW Dresden, Institute for Complex Materials, Helmholtzstr. 20, 01069 Dresden (Germany); Prashanth, K.G. [IFW Dresden, Institute for Complex Materials, Helmholtzstr. 20, 01069 Dresden (Germany); Additive manufacturing Center, Sandvik AB, 81181 Sandviken (Sweden)
2015-12-01
Fe{sub 80}P{sub 11}C{sub 9} alloy with amorphous/nanocrystalline microstructure has been synthesized by mechanical alloying of the elemental powders. The microstructure, thermal behavior and morphology of the produced powders have been studied by X-ray diffraction (XRD), differential scanning calorimetry (DSC) and scanning electron microscopy (SEM), respectively. The crystallite size, lattice strain and fraction of the amorphous phase have been calculated by the Rietveld refinement method. The results indicate that the powders' microstructure consists of α-Fe(P,C) nanocrystals with an average diameter of 9 nm±1 nm dispersed in the amorphous matrix after 90 h of milling. Moreover, the fraction of amorphous phase initially increases up to 90 h of milling and then decreases after 120 h of milling, as a result of mechanical crystallization and formation of the Fe{sub 2}P phase. The magnetic measurements show that while the saturation magnetization decreases continuously with the milling time, the coercivity exhibits a complicated trend. The correlation between microstructural changes and magnetic properties has been discussed in detail. - Highlights: • Glass formation was investigated in Fe{sub 80}P{sub 11}C{sub 9} by mechanical alloying. • Structural parameters were calculated by Rietveld refinement method. • Milling first increased and then decreased the fraction of amorphous phase. • Magnetic properties were significantly changed upon milling.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
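A discrete-time experiment of the kind this abstract describes can be sketched with an explicit-Euler time discretization of the lattice Nagumo equation. This is an illustrative sketch, not the paper's construction; the threshold a = 0.3, the diffusion constant and the step sizes are hypothetical choices for which the iterates should stay in [0, 1]:

```python
def nagumo_step(u, k, dt, a=0.3):
    """One explicit-Euler step of the lattice Nagumo equation
    u_x' = k(u_{x-1} - 2 u_x + u_{x+1}) + u_x(1 - u_x)(u_x - a),
    with zero-flux (copied-neighbour) boundary conditions."""
    f = lambda v: v * (1.0 - v) * (v - a)
    n = len(u)
    new = u[:]
    for x in range(n):
        left = u[x - 1] if x > 0 else u[0]
        right = u[x + 1] if x < n - 1 else u[-1]
        new[x] = u[x] + dt * (k * (left - 2.0 * u[x] + right) + f(u[x]))
    return new

# Monotone initial data in [0, 1]; for a small enough time step the weak
# maximum principle holds and the iterates remain in [0, 1].
u = [0.0] * 10 + [1.0] * 10
for _ in range(200):
    u = nagumo_step(u, k=1.0, dt=0.1)
```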
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Mandagará, Pedro
2008-01-01
Review of: MENDES, Victor K.; ROCHA, João Cezar de Castro (Eds.). Producing Presences: branching out from Gumbrecht’s work. Dartmouth, Massachusetts: University of Massachusetts Dartmouth, 2007. (Adamastor book series, 2)
Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.
2016-07-01
The main unified energetic properties of low-dissipation heat engines and refrigerators allow for either endoreversible or irreversible configurations. This is accomplished by means of the constraints imposed on the characteristic global operation time or on the contact times between the working system and the external heat baths, modulated by the dissipation symmetries. A suitable unified figure of merit (which becomes power output for heat engines) is analyzed and the influence of the symmetries on the optimum performance discussed. The obtained results, independent of any heat transfer law, are compared with those obtained from Carnot-like heat models, where specific heat transfer laws are needed. Thus, it is shown that only the inverse phenomenological law, often used in linear irreversible thermodynamics, correctly reproduces all optimized values for both the efficiency and the coefficient of performance.
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
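For H = 1/2 the process reduces to ordinary Brownian motion, so the maximum loss over [0, t] can be estimated by simulation. A minimal sketch follows (Euler grid approximation of the drifted path; the drift, horizon and step size are hypothetical example values):

```python
import random

def maximum_loss(path):
    """Maximum loss (drawdown) of a path: sup over s <= u of X_s - X_u."""
    peak = path[0]
    loss = 0.0
    for x in path:
        peak = max(peak, x)
        loss = max(loss, peak - x)
    return loss

# Standard Brownian motion with drift mu, sampled on a grid (the H = 1/2 case).
random.seed(0)
mu, n, dt = 0.5, 1000, 1e-3
x, path = 0.0, [0.0]
for _ in range(n):
    x += mu * dt + random.gauss(0.0, dt ** 0.5)
    path.append(x)
loss = maximum_loss(path)
```

Averaging `loss` over many independent paths gives a Monte Carlo estimate to compare against analytic tail bounds.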
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of ship’s draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are of Barrass, Millward, Eryuzlu or ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
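As one example of the formulas being compared, the ICORELS bow-squat expression can be evaluated directly. The formula below is quoted in its commonly cited form, not taken from this paper, and the ship particulars are hypothetical:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def squat_icorels(displacement_m3, lpp_m, speed_ms, depth_m):
    """Maximum bow squat (m) from the commonly cited ICORELS formula
    S = 2.4 * (V_disp / Lpp^2) * Fnh^2 / sqrt(1 - Fnh^2),
    where Fnh = U / sqrt(g * h) is the depth Froude number (must be < 1)."""
    fnh = speed_ms / math.sqrt(G * depth_m)
    if fnh >= 1.0:
        raise ValueError("formula valid only for subcritical speeds (Fnh < 1)")
    return 2.4 * (displacement_m3 / lpp_m ** 2) * fnh ** 2 / math.sqrt(1.0 - fnh ** 2)

# Hypothetical cargo ship: 30000 m^3 displacement, Lpp = 150 m,
# 12 knots (1 knot = 0.5144 m/s), 12 m water depth.
s = squat_icorels(30000.0, 150.0, 12 * 0.5144, 12.0)
```

Running the other formulas (Barrass, Millward, Eryuzlu) over the same speed range and plotting the curves is how a comparison like the paper's would be assembled.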
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Brusa, Victoria; Galli, Lucía; Linares, Luciano H; Ortega, Emanuel E; Lirón, Juan P; Leotta, Gerardo A
2015-12-01
Shiga toxin-producing Escherichia coli (STEC) are recognized as food-borne pathogens. We developed and validated two SYBR green PCR (SYBR-PCR) assays and a real-time multiplex PCR (RT-PCR) assay to detect stx1 and stx2 genes in meat samples, and compared these techniques in ground beef samples from retail stores. One set of primers and one hydrolysis probe were designed for each stx gene. For RT-PCR, an internal amplification control (IAC) was used. All PCR intra-laboratory validations were performed using pure strains and artificially contaminated ground beef samples. A total of 50 STEC and 30 non-STEC strains were used. Naturally contaminated ground beef samples (n=103) were obtained from retail stores and screened with SYBR-PCR and RT-PCR, and stx-positive samples were processed for STEC isolation. In the intra-laboratory validation, each PCR obtained a 1×10(2) CFU mL(-1) limit of detection and 100% inclusivity and exclusivity. The same results were obtained when different laboratory analysts performed the assay on alternate days. The level of agreement obtained with SYBR-PCR and RT-PCR was kappa=0.758 and 0.801 (P<0.001) for stx1 and stx2 gene detection, respectively. Two PCR strategies were developed and validated, and excellent performance with artificially contaminated ground beef samples was obtained. However, the efforts made to isolate STEC from retail store samples were not sufficient. Only 11 STEC strains were isolated from 35 stx-positive ground beef samples identically detected by all PCRs. The combination of molecular approaches based on the identification of a virulence genotypic profile of STEC must be considered to improve isolation.
Rothrock, M J; Cook, K L; Lovanh, N; Warren, J G; Sistani, K
2008-06-01
Ammonia production in poultry houses has serious implications for flock health and performance, nutrient value of poultry litter, and energy costs for running poultry operations. In poultry litter, the conversion of organic N (uric acid and urea) to NH(4)-N is a microbially mediated process. The urease enzyme is responsible for the final step in the conversion of urea to NH(4)-N. Cloning and analysis of 168 urease sequences from extracted genomic DNA from poultry litter samples revealed the presence of a novel, dominant group of ureolytic microbes (representing 90% of the urease clone library). Specific primers and a probe were designed to target this novel poultry litter urease producer (PLUP) group, and a new quantitative real-time PCR assay was developed. The assay allowed for the detection of 10(2) copies of target urease sequences per PCR reaction (approximately 1 x 10(4) cells per gram of poultry litter), and the reaction was linear over 8 orders of magnitude. Our PLUP group was present only in poultry litter and was not present in environmental samples from diverse agricultural settings. This novel PLUP group represented between 0.1 to 3.1% of the total microbial populations (6.0 x 10(6) to 2.4 x 10(8) PLUP cells per gram of litter) from diverse poultry litter types. The PLUP cell concentrations were directly correlated to the total cell concentrations in the poultry litter and were found to be influenced by the physical parameters of the litters (bedding material, moisture content, pH), as well as the NH(4)-N content of the litters, based on principal component analysis. Chemical parameters (organic N, total N, total C) were not found to be influential in the concentrations of our PLUP group in the diverse poultry litters. Future applications of this assay could include determining the efficacy of current NH(4)-N-reducing litter amendments or in designing more efficient treatment protocols.
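Quantification in a real-time PCR assay like this one rests on a standard curve relating threshold cycle (Ct) to log template copies, linear over the assay's dynamic range. A minimal sketch of that standard arithmetic follows, on synthetic data rather than values from this study:

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    n = len(ct_values)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the curve to estimate starting copy number from a Ct value."""
    return 10 ** ((ct - intercept) / slope)

def efficiency(slope):
    """Amplification efficiency; a slope near -3.32 means ~100 % (doubling)."""
    return 10 ** (-1.0 / slope) - 1.0

# Ideal synthetic dilution series spanning several orders of magnitude.
logs = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
cts = [38.0 - 3.32 * x for x in logs]
slope, intercept = fit_standard_curve(logs, cts)
```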
Pina eFratamico
2012-12-01
Escherichia coli O157:H7 and certain non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups have emerged as important public health threats. The development of methods for rapid and reliable detection of this heterogeneous group of pathogens has been challenging. GeneDisc real-time PCR assays were evaluated for detection of the stx1, stx2, eae, and ehx genes and a gene that identifies the O157 serogroup, followed by a second GeneDisc assay targeting serogroup-specific genes of STEC O26, O45, O91, O103, O111, O113, O121, O145, and O157. The ability to detect the STEC serogroups in ground beef samples artificially inoculated at a level of ca. 2-20 CFU/25 g and subjected to enrichment in mTSB or BPW was similar. Following enrichment, all inoculated ground beef samples showed amplification of the correct set of target genes carried by each strain. Samples inoculated with STEC serogroups O26, O45, O103, O111, O121, O145, and O157 were subjected to immunomagnetic separation, and isolation was achieved by plating onto Rainbow agar O157. Colonies were confirmed by PCR assays targeting stx1, stx2, eae, and serogroup-specific genes. Thus, this work demonstrated that GeneDisc assays are rapid, sensitive, and reliable and can be used for screening ground beef and potentially other foods for STEC serogroups that are important food-borne pathogens worldwide.
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Pea, Federico; Della Siega, Paola; Cojutti, Piergiorgio; Sartor, Assunta; Crapis, Massimo; Scarparo, Claudio; Bassetti, Matteo
2017-02-01
The effect of real-time pharmacokinetic/pharmacodynamic (PK/PD) optimisation of high-dose continuous-infusion meropenem on the clinical outcome of patients receiving combination antimicrobial therapy for treatment of KPC-producing Klebsiella pneumoniae (KPC-Kp) infections was retrospectively assessed. Data for all patients with KPC-Kp-related infections who received antimicrobial combination therapy containing high-dose continuous-infusion meropenem optimised by means of therapeutic drug monitoring (TDM) were retrieved. Optimal PK/PD exposure was considered a steady-state concentration to minimum inhibitory concentration ratio (Css/MIC) of 1-4. Univariate binary logistic regression analysis was performed to identify independent predictors of clinical outcome. Among the 30 eligible patients, 53.3% had infections caused by meropenem-resistant KPC-Kp (MIC ≥ 16 mg/L). Tigecycline and colistin were the two antimicrobials most frequently combined with meropenem. Mean doses of continuous-infusion meropenem ranged from 1.7 to 13.2 g daily. The Css/MIC ratio was ≥1 in 73.3% of cases and ≥4 in 50.0%. Clinical outcome was successful in 73.3% of cases after a median treatment length of 14.0 days. In univariate analysis, a significant correlation with successful clinical outcome was found for a Css/MIC ratio ≥1 (OR = 10.556, 95% CI 1.612-69.122; P = 0.014), a Css/MIC ratio ≥4 (OR = 12.250, 95% CI 1.268-118.361; P = 0.030) and a Charlson co-morbidity index of ≥4 (OR = 0.158, 95% CI 0.025-0.999; P = 0.05). High-dose continuous-infusion meropenem optimised by means of real-time TDM may represent a valuable tool in improving clinical outcome when dealing with the treatment of infections caused by KPC-Kp with a meropenem MIC ≤ 64 mg/L.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
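The background-only versus background-plus-source comparison described above can be illustrated with a toy Poisson likelihood-ratio statistic. This is a simplified sketch, not the Sherpa/MLE implementation: there is no PSF convolution, the "pixels" are one-dimensional, and the source amplitude is fit by a coarse scan instead of a proper optimizer:

```python
import math

def poisson_loglike(counts, expected):
    """Poisson log-likelihood of observed counts (dropping the log-factorial
    constant, which cancels in likelihood ratios)."""
    return sum(c * math.log(m) - m for c, m in zip(counts, expected))

def detection_statistic(counts, background, source_shape):
    """Likelihood-ratio statistic 2*(LL1 - LL0) for background-only versus
    background plus a scaled source shape; amplitude fit by a 1-D scan."""
    ll0 = poisson_loglike(counts, background)
    ll1 = max(
        poisson_loglike(counts, [b + a * s for b, s in zip(background, source_shape)])
        for a in [0.1 * k for k in range(1, 500)]
    )
    return 2.0 * (ll1 - ll0)

# Pixels with a flat background of 2 counts and an excess at the centre.
counts = [2, 3, 12, 3, 2]
ts = detection_statistic(counts, [2.0] * 5, [0.0, 0.0, 1.0, 0.0, 0.0])
```

A large statistic favours the background-plus-source hypothesis, which is the sense in which the MLE tool ranks candidate detections.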
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Mojtaba Tahmoorespur
2016-04-01
Introduction: Butyrivibrio fibrisolvens strains are presently recognized as the major butyrate-producing bacteria found in the rumen and digestive tract of many animals and also in the human gut. In this study we report the development of two DNA-based techniques, quantitative competitive (QC) PCR and absolute real-time PCR, for enumerating Butyrivibrio fibrisolvens strains. Despite the recent introduction of the real-time PCR method for the rapid quantification of target DNA sequences, the quantitative competitive PCR (QC-PCR) technique continues to play an important role in nucleic acid quantification since it is more cost effective. The procedure relies on the co-amplification of the sequence of interest with a serially diluted synthetic DNA fragment of known concentration (competitor), using a single set of primers. A real-time polymerase chain reaction is a laboratory technique of molecular biology based on the polymerase chain reaction (PCR). It monitors the amplification of a targeted DNA molecule during the PCR. Materials and Methods: First, reported species-specific primers targeting the 16S rDNA region of the bacterium Butyrivibrio fibrisolvens were used for amplifying a 213 bp fragment. A DNA competitor differing by 50 bp in length from the 213 bp fragment was constructed and cloned into the pTZ57R/T vector. The competitor was quantified by NanoDrop spectrophotometer, serially diluted and co-amplified by PCR with total extracted DNA from rumen fluid samples. PCR products were quantified by photographing agarose gels, analyzed with ImageJ software, and the amount of amplified target DNA was log-plotted against the amount of amplified competitor. The coefficient of determination (R2) was used as a criterion of methodology precision. For developing the real-time PCR technique, the 213 bp fragment, amplified and cloned into pTZ57R/T, was used to construct a standard curve. Results and Discussion: The specific primers of Butyrivibrio
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
[Anonymous]
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
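The elegant linear-time solution mentioned in the abstract is usually presented as a left fold (Kadane's algorithm). A minimal sketch follows, in Python rather than the tutorial's functional-programming setting; the state pair carries the best sum seen so far and the best sum of a segment ending at the current element:

```python
from functools import reduce

def max_segment_sum(xs):
    """Linear-time maximum segment sum as a left fold (Kadane's algorithm).
    The (0, 0) seed admits the empty segment, so the result is >= 0."""
    def step(state, x):
        best, ending = state
        ending = max(ending + x, x)          # extend the segment or restart
        return max(best, ending), ending
    return reduce(step, xs, (0, 0))[0]

mss = max_segment_sum([-1, 3, -4, 10, -2, 5, -6])  # best segment: [10, -2, 5]
```

The cubic-time specification (max over all contiguous segments of their sums) and this fold compute the same value, which is exactly the calculational refinement the tutorial develops.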
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out to be natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Cloke, Jonathan; Crowley, Erin; Bird, Patrick; Bastin, Ben; Flannery, Jonathan; Agin, James; Goins, David; Clark, Dorn; Radcliff, Roy; Wickstrand, Nina; Kauppinen, Mikko
2015-01-01
The Thermo Scientific™ SureTect™ Escherichia coli O157:H7 Assay is a new real-time PCR assay which has been validated through the AOAC Research Institute (RI) Performance Tested Methods(SM) program for raw beef and produce matrixes. This validation study specifically validated the assay with 375 g 1:4 and 1:5 ratios of raw ground beef and raw beef trim in comparison to the U.S. Department of Agriculture, Food Safety Inspection Service, Microbiology Laboratory Guidebook (USDA-FSIS/MLG) reference method, and 25 g bagged spinach and fresh apple juice at a ratio of 1:10, in comparison to the reference method detailed in the International Organization for Standardization 16654:2001 reference method. For raw beef matrixes, the validation of both 1:4 and 1:5 allows user flexibility with the enrichment protocol; the choice between these two ratios should be based on the laboratory's specific test requirements. All matrixes were analyzed by Thermo Fisher Scientific, Microbiology Division, Vantaa, Finland, and Q Laboratories Inc, Cincinnati, Ohio, in the method developer study. Two of the matrixes (raw ground beef at both 1:4 and 1:5 ratios) and bagged spinach were additionally analyzed in the AOAC-RI controlled independent laboratory study, which was conducted by Marshfield Food Safety, Marshfield, Wisconsin. Using probability of detection statistical analysis, no significant difference was demonstrated by the SureTect kit in comparison to the USDA-FSIS reference method for raw beef matrixes, or with the ISO reference method for matrixes of bagged spinach and apple juice. Inclusivity and exclusivity testing was conducted with 58 E. coli O157:H7 and 54 non-E. coli O157:H7 isolates, respectively, which demonstrated that the SureTect assay was able to detect all isolates of E. coli O157:H7 analyzed. In addition, all but one of the nontarget isolates were correctly interpreted as negative by the SureTect Software. The single isolate giving a positive result was an E
Alfredo José Ferreira Melo
2015-10-01
The objective of this study was to evaluate the reproductive indices of different cattle herds submitted to a fixed-time artificial insemination (FTAI) program in the region of Piracicaba, SP. Twenty herds composed of 10 to 80 crossbred dairy cows were selected to participate in a breeding program through FTAI. First, a survey was conducted to determine the incidence of reproductive system diseases in the herds. For this purpose, blood samples were collected randomly from each herd for the serological diagnosis of brucellosis, leptospirosis, infectious bovine rhinotracheitis (IBR), bovine viral diarrhea (BVD), enzootic bovine leukosis (EBL), and neosporosis. The laboratory tests were conducted according to the methods of the World Organisation for Animal Health. All herds had at least one animal that tested positive for one or more reproductive system diseases. Brucellosis was detected in 3/20 (15%) herds, IBR and BVD in 19/20 (95%), EBL in 20/20 (100%), neosporosis in 13/20 (65%), and tuberculosis in 8/8 (100%). Six months later, cows (n=203) of the different herds were submitted to hormone treatment consisting of estradiol-progesterone and PGF2α for heat synchronization and ovulation and subsequent FTAI. The data were analyzed by logistic regression and Fisher’s exact test. The pregnancy rates at 30 and 60 days after FTAI were 55.7% and 48.3%, respectively. These rates were not influenced by herd, inseminator, body score, post-calving days, or number of lactations. The calving rate (42.4%) differed from the pregnancy rate at 30 days (P=0.01), but not at 60 days (P=0.27), after FTAI. The gestation loss until calving was 23.2% (26/112), but no exact cause of this event was identified. Despite the presence (seroreactivity) of reproductive diseases, cattle herds owned by small-scale producers exhibit acceptable pregnancy rates after FTAI. However, additional prophylactic measures such as vaccination and improvement of livestock management should be adopted.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight: given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
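As an illustration of the maximum entropy procedure reviewed above: in the single-observable case it reduces to exponentially reweighting the ensemble members, with the multiplier tuned so the reweighted average matches the experimental value. The sketch below is a toy (uniform prior, one observable, bisection for the multiplier), not the authors' implementation:

```python
import math

def maxent_reweight(values, target, lo=-50.0, hi=50.0):
    """Maximum entropy weights w_i proportional to exp(lam * f_i), with lam
    found by bisection so that the reweighted mean of `values` equals
    `target`. Single observable, uniform prior; `target` must lie strictly
    between min(values) and max(values)."""
    def reweighted_mean(lam):
        w = [math.exp(lam * f) for f in values]
        z = sum(w)
        return sum(wi * f for wi, f in zip(w, values)) / z
    for _ in range(200):  # the reweighted mean is monotonically increasing in lam
        mid = 0.5 * (lo + hi)
        if reweighted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * f) for f in values]
    z = sum(w)
    return [wi / z for wi in w]

# toy "simulation frames": one observable value per frame
frames = [1.0, 2.0, 3.0, 4.0]
w = maxent_reweight(frames, target=3.0)  # experiment says the average is 3.0
reweighted = sum(wi * f for wi, f in zip(w, frames))
```

This minimally perturbs the uniform weights (in the relative-entropy sense) while satisfying the experimental constraint.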
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, however, the maximum SNR filters, particularly the multichannel ones, may be of great practical value.
Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon
Fischer, Paul
1997-01-01
Given a set of points in the plane, each labeled positive or negative, consider the convex polygons whose vertices are positive points and which contain no negative point. Such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
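The core computation here, PCA of a correlation matrix, can be sketched with power iteration. The toy matrix below stands in for the maximum likelihood correlation estimate the paper derives; it is illustrative only:

```python
def dominant_mode(corr, iters=1000):
    """Leading eigenpair of a symmetric matrix by power iteration:
    the first principal component of a correlation matrix."""
    n = len(corr)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the corresponding eigenvalue
    lam = sum(v[i] * sum(corr[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# toy 3x3 correlation matrix: positions 0 and 1 move together,
# position 2 is nearly independent of both
C = [[1.0, 0.8, 0.1],
     [0.8, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
lam, mode = dominant_mode(C)  # dominant mode loads mostly on positions 0 and 1
```

The dominant eigenvector is what a "PCA plot" would color-code onto the structure: positions with large loadings of the same sign move together.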
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick
2016-06-01
Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
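The moisture maximization at the core of the methodology above scales an observed storm by the ratio of maximum precipitable water to the storm's precipitable water. The numbers and the ratio cap in this sketch are hypothetical (a cap is a common practical safeguard, not something taken from the paper):

```python
def moisture_maximized_snowfall(storm_swe_mm, storm_pw_mm, max_pw_mm, ratio_cap=2.0):
    """Scale a snowstorm's snow water equivalent (SWE) by the
    moisture-maximization ratio: monthly maximum precipitable water
    divided by the storm's precipitable water, optionally capped."""
    ratio = min(max_pw_mm / storm_pw_mm, ratio_cap)
    return storm_swe_mm * ratio

# hypothetical storm: 40 mm SWE observed with 15 mm precipitable water,
# against a monthly maximum precipitable water of 24 mm
pmsa_contrib = moisture_maximized_snowfall(40.0, 15.0, 24.0)  # 40 * 1.6 = 64 mm
```

Under the paper's non-stationary approach, the monthly maximum precipitable water (and hence the ratio) would itself shift with the climate projection used.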
Nadhir, Ahmad; Naba, Agus; Hiyama, Takashi
An optimal control for maximizing power extraction in a variable-speed wind energy conversion system is presented. Intelligent gradient detection by a fuzzy inference system (FIS) is proposed for maximum power point tracking control, so that operation stays near the optimal point of the power curve. The rotor speed reference can be adjusted by the maximum power point tracking fuzzy controller (MPPTFC) such that the turbine operates around maximum power. The power curve can be modelled with an adaptive neuro-fuzzy inference system (ANFIS), which only needs to estimate a few maximum power points, corresponding to the optimum generator rotor speed under varying wind speed, so its training requires little effort. Using the trained fuzzy model, several estimated maximum power points, together with their corresponding generator rotor speeds and wind speeds, are determined, from which a linear wind speed feedback controller (LWSFC) capable of producing the optimum generator speed can be obtained. Applied to a squirrel-cage induction generator based wind energy conversion system, the MPPTFC and LWSFC maximized extraction of the wind energy, verified by a power coefficient staying at its maximum almost all the time and an actual power line close to a maximum-power-efficiency reference line.
DNA isolation procedures significantly influence the outcome of PCR-based detection of human pathogens. Unlike clinical samples, DNA isolation from food samples such as fresh and fresh-cut produce has remained a formidable task and has hampered the sensitivity and accuracy of molecular methods. We...
The acoustic signals emitted by the last stage larval instars and adults of Prostephanus truncatus and Sitophilus zeamais in stored maize were investigated. Analyses were performed to identify brief, 1-10-ms broadband sound impulses of five different frequency patterns produced by larvae and adults,...
Duffell, Paul C
2014-01-01
We demonstrate that the steep decay and long plateau in the early phases of gamma ray burst (GRB) afterglows are naturally produced in the collapsar model, by a means ultimately related to the dynamics of relativistic jet propagation through a massive star. We present hydrodynamical simulations which start from a collapsar engine and evolve all the way through the late afterglow phase. The resultant outflow includes a jet core which is highly relativistic after breaking out of the star, but becomes baryon-loaded and less relativistic after colliding with a massive outer shell, corresponding to mass from the stellar atmosphere of the progenitor star which became trapped in front of the jet core at breakout. The prompt emission produced before or during this collision would then have the signature of a high Lorentz factor jet, but the afterglow is produced by the amalgamated post-collision ejecta which has more inertia than the original highly relativistic jet core and thus has a delayed deceleration. This natu...
Shanna Lara Miglioranzi
2011-12-01
PURPOSE: to check the relation among vital capacity (VC), maximum phonation times (MPT) of the voiceless closed /e/ (/ė/) and of /s/, and height in adult women. METHOD: 48 females, aged 18 to 44 years, with no factors intervening in the measures of interest (smoking, sport practice, singing, lung or articulation disorders), had their VC, MPT/ė/ and MPT/s/ measured three times each; the highest value obtained for each variable was selected for analysis, along with self-reported height. The four variables were compared by statistical analysis: Spearman's correlation coefficient was used to check their relationships; the Wilcoxon test for related samples was used to compare MPT/s/ and MPT/ė/; and the coefficient of variation was calculated to compare the homogeneity of these variables. RESULTS: significant positive correlations were found between VC and MPT/s/ (r=0.326; P=0.024), VC and MPT/ė/ (r=0.379; P=0.008), MPT/s/ and MPT/ė/ (r=0.360; P=0.012), and VC and height (r=0.432; P=0.002). MPT/s/ was significantly greater than MPT/ė/, and the sample's MPT/ė/ (10.43 s) was significantly lower than reference values.
Ibarburu, Idoia; Aznar, Rosa; Elizaquível, Patricia; García-Quintáns, Nieves; López, Paloma; Munduate, Arantza; Irastorza, Ana; Dueñas, María Teresa
2010-09-30
Ropiness in natural cider is a relatively frequent alteration, mainly found after bottling, leading to consumer rejection. It is derived from the production of exopolysaccharides (EPS) by some lactic acid bacteria, most of which synthesize a 2-branched (1,3)-beta-D-glucan and belong to the genera Pediococcus, Lactobacillus and Oenococcus. This polysaccharide synthesis is controlled by a single transmembrane glycosyltransferase (GTF). In this work, a method based on quantitative PCR (qPCR) and targeting the gtf gene was developed for detection and quantification of these bacteria in cider. The newly designed primers GTF3/GTF4 delimit a 151 bp fragment within the 417 bp amplicon previously designed for conventional PCR. The inclusivity and exclusivity of the qPCR assay were assessed with 33 cider isolates belonging to the genera Lactobacillus, Oenococcus and Pediococcus, together with reference strains of 16 species and five genera, including beta-glucan, alpha-glucan and heteropolysaccharide (HePS) producing strains and non-EPS producers. The qPCR assay, followed by melting curve analysis, confirmed the generation of a single PCR product from the beta-glucan producers, with a T(m) of 74.28+/-0.08 and C(T) values (10 ng DNA) ranging between 8.46 and 16.88 (average 12.67+/-3.5). Some EPS-negative LAB strains rendered C(T) values ranging from 28.04 to 37.75, but these were significantly higher than those of the beta-glucan producers. The C(T) quantification range spanned 5 log units using calibrated cell suspensions of Pediococcus parvulus 2.6 and Oenococcus oeni I4. The linearity was extended over 7 log orders when calibration curves were obtained from DNA. The detection limit for beta-glucan producing LAB in artificially contaminated cider was about 3x10(2) CFU per ml. The newly developed qPCR assay was successfully applied to monitor the cidermaking process in 13 tanks from two cider factories, revealing a decrease in C(T) values derived from an increase in beta-glucan producing LAB populations. In addition, 8 naturally spoiled
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
Vaudrey, A; Lanzetta, F; Glises, R
2009-01-01
Producing useful electrical work while consuming chemical energy, the fuel cell has to reject heat to its surroundings. However, as for any other type of engine, this thermal energy cannot be exchanged isothermally in finite time through finite areas. As has already been done for various types of systems, we study the fuel cell within the finite-time thermodynamics framework and define an endoreversible fuel cell. Considering different types of heat transfer laws, we obtain an optimal value of the operating temperature, corresponding to maximum produced power. This analysis is a first step toward a thermodynamic approach to the design of thermal management devices that takes into account the performance of the whole system.
Ripoll, G.; Alvarez-Rodriguez, J.; Sanz, A.; Joy, M.
2014-06-01
The effects of grazing on the carcasses and meat of light lambs are unclear, mainly due to variations in weather conditions and pasture production, which affect the growth of lambs and the quality of their carcasses. The aim of this study was to evaluate the effect of feeding systems, which varied in intensification due to the use of concentrate, on the growth and carcass traits of light lambs and the capability of these feeding systems to produce homogeneous lamb carcasses over the course of several years. The average daily weight gain of grazing lambs, but not lambs fed indoors was affected over years. The colour of the Rectus abdominis muscle and the amount of fat were more variable in grazing lambs (from 2.7 to 6.3) than indoor lambs (from 4.5 to 5.1). Grazing feeding systems without concentrate supplementation are more dependent than indoor feeding systems on the year. This climatologic dependence could lead to slaughter of older grazing lambs (77 days) to achieve the target slaughter weight when temperatures are low or the rainfall great. All feeding systems evaluated produced light lambs carcasses with a conformation score from O to R that is required by the market. Even the potential change in fat colour found in both grazing treatments was not enough to change the subjective evaluation of fat colour. (Author)
Guillermo Ripoll
2014-02-01
The effects of grazing on the carcasses and meat of light lambs are unclear, mainly due to variations in weather conditions and pasture production, which affect the growth of lambs and the quality of their carcasses. The aim of this study was to evaluate the effect of feeding systems, which varied in intensification due to the use of concentrate, on the growth and carcass traits of light lambs and the capability of these feeding systems to produce homogeneous lamb carcasses over the course of several years. The average daily weight gain of grazing lambs, but not lambs fed indoors, was affected over years. The colour of the Rectus abdominis muscle and the amount of fat were more variable in grazing lambs (from 2.7 to 6.3) than indoor lambs (from 4.5 to 5.1). Grazing feeding systems without concentrate supplementation are more dependent than indoor feeding systems on the year. This climatologic dependence could lead to slaughter of older grazing lambs (77 days) to achieve the target slaughter weight when temperatures are low or the rainfall great. All feeding systems evaluated produced light lamb carcasses with the conformation score from O to R- that is required by the market. Even the potential change in fat colour found in both grazing treatments was not enough to change the subjective evaluation of fat colour.
Enhancement of the maximum proton energy by funnel-geometry target in laser-plasma interactions
Yang, Peng; Fan, Dapeng; Li, Yuxiao
2016-09-01
Enhancement of the maximum proton energy using a funnel-geometry target is demonstrated through particle simulations of laser-plasma interactions. When an intense short-pulse laser illuminates a thin foil target, the foil electrons are pushed by the laser ponderomotive force and form an electron cloud at the target rear surface. The electron cloud generates a strong electrostatic field, which accelerates the protons to high energies. If there is a hole in the rear of the target, the shape of the electron cloud and the distribution of the protons will be affected by the protuberant part of the hole. In this paper, a funnel-geometry target is proposed to improve the maximum proton energy. Using 2-dimensional particle-in-cell simulations, the transverse electric fields generated by the side walls of four different holes are calculated, and the protons inside the holes are restricted to specific shapes by these fields. In the funnel-geometry target, more protons are confined near the center of the longitudinal accelerating electric field, so the protons experience a longer accelerating time and distance in the sheath field than in a traditional cylindrical hole target. Accordingly, more protons, with higher energies, are produced from the funnel-geometry target. The maximum proton energy is improved by about 4 MeV compared with a traditional cylinder-shaped hole target. The funnel-geometry target serves as a new method to improve the maximum proton energy in laser-plasma interactions.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
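For background, the parsimony score that the ILP minimizes over all trees can be computed for one fixed candidate tree with Fitch's classic small-parsimony algorithm. The sketch below scores a given tree for one binary character; unlike the paper's ILP, it does not search over trees:

```python
def fitch_score(tree, leaf_states):
    """Fitch small parsimony for one character: minimum number of state
    changes needed on a fixed rooted binary tree. Leaves are strings,
    internal nodes are 2-tuples of subtrees."""
    changes = 0
    def walk(node):
        nonlocal changes
        if isinstance(node, str):
            return {leaf_states[node]}
        left, right = walk(node[0]), walk(node[1])
        if left & right:          # states agree: no change needed here
            return left & right
        changes += 1              # disjoint state sets: one change charged
        return left | right
    walk(tree)
    return changes

# one binary site over four taxa: A and B carry state 0, C and D carry state 1
states = {"A": 0, "B": 0, "C": 1, "D": 1}
good_tree = fitch_score((("A", "B"), ("C", "D")), states)  # 1 change
bad_tree = fitch_score((("A", "C"), ("B", "D")), states)   # 2 changes
```

Maximum parsimony then asks for the tree minimizing this score summed over all sites, which is the hard part the ILP formulations address.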
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Cavallini, L; Mecca, A; Pinci, D; Akchurin, N; Berntzon, L; Kim, H; Roh, Y; Wigmans, R; Cardini, A; Ferrari, R; Franchino, S; Gaudio, G; Livan, M; Hauptman, J; La Rotonda, L; Meoni, E; Policicchio, A; Susinno, G; Paar, H; Penzo, A; Popescu, S; Vandelli, W
2008-01-01
Beam tests were carried out on PbWO4 crystals. One of the aims of this work was to evaluate the contribution of the Čerenkov component to the total light yield. The difference in the timing characteristics of the fast Čerenkov signals with respect to the scintillation ones, which are emitted with a decay time of about 10 ns, can be exploited to separate the two contributions. In this paper we present the results of an analysis performed on the time structure of the signals, showing how it is possible to detect and assess the presence and the amount of Čerenkov light. Since Čerenkov light is emitted only by the electromagnetic component of a hadronic shower, its precise measurement would make it possible to account for one of the dominant sources of fluctuations in hadronic showers and to achieve an improvement in the energy resolution of a hadronic calorimeter.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., that filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
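The maximization described above is a generalized Rayleigh-quotient eigenvalue problem: maximize x^T A x / x^T B x, solved by the leading eigenvector of inv(B) A. A minimal real-valued 2x2 sketch follows (the paper's filters act on complex polarimetric covariances; the matrices here are hypothetical stand-ins):

```python
def max_contrast_filter(A, B):
    """Weight vector maximizing x^T A x / x^T B x for 2x2 symmetric A, B
    (B positive definite): the leading eigenpair of inv(B) @ A."""
    # invert B (2x2 closed form)
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    Binv = [[B[1][1] / det, -B[0][1] / det],
            [-B[1][0] / det, B[0][0] / det]]
    M = [[sum(Binv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    # larger root of the 2x2 characteristic polynomial = maximum contrast
    tr, dt = M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0]
    lam = 0.5 * (tr + (tr * tr - 4 * dt) ** 0.5)
    # eigenvector for lam
    if abs(M[0][1]) > 1e-12:
        v = (M[0][1], lam - M[0][0])
    elif abs(M[1][0]) > 1e-12:
        v = (lam - M[1][1], M[1][0])
    else:
        v = (1.0, 0.0) if M[0][0] >= M[1][1] else (0.0, 1.0)
    n = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return lam, (v[0] / n, v[1] / n)

# class-1 "power" concentrated in the first channel, class-2 in the second
A = [[4.0, 0.0], [0.0, 1.0]]
B = [[1.0, 0.0], [0.0, 2.0]]
contrast, wgt = max_contrast_filter(A, B)  # best contrast 4.0, filter (1, 0)
```

The filter unsurprisingly selects the channel where class 1 dominates class 2 the most; the same mechanics carry over to the full polarimetric covariance case.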
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
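The transfer-function sensitivity described above is easy to reproduce: a cup anemometer's output wind speed is a linear function of rotation frequency, so two different slope/offset defaults shift every reported speed. The coefficients below are hypothetical illustrations, not the Maximum No. 40's actual calibration values:

```python
def wind_speed(freq_hz, slope, offset):
    """Linear cup-anemometer transfer function: wind speed in m/s
    from cup rotation frequency in Hz."""
    return slope * freq_hz + offset

def percent_difference(a, b):
    """Percent difference relative to the mean of the two values."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# two hypothetical logger defaults applied to the same 10 Hz reading
f = 10.0
v1 = wind_speed(f, 0.765, 0.35)  # -> 8.0 m/s
v2 = wind_speed(f, 0.725, 0.45)  # -> 7.7 m/s
diff = percent_difference(v1, v2)  # a few percent, as the paper reports
```

Because wind power scales with the cube of speed, even a few percent of systematic speed difference is material for a resource assessment.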
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
RATIONALE: Analysis of bacteria by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) often relies upon sample preparation methods that result in cell lysis, e.g. bead-beating. However, Shiga toxin-producing Escherichia coli (STEC) can undergo bacteriophage...
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
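The isomorphism check used as a subroutine above can be approximated with a canonical-form comparison: sorting each node's children's canonical strings makes two isomorphic rooted trees compare equal. This sketch is O(n log n) because of the sorting, not the paper's linear time:

```python
def canonical(tree):
    """Canonical string for a rooted tree with labelled leaves (strings);
    internal nodes are tuples. Isomorphic trees get identical strings."""
    if isinstance(tree, str):
        return tree
    return "(" + ",".join(sorted(canonical(c) for c in tree)) + ")"

def isomorphic(t1, t2):
    """Two rooted leaf-labelled trees are isomorphic iff their canonical
    forms coincide."""
    return canonical(t1) == canonical(t2)

# same grouping of taxa, children listed in a different order
t1 = (("a", "b"), ("c", ("d", "e")))
t2 = ((("e", "d"), "c"), ("b", "a"))
same = isomorphic(t1, t2)                              # True
other = isomorphic(t1, (("a", "c"), ("b", ("d", "e"))))  # False
```

The paper's contribution goes further: when the check fails, its algorithms return a small witness subset of taxa on which the trees conflict, which drives the fixed-parameter search.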
De Angelis, R.; Consoli, F.; Verona, C.; Di Giorgio, G.; Andreoli, P.; Cristofari, G.; Cipriani, M.; Ingenito, F.; Marinelli, M.; Verona-Rinati, G.
2016-12-01
The paper reports on the use of single-crystal Chemical Vapour Deposited (CVD) diamonds as radiation detectors in laser-matter interaction experiments on the ABC laser at ENEA - Frascati. The detectors have been designed and realized by the University of Tor Vergata - Rome. The interdigital configuration and the new design of the bias-tee voltage supply units guarantee a fast time response. The detectors are sensitive to soft X-ray photons and to particles. A remarkable immunity to the electromagnetic noise associated with the laser-target interaction makes them especially useful for measurements of the time of flight of fast particles. A novel diamond assembly has been tested in plasmas generated by the ABC laser in the nanosecond regime at intensities I = 10¹³-10¹⁴ W/cm², where contributions from X rays, fast electrons and ions could be observed.
Barty, C.P. [University of California, San Diego, Urey Hall, MC 0339 La Jolla, California 92093-0339 (United States); Gordon, C.L. III; Lemoff, B.E.; Yin, G.Y. [Stanford University, Edward L. Ginzton Laboratory, Stanford, California 94305 (United States); Bell, P.M. [Lawrence Livermore National Laboratory L-484, P.O. Box 808, Livermore, California 94550 (United States)
1996-05-01
30-fs, multiterawatt laser pulses are focused to intensities of >10¹⁸ W/cm² onto a solid Ta target to generate x-rays (10-30 keV) for diagnostic imaging. Time-gated detection is demonstrated as a technique for removal of scattered radiation and for the improvement of image contrast by a factor of nearly 5. © 1996 American Institute of Physics.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory for the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
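The classical Boltzmann-Gibbs-Shannon case that this framework generalizes can be stated compactly (a standard textbook formulation, not taken from the paper): maximizing entropy under a moment constraint yields an exponential-family distribution,

```latex
\max_{p}\; -\sum_i p_i \log p_i
\quad\text{s.t.}\quad \sum_i p_i = 1,\;\; \sum_i p_i f_i = \mu
\qquad\Longrightarrow\qquad
p_i^{\ast} = \frac{e^{\lambda f_i}}{\sum_j e^{\lambda f_j}},
```

where the multiplier λ solves the dual problem, minimizing the log-partition function minus λμ, i.e. \(\min_\lambda \bigl[\log \sum_j e^{\lambda f_j} - \lambda\mu\bigr]\). This minimization is the minimum-divergence (maximum-likelihood) side of the duality that the paper extends to generalized entropies such as Tsallis entropy.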
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach, combined with a rich set of features, produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
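A common way to realize this kind of per-module tracking is a perturb-and-observe (P&O) loop running in each converter. The sketch below is a generic illustration of that idea, not the controller from the paper; the toy TEG model (open-circuit voltage, internal resistance) and the step size are assumptions for the example.

```python
def perturb_and_observe(power_at, v_start=1.0, dv=0.05, steps=200):
    """Generic perturb-and-observe MPPT: step the operating voltage,
    keep stepping the same way while power rises, reverse when it falls."""
    v, p, direction = v_start, power_at(v_start), 1.0
    for _ in range(steps):
        v_next = v + direction * dv
        p_next = power_at(v_next)
        if p_next < p:            # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_next, p_next
    return v, p

# Toy TEG module: open-circuit voltage 4 V, internal resistance 1 ohm,
# so delivered power P(v) = v * (4 - v), which peaks at v = Voc/2 = 2 V.
v_mpp, p_mpp = perturb_and_observe(lambda v: v * (4.0 - v))
```

Running one such loop per module, instead of one for the whole string, is what lets mismatched modules each sit at their own maximum power point.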
Ravely Casarotti Orlandelli
2016-06-01
Endophytic fungi have been described as producers of important bioactive compounds; however, they remain under-exploited as sources of exopolysaccharides (EPS). This work therefore reports on EPS production by submerged cultures of eight endophytes isolated from Piper hispidum Sw., belonging to the genera Diaporthe, Marasmius, Phlebia, Phoma, Phyllosticta and Schizophyllum. After fermentation for 96 h, four endophytes secreted EPS: Diaporthe sp. JF767000, Diaporthe sp. JF766998, Diaporthe sp. JF767007 and Phoma herbarum JF766995. The EPS from Diaporthe sp. JF766998 differed statistically from the others, with a higher percentage of carbohydrate (91%) and a lower amount of protein (8%). Subsequently, this fungus was grown under submerged culture for 72, 96 and 168 h (these EPS were designated EPSD1-72, EPSD1-96 and EPSD1-168) and the differences in production, monosaccharide composition and apparent molecular weight were compared. The EPS yields in mg/100 mL of culture medium were: 3.0 ± 0.4 (EPSD1-72), 15.4 ± 2.2 (EPSD1-96) and 14.8 ± 1.8 (EPSD1-168). EPSD1-72 had a high protein content (28.5%) and only 71% carbohydrate, while EPSD1-96 and EPSD1-168 were composed mainly of carbohydrate (≈95 and 100%, respectively), with low protein content (≈5%) detected at 96 h. Galactose was the main monosaccharide component (30%) of EPSD1-168. In contrast, EPSD1-96 was rich in glucose (51%), with a molecular weight of 46.6 kDa. This is an important feature for future investigations, because glucan-rich EPS are reported to be effective antitumor agents.
J. B. Fletcher
1994-06-01
The M 7.4 Landers earthquake triggered widespread seismicity in the Western U.S. Because the transient dynamic stresses induced at regional distances by the Landers surface waves are much larger than the expected static stresses, the magnitude and the characteristics of the dynamic stresses may bear upon the earthquake triggering mechanism. The Landers earthquake was recorded on the UPSAR array, a group of 14 triaxial accelerometers located within a 1-square-km region 10 km southwest of the town of Parkfield, California, 412 km northwest of the Landers epicenter. We used a standard geodetic inversion procedure to determine the surface strain and stress tensors as functions of time from the observed dynamic displacements. Peak dynamic strains and stresses at the Earth's surface are about 7 microstrain and 0.035 MPa, respectively, and they have a flat amplitude spectrum between 2 s and 15 s period. These stresses agree well with stresses predicted from a simple rule of thumb based upon the ground velocity spectrum observed at a single station. Peak stresses ranged from about 0.035 MPa at the surface to about 0.12 MPa between 2 and 14 km depth, with the sharp increase of stress away from the surface resulting from the rapid increase of rigidity with depth and from the influence of surface wave mode shapes. Comparison of Landers-induced static and dynamic stresses at the hypocenter of the Big Bear aftershock provides a clear example that faults are stronger on time scales of tens of seconds than on time scales of hours or longer.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, which are important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Coleman, Neil M; Abramson, Lee R; Coleman, Fiona A B
2012-03-01
This study examines the past and future impact of nuclear reactors on anthropogenic carbon emissions to the atmosphere. If nuclear power had never been commercially developed, what additional global carbon emissions would have occurred? More than 44 y of global nuclear power have caused a lag time of at least 1.2 y in carbon emissions and CO2 concentrations through the end of 2009. This lag time incorporates the contribution of life cycle carbon emissions due to the construction and operation of nuclear plants. Cumulative global carbon emissions would have been about 13 Gt greater through 2009, and the mean annual CO2 concentration at Mauna Loa would have been ~2.7 ppm greater than without nuclear power. This study finds that an additional 14–17 Gt of atmospheric carbon emissions could be averted by the global use of nuclear power through 2030, for a cumulative total of 27–30 Gt averted during the period 1965–2030. This result is based on International Atomic Energy Agency projections of future growth in nuclear power from 2009–2030, modified by the recent loss or permanent shutdown of 14 reactors in Japan and Germany.
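The ~2.7 ppm figure can be roughly reproduced from the ~13 Gt cumulative figure using two standard conversion assumptions that are not stated in the abstract: about 2.13 Gt of carbon per ppm of atmospheric CO2, and an airborne fraction of roughly 45% (the remainder being taken up by oceans and land):

```python
# Back-of-envelope check (the conversion factors are assumptions,
# not values from the abstract):
# ~2.13 GtC raises atmospheric CO2 by 1 ppm; ~45% of emissions stay airborne.
averted_gtc = 13.0        # cumulative emissions averted through 2009 (GtC)
gtc_per_ppm = 2.13        # assumed conversion factor
airborne_fraction = 0.45  # assumed airborne fraction

delta_ppm = averted_gtc * airborne_fraction / gtc_per_ppm
```

Under these assumptions the result is about 2.7 ppm, consistent with the Mauna Loa figure quoted in the abstract.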
Taşkın, Hatıra; Baktemur, Gökhan; Kurul, Mehmet; Büyükalaca, Saadet
2013-01-01
This study was performed for comparison of the meristem culture technique with the shoot tip culture technique for obtaining virus-free plants, comparison of the micropropagation success of two different nutrient media, and determination of the effectiveness of a real-time PCR assay for the detection of viruses. Two different garlic species (Allium sativum and Allium tuncelianum) and two different nutrient media were used in this experiment. Results showed that Medium 2 was more successful than Medium 1 for both A. tuncelianum and A. sativum (Kastamonu garlic clone). In vitro plants obtained via meristem and shoot tip cultures were tested for onion yellow dwarf virus (OYDV) and leek yellow stripe virus (LYSV) by real-time PCR assay. In garlic plants propagated via meristem culture, we could not detect any virus. OYDV and LYSV were detected in plants obtained via shoot tip culture: OYDV was observed in 80% and 73% of the tested plants of A. tuncelianum and A. sativum, respectively, and LYSV was found in 67% of the tested plants of A. tuncelianum and 87% of those of A. sativum.
Farrell, A P; Steffensen, J F
1987-01-01
The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced a statistically significant 35.5% reduction in maximum swimming speed. We conclude that the coronary circulation is important for maximum aerobic swimming, and implicit in this conclusion is that maximum cardiac performance is probably necessary for maximum aerobic swimming performance.
Li, Xingbao; Hartwell, Karen J; Borckardt, Jeffery; Prisciandaro, James J; Saladin, Michael E; Morgan, Paul S; Johnson, Kevin A; Lematty, Todd; Brady, Kathleen T; George, Mark S
2013-07-01
Numerous research groups are now using analysis of blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) results and relaying back information about regional activity in their brains to participants in the scanner in 'real time'. In this study, we explored the feasibility of self-regulation of frontal cortical activation using real-time fMRI (rtfMRI) neurofeedback in nicotine-dependent cigarette smokers during exposure to smoking cues. Ten cigarette smokers were shown smoking-related visual cues in a 3 Tesla MRI scanner to induce their nicotine craving. Participants were instructed to modify their craving using rtfMRI feedback with two different approaches. In a 'reduce craving' paradigm, participants were instructed to 'reduce' their craving, and decrease the anterior cingulate cortex (ACC) activity. In a separate 'increase resistance' paradigm, participants were asked to increase their resistance to craving and to increase middle prefrontal cortex (mPFC) activity. We found that participants were able to significantly reduce the BOLD signal in the ACC during the 'reduce craving' task (P=0.028). There was a significant correlation between decreased ACC activation and reduced craving ratings during the 'reduce craving' session (P=0.011). In contrast, there was no modulation of the BOLD signal in mPFC during the 'increase resistance' session. These preliminary results suggest that some smokers may be able to use neurofeedback via rtfMRI to voluntarily regulate ACC activation and temporarily reduce smoking cue-induced craving. Further research is needed to determine the optimal parameters of neurofeedback rtfMRI, and whether it might eventually become a therapeutic tool for nicotine dependence. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
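A flat death risk of about 50% per year implies a geometric survival curve, which is why survivors past the plateau age become vanishingly rare. The ages and the 50% figure below are taken from the abstract; treating the risk as exactly constant is a simplifying assumption for illustration.

```python
# Survival beyond the mortality plateau, taking the abstract's ~50%
# annual death risk (women, from age 107) as exactly constant
# (a simplifying assumption for illustration).
annual_risk = 0.5

def survival_prob(years_past_plateau):
    """Probability of surviving a further whole number of years."""
    return (1.0 - annual_risk) ** years_past_plateau

# Of women reaching age 107, roughly 1 in 1024 would reach 117:
p_ten_more = survival_prob(10)
```

Even with no further rise in mortality, the plateau alone makes a substantial extension of the maximum lifespan unlikely without reductions in mortality above age 100, as the abstract concludes.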
González-Escalona, Narjol; Hammack, Thomas S; Russell, Mindi; Jacobson, Andrew P; De Jesús, Antonio J; Brown, Eric W; Lampel, Keith A
2009-06-01
Salmonella enterica contamination in foods is a significant concern for public health. When DNA detection methods are used for analysis of foods, one of the major concerns is false-positive results from the detection of dead cells. To circumvent this crucial issue, a TaqMan quantitative real-time RT-PCR (qRT-PCR) assay with an RNA internal control was developed. invA RNA standards were used to determine the detection limit of this assay as well as to determine invA mRNA levels in mid-exponential-, late-exponential-, and stationary-phase cells. This assay has a detection limit of 40 copies of invA mRNA per reaction. The levels of invA mRNA in mid-exponential-, late-exponential-, and stationary-phase S. enterica cells were approximately 1 copy per 3 CFU, 1 copy per CFU, and 4 copies per 10^3 CFU, respectively. Spinach, tomatoes, jalapeno peppers, and serrano peppers were artificially contaminated with four different Salmonella serovars at levels of 10^5 and less than 10 CFU. These foods were analyzed with qRT-PCR and with the FDA's Bacteriological Analytical Manual Salmonella culture method (W. A. Andrews and T. S. Hammack, in G. J. Jackson et al., ed., Bacteriological analytical manual online, http://www.cfsan.fda.gov/~ebam/bam-5.html, 2007). Comparable results were obtained by both methods. Only live Salmonella cells could be detected by this qRT-PCR assay, thus avoiding the dangers of false-positive results from nonviable cells. False negatives (inhibition of the PCR) were also ruled out through the use of an RNA internal control. This assay allows for the fast and accurate detection of viable Salmonella spp. in spinach, tomatoes, and in both jalapeno and serrano peppers.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), whereas the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation systems, as well as to the determination of several parameters of interest in quantum optics.
Maximum-likelihood estimation of the entropy of an attractor
Schouten, J C; Takens, F; van den Bleek, C M
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the est
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system and the connection conditions, which makes it suitable for the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states and states far from thermodynamic equilibrium.
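As a concrete illustration of the technique (Jaynes's classic loaded-die example, not one of the systems discussed in this abstract): the maximum entropy distribution on faces 1..6 with a prescribed mean is exponential in the face value, p_i ∝ exp(-λi), and the Lagrange multiplier λ can be found by bisection on the mean constraint.

```python
import math

def maxent_die(target_mean, lo=-50.0, hi=50.0):
    """Maximum entropy distribution on die faces 1..6 with a given mean.
    The solution has p_i proportional to exp(-lam * i); bisect on lam."""
    faces = range(1, 7)

    def mean(lam):
        w = [math.exp(-lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z

    for _ in range(100):                  # bisection on the multiplier
        lam = (lo + hi) / 2.0
        if mean(lam) > target_mean:       # mean(lam) decreases as lam grows
            lo = lam
        else:
            hi = lam
    w = [math.exp(-lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # a die "loaded" towards the high faces
```

With the mean constrained to 3.5 the same routine returns the uniform distribution, as maximum entropy demands when the constraint carries no extra information.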
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, drawn from an alphabet of size M, transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. The mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
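The demarking-point rule described above reduces to a pair of thresholds per side, giving a three-way decision. A minimal sketch for the right femur, with the thresholds copied from the abstract (lengths in mm; the wide middle band is where the method abstains, which is why it identifies only a minority of bones):

```python
# Demarking-point sex classification for the RIGHT femur, using the
# thresholds reported in the abstract (maximum femoral length, mm).
RIGHT_MALE_DP = 476.70    # longer than this: definitely male
RIGHT_FEMALE_DP = 379.99  # shorter than this: definitely female

def classify_right_femur(max_length_mm):
    if max_length_mm > RIGHT_MALE_DP:
        return "male"
    if max_length_mm < RIGHT_FEMALE_DP:
        return "female"
    return "indeterminate"   # overlap zone: the method makes no call
```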
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently (low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses), models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us to obtain, for the first time, observational spatial and temporal distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects, and is therefore expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
Performance of MIMO-OFDM system using Linear Maximum Likelihood Alamouti Decoder
Monika Aggarwal
2012-06-01
A MIMO-OFDM wireless communication system combines MIMO and OFDM technology. The combination of MIMO and OFDM produces a powerful technique for providing high data rates over frequency-selective fading channels, and MIMO-OFDM is currently recognized as one of the most competitive technologies for 4G mobile wireless systems, since it can compensate for the shortcomings of MIMO systems while exploiting the advantages of OFDM. In this paper, the bit error rate (BER) performance of a linear maximum likelihood Alamouti combiner (LMLAC) decoding technique for space-time-frequency block code (STFBC) MIMO-OFDM systems with frequency offset (FO) is evaluated, to provide the system with low complexity and maximum diversity. The simulation results show that the scheme can reduce ICI effectively with low decoding complexity and maximum diversity, in terms of both bandwidth efficiency and BER performance, especially at high signal-to-noise ratio.
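For reference, the classic 2×1 Alamouti scheme that underlies this kind of linear ML combining can be sketched in a few lines. This is the textbook flat-channel, noise-free version only; the paper's setting adds OFDM, the frequency dimension of the STFBC, and frequency offset on top of it.

```python
# Textbook 2x1 Alamouti space-time block code over a flat fading channel
# (noise-free here for clarity).
def alamouti_2x1(s1, s2, h1, h2):
    # Transmit: time 1 -> (s1, s2); time 2 -> (-conj(s2), conj(s1))
    r1 = h1 * s1 + h2 * s2                            # received, time 1
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()   # received, time 2
    # Linear maximum likelihood combining
    gain = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
    return s1_hat, s2_hat

# Two QPSK symbols through an arbitrary complex channel
est1, est2 = alamouti_2x1(1 + 1j, -1 + 1j, 0.8 - 0.3j, 0.2 + 0.9j)
```

The combining step collapses the two-antenna channel to a single scaled gain (|h1|² + |h2|²) per symbol, which is where the scheme's full transmit diversity comes from.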
Thomas T. Lei; Shawn W. Semones; John F. Walker; Barton D. Clinton; Erik T. Nilsen
2002-01-01
In the southern Appalachian forests, the regeneration of canopy trees is severely inhibited by Rhododendron maximum L., an evergreen understory shrub producing dense thickets. While light availability is a major cause, other factors may also contribute to the absence of tree seedlings under R. maximum. We examined the effects of...
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
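The Burg recursion itself fits an autoregressive (AR) model by minimizing forward plus backward prediction error power, order by order; the MEM spectrum then follows as E/|A(e^{jω})|². The pure-Python sketch below illustrates the recursion on a synthetic AR(1) series; it is a generic implementation, not the one used in the paper.

```python
import random

def burg_ar(x, order):
    """Burg's method: estimate AR coefficients a[0..order] (a[0] = 1)
    by minimizing forward + backward prediction error power."""
    n = len(x)
    f = list(x)          # forward prediction errors
    b = list(x)          # backward prediction errors
    a = [1.0]
    for m in range(order):
        num = sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m + 1, n))
        k = -2.0 * num / den                      # reflection coefficient
        a_ext = a + [0.0]                         # Levinson-style update
        a = [a_ext[i] + k * a_ext[m + 1 - i] for i in range(m + 2)]
        for i in range(n - 1, m, -1):             # update errors in place
            fi = f[i]
            f[i] = fi + k * b[i - 1]
            b[i] = b[i - 1] + k * fi
    return a

# Synthetic AR(1) signal x[t] = 0.9 x[t-1] + noise: Burg should
# recover coefficients close to [1, -0.9].
random.seed(1)
x, prev = [], 0.0
for _ in range(4000):
    prev = 0.9 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
coeffs = burg_ar(x, 1)
```

Because the AR model extrapolates the autocorrelation instead of assuming it zero outside the record, short records can yield much sharper spectral peaks than the FFT, which is the resolution advantage the abstract reports.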
Influence of maximum decking charge on intensity of blasting vibration
[no author listed]
2006-01-01
Based on the character of blasting vibrations as short-time, non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. First, the characteristics of the wavelet transform and wavelet packet analysis are described. Second, the blasting vibration signals were analyzed by wavelet packet in MATLAB, and the change of the energy distribution curve across frequency bands was obtained. Finally, how the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with increasing decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not otherwise depend on the maximum decking charge.
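The quantity being tracked here, the share of signal energy in high-frequency bands, can be illustrated with a plain DFT in place of the paper's wavelet-packet decomposition. This is a deliberate simplification (a wavelet-packet version would need a library such as PyWavelets); the test signal and cutoff are invented for the example.

```python
import cmath
import math

def high_freq_energy_ratio(signal, sample_rate, cutoff_hz):
    """Fraction of the signal's spectral energy above cutoff_hz,
    computed with a naive one-sided DFT. (Illustrative stand-in for
    the paper's wavelet-packet band energies.)"""
    n = len(signal)
    high = total = 0.0
    for k in range(n // 2 + 1):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        energy = abs(coeff) ** 2
        total += energy
        if k * sample_rate / n > cutoff_hz:
            high += energy
    return high / total

# 10 Hz component (amplitude 1) plus a weaker 80 Hz component
# (amplitude 0.5), sampled at 400 Hz for one second
sig = [math.cos(2 * math.pi * 10 * t / 400)
       + 0.5 * math.cos(2 * math.pi * 80 * t / 400)
       for t in range(400)]
ratio = high_freq_energy_ratio(sig, 400, 40.0)  # energy share above 40 Hz
```

A falling value of this ratio across records is the signature the abstract describes: larger decking charges push the dominant energy towards the low-frequency bands.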
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been growing interest in using energy harvesting techniques to power wireless sensor networks. The motivation for this technology is the sensors' limited operation time, which results from the finite capacity of batteries, together with the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multi-source energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, algorithms can be developed for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also showed reduced bias in the estimates of the subjective value of time and consumer surplus.
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear programming (MILP) problem is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shed light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. As a physical basis of this relation serves an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region and even more so for LSB galaxies. Matters h...
Otto, Carolin; Hofmann, Jörg; Ruprecht, Klemens
2016-06-01
Patients with multiple sclerosis (MS), a chronic inflammatory disease of the central nervous system (CNS), typically have an intrathecal synthesis of immunoglobulin (Ig)G. Intrathecal IgG is produced by B lineage cells that entered the CNS, but why and when these cells invade the CNS of patients with MS is unknown. The intrathecal IgG response in patients with MS is polyspecific, and part of it is directed against different common viruses (e.g. measles virus, rubella virus, varicella zoster virus). Strong and consistent evidence suggests an association of MS and Epstein-Barr virus (EBV) infection, and EBV seroprevalence in patients with MS is practically 100%. However, intriguingly, despite the universal EBV seroprevalence, the frequency of intrathecally produced IgG to EBV in patients with MS is much lower than that of intrathecally produced IgG to other common viruses. The acute phase of primary EBV infection is characterized by a strong polyclonal B cell activation. As is typical for humoral immune responses against viruses, EBV-specific IgG is produced only with a temporal delay after acute EBV infection. Aiming to put the above facts into a logical structure, we here propose the hypothesis that, in individuals who go on to develop MS, antibody-producing B lineage cells invade the CNS predominantly at the time of, and triggered by, acute primary EBV infection. Because EBV IgG-producing B lineage cells have not yet arisen at the time of acute EBV infection, the hypothesis could explain the universal EBV seroprevalence and the low frequency of intrathecally produced IgG to EBV in patients with MS. Evidence supporting the hypothesis could be provided by large prospective follow-up studies of individuals with symptomatic primary EBV infection (infectious mononucleosis). Furthermore, the clarification of the molecular mechanism underlying an EBV-induced invasion of B lineage cells into the CNS of individuals going on to develop MS could corroborate it, too. If true, our
Maximum information entropy: a foundation for ecological theory.
Harte, John; Newman, Erica A
2014-07-01
The maximum information entropy (MaxEnt) principle is a successful method of statistical inference that has recently been applied to ecology. Here, we show how MaxEnt can accurately predict patterns such as species-area relationships (SARs) and abundance distributions in macroecology and be a foundation for ecological theory. We discuss the conceptual foundation of the principle, why it often produces accurate predictions of probability distributions in science despite not incorporating explicit mechanisms, and how mismatches between predictions and data can shed light on driving mechanisms in ecology. We also review possible future extensions of the maximum entropy theory of ecology (METE), a potentially important foundation for future developments in ecological theory.
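The MaxEnt inference step described above can be made concrete with a minimal stdlib-Python sketch: on a finite support with a fixed mean, the entropy-maximizing distribution takes a Gibbs form, and the Lagrange multiplier can be found by bisection. The support, target mean, and iteration count below are illustrative, not from the paper.

```python
import math

def maxent_mean(xs, target_mean):
    """Maximum-entropy distribution on support xs with a fixed mean.
    The solution has the Gibbs form p_i ∝ exp(-lam * x_i); the Lagrange
    multiplier lam is found by bisection on the implied mean."""
    def mean(lam):
        ws = [math.exp(-lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    lo, hi = -50.0, 50.0           # mean(lam) is decreasing in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(-lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

# With the mean fixed at the midpoint of a uniform support, the
# maximum-entropy answer is the uniform distribution itself.
p = maxent_mean([1, 2, 3, 4], 2.5)
```

Swapping in ecological constraints (total abundance, total metabolic rate) in place of the toy mean constraint is, schematically, how METE builds its predicted distributions.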
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
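For orientation, the interference-free, single-commodity special case of maximum throughput is classical max-flow, computable with Edmonds-Karp; the paper's coded-multiflow linear program generalizes this baseline. The toy graph below is illustrative, not from the paper.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow. `cap` is a dict-of-dicts of residual
    capacities; returns the maximum s->t throughput. This is the
    interference-free single-commodity baseline, not the coded MMF LP."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, find its bottleneck, and push flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + bottleneck
        flow += bottleneck

# toy 4-node network: two disjoint s->t routes of capacity 3 and 2
g = {'s': {'a': 3, 'b': 2}, 'a': {'t': 3}, 'b': {'t': 2}}
f = max_flow(g, 's', 't')   # -> 5
```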
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
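The two off-line heuristics named above are easy to sketch: both run First-Fit, differing only in the pre-sort order, and sorting in increasing order is exactly what drives the bin count up in the maximum resource variant. Capacity and item sizes below are illustrative.

```python
def first_fit(items, capacity=1.0):
    """First-Fit packing: place each item in the first bin with room,
    opening a new bin otherwise. Pre-sorting the items in increasing
    (resp. decreasing) order gives First-Fit-Increasing (resp.
    First-Fit-Decreasing)."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= capacity + 1e-9:   # tolerance for float sums
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]
few = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing
many = first_fit(sorted(items))                # First-Fit-Increasing
```

On this instance First-Fit-Decreasing packs everything into 2 bins, while First-Fit-Increasing fills early bins with small items and is forced to open 3, illustrating why the increasing order is the natural heuristic when the objective is to maximize bins used.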
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Sithambaram, Shanthakumar; Son, Young-Chan; Suib, Steven L.
2008-04-08
A method for forming an imine comprises reacting a first reactant comprising a hydroxyl functionality, a carbonyl functionality, or both a hydroxyl functionality and a carbonyl functionality with a second reactant having an amine functionality in the presence of ordered porous manganese-based octahedral molecular sieves and an oxygen containing gas at a temperature and for a time sufficient for the imine to be produced.
Ishida, Hiroshi; Hirose, Ryohei; Watanabe, Susumu
2012-10-01
The abdominal drawing-in maneuver (ADIM) is commonly used as a fundamental component of lumbar stabilization training programs. One potential limitation of lumbar stabilization programs is that it can be difficult and time-consuming to train people to perform the ADIM. The transverse abdominis (TrA), internal oblique (IO), and external oblique (EO) muscles are the most powerful muscles involved in expiration. However, little is known about the differences in the recruitment of the abdominal muscles between the ADIM and breath held at maximum expiratory level (maximum expiration). The thickness of the TrA and IO muscles was measured by ultrasound imaging, and the activity of the EO muscle was measured by electromyography (EMG) in 33 healthy males performing the ADIM and maximum expiration. Maximum expiration produced a significant increase in the thickness of the TrA and IO muscles compared with the ADIM, and EMG activity of the EO muscle was significantly higher during maximum expiration than during the ADIM; EO activity was approximately 30% of the maximal voluntary contraction during maximum expiration. Thus, maximum expiration may be an effective method for training co-activation of the lateral abdominal muscles.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic, and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of various topologies for MPPT is given. Selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
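The MPP search itself (as opposed to the converter analysis in the paper) is commonly done by perturb & observe, stepping the operating point and watching the power response. A minimal sketch against an assumed single-peak P-V model; all parameter values are illustrative, not from the paper.

```python
import math

def pv_power(v, v_oc=20.0, i_sc=5.0):
    """Toy single-peak PV model: current rolls off exponentially near
    the open-circuit voltage. Parameter values are illustrative."""
    i = i_sc * (1.0 - math.exp((v - v_oc) / 2.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=5.0, step=0.05, iters=2000):
    """Perturb & observe MPPT: keep stepping the operating voltage in
    the direction that last increased power; reverse on a decrease."""
    v, p, direction = v0, pv_power(v0), 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:
            direction = -direction      # stepped past the peak: reverse
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe()    # settles in a small oscillation
                                        # around the knee of the curve
```

In a real converter the perturbed quantity is the duty ratio rather than the voltage directly, which is where the load-value and duty-ratio conditions derived in the paper come in.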
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
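As a rough plausibility check, the quoted scaling can be evaluated numerically. The input values below (T_BBN ~ 1 MeV, the Planck mass, and the electron Yukawa from the electron mass and the observed Higgs vev) are standard estimates assumed here for illustration; they are not given in the abstract.

```python
import math

# Rough numerical check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
T_BBN = 1e-3                         # GeV; BBN onset temperature (~1 MeV)
M_pl = 1.22e19                       # GeV; Planck mass
m_e, v_obs = 5.11e-4, 246.0          # GeV; electron mass, observed vev
y_e = math.sqrt(2) * m_e / v_obs     # electron Yukawa, ~2.9e-6

v_h = T_BBN**2 / (M_pl * y_e**5)     # comes out at a few hundred GeV
```

The result lands in the O(300 GeV) ballpark the paper quotes, consistent with the observed weak scale to within the accuracy of such an order-of-magnitude relation.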
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Fenicia, Lucia; Fach, Patrick; van Rotterdam, Bart J; Anniballi, Fabrizio; Segerman, Bo; Auricchio, Bruna; Delibato, Elisabetta; Hamidjaja, Raditijo A; Wielinga, Peter R; Woudstra, Cedric; Agren, Joakim; De Medici, Dario; Knutsson, Rickard
2011-03-01
A real-time PCR method for detection and typing of BoNT-producing Clostridia types A, B, E, and F was developed on the framework of the European Research Project "Biotracer". A primary evaluation was carried out using 104 strains and 17 clinical and food samples linked to botulism cases. Results showed 100% relative accuracy, 100% relative sensitivity, 100% relative specificity, and 100% selectivity (inclusivity on 73 strains and exclusivity on 31 strains) of the real-time PCR against the reference cultural method combined with the standard mouse bioassay. Furthermore, a ring trial study performed at four different European laboratories in Italy, France, the Netherlands, and Sweden was carried out using 47 strains, and 30 clinical and food samples linked to botulism cases. Results showed a concordance of 95.7% among the four laboratories. The reproducibility generated a relative standard deviation in the range of 2.18% to 13.61%. Considering the high level of agreement achieved between the laboratories, this real-time PCR is a suitable method for rapid detection and typing of BoNT-producing Clostridia in clinical, food and environmental samples and thus support the use of it as an international standard method.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...
Kaempfer, B.; Kotte, R.; Moesner, J.; Neubert, W.; Wohlfarth, D. (Research Center Rossendorf, Dresden (Germany), Inst. of Nuclear and Hadron Physics)
1994-01-01
Velocity correlations of intermediate mass fragments (IMFs) with Z ≥ 3, produced in central and semi-central collisions of Au+Au at 100, 150, 250 and 400 A MeV beam energy, are extracted from measurements with the FOPI (phase I) detector system at SIS in GSI Darmstadt. The comparison of the data with a Coulomb-dominated final-state interaction model points to a time scale of τ ≈ 25 fm/c or less for emitting IMFs from a radially expanding and fast-multifragmenting source with radius R ≈ 14 fm.
Maximum-entropy closure of hydrodynamic moment hierarchies including correlations.
Hughes, Keith H; Burghardt, Irene
2012-06-07
Generalized hydrodynamic moment hierarchies are derived which explicitly include nonequilibrium two-particle and higher-order correlations. The approach is adapted to strongly correlated media and nonequilibrium processes on short time scales which necessitate an explicit treatment of time-evolving correlations. Closure conditions for the extended moment hierarchies are formulated by a maximum-entropy approach, generalizing related closure procedures for kinetic equations. A self-consistent set of nonperturbative dynamical equations are thus obtained for a chosen set of single-particle and two-particle (and possibly higher-order) moments. Analytical results are derived for generalized Gaussian closures including the dynamic pair distribution function and a two-particle correction to the current density. The maximum-entropy closure conditions are found to involve the Kirkwood superposition approximation.
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with Standard BackPropagation (SBP) on the XOR problem.
A probabilistic approach to the concept of Probable Maximum Precipitation
Papalexiou, S. M.; D. Koutsoyiannis
2006-01-01
The concept of Probable Maximum Precipitation (PMP) is based on the assumptions that (a) there exists an upper physical limit of the precipitation depth over a given area at a particular geographical location at a certain time of year, and (b) that this limit can be estimated based on deterministic considerations. The most representative and widespread estimation method of PMP is the so-called moisture maximization method. This method maximizes observed storms assuming...
Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels
2015-01-01
The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We...
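The MLSD core of such receivers is the Viterbi algorithm. A toy sketch for a 2-tap intersymbol-interference channel with BPSK symbols follows; the taps and test sequence are illustrative, and the paper's ST-WMF front end and state reduction are not modeled.

```python
def viterbi_mlsd(received, taps=(1.0, 0.5)):
    """Toy MLSD via the Viterbi algorithm for a 2-tap ISI channel
    y_k = h0*s_k + h1*s_{k-1} (+ noise), BPSK symbols s in {-1, +1}.
    The trellis state is the previous symbol; the path metric is the
    accumulated squared Euclidean distance."""
    h0, h1 = taps
    states = (-1, 1)
    # survivors[state] = (best metric, best symbol path ending in state)
    survivors = {s: (0.0, []) for s in states}
    for y in received:
        new_survivors = {}
        for s in states:                    # candidate current symbol
            best = None
            for prev in states:             # trellis state to extend
                metric, path = survivors[prev]
                metric += (y - (h0 * s + h1 * prev)) ** 2
                if best is None or metric < best[0]:
                    best = (metric, path + [s])
            new_survivors[s] = best
        survivors = new_survivors
    return min(survivors.values(), key=lambda t: t[0])[1]

# noiseless received samples for the symbol sequence 1, -1, -1, 1
# (with an assumed initial symbol of +1)
detected = viterbi_mlsd([1.5, -0.5, -1.5, 0.5])   # -> [1, -1, -1, 1]
```

The number of trellis states grows exponentially with the channel memory, which is exactly the cost the ST-WMF stage is designed to shrink.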
Luciane M. Steffen
2004-08-01
Vocal fold paralysis (VFP) results from injury to the vagus nerve or one of its branches and may impair functions that require glottic closure. Maximum phonation time (MPT) is a test routinely applied to dysphonic patients to assess glottic efficiency and is frequently used in cases of VFP, in which its values are decreased. The classical clinical classification of the paralyzed vocal fold position as median, paramedian, intermediate, and abducted (cadaveric) has been a subject of controversy. OBJECTIVE: To verify the association and correlation between MPT and the position of the paralyzed vocal fold (PVF), and between MPT and the angle of deviation of the PVF; to measure the angle of deviation from the midline for the different PVF positions; and to correlate it with the clinical classification. STUDY DESIGN: Retrospective clinical study. MATERIAL AND METHOD: The records of 86 subjects with unilateral vocal fold paralysis were reviewed, their videoendoscopic examinations were analyzed, and the angle of deviation of the PVF was measured with a computer program. RESULTS: The association and correlation between MPT and each position assumed by the PVF was statistically significant only for /z/ in the median position. The association and correlation between MPT and the angle of deviation of the PVF held for /i/ and /u/. When associating and correlating angle measurements with position, statistical significance was observed for the abducted position. CONCLUSIONS: In this study it was not possible to determine the positions assumed by the PVF from the MPT, nor to correlate them with angle measurements.
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
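The parsimony cost that such searches minimize can be evaluated for a fixed topology with Fitch's classic small-parsimony algorithm. A minimal sketch for one character on a toy rooted binary tree follows; this illustrates the scoring criterion only, not PTree's pattern-based search.

```python
def fitch(tree):
    """Fitch's small-parsimony count for one character on a rooted
    binary tree. A leaf is a 1-letter state string; an internal node is
    a (left, right) pair. Returns (candidate state set, mutation count)."""
    if isinstance(tree, str):
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:                        # children agree: no extra mutation
        return inter, lc + rc
    return ls | rs, lc + rc + 1      # children disagree: one mutation

# ((A,A),(A,G)) needs exactly one substitution under parsimony
states, cost = fitch((("A", "A"), ("A", "G")))
```

A full parsimony score sums this count over all alignment columns; the hard part, which PTree addresses, is searching the space of topologies rather than scoring one of them.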
MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW
I. Elzein
2015-01-01
In recent years there has been growing attention towards the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic systems (PhV) suffer from high equipment cost and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PhV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but can be located either through calculation models or by search algorithms. Therefore, MPPT techniques are important to maintain the PV array's high efficiency. Many different techniques for MPPT are discussed. This review paper hopefully will serve as a convenient tool for future work in PhV power conversion.
Mroczka Janusz
2014-12-01
Photovoltaic panels have a non-linear current-voltage characteristic and produce maximum power at only one point, called the maximum power point. Under uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. For an irregularly illuminated photovoltaic panel, many local maxima on the power-voltage curve can be observed, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions. Then an appropriate strategy for tracking the maximum power point is chosen using a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.
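The need for a decision strategy can be seen on a two-peak curve: a local hill-climber can lock onto the wrong maximum, whereas a coarse global sweep followed by local refinement finds the global MPP. The curve shape and voltages below are illustrative stand-ins, not the authors' panel model.

```python
import math

def pv_power_shaded(v):
    """Illustrative two-peak P-V curve of a partially shaded string
    (a sum of two bumps; not a physical panel model)."""
    return (40.0 * math.exp(-((v - 8.0) / 3.0) ** 2)
            + 55.0 * math.exp(-((v - 17.0) / 2.5) ** 2))

def global_mpp(p, v_max=25.0, coarse=0.5, fine=0.01):
    """Coarse sweep to bracket the global peak, then a fine local scan:
    a simple stand-in for a 'decision algorithm' strategy."""
    n = int(v_max / coarse)
    v0 = max((i * coarse for i in range(n + 1)), key=p)
    lo, hi = max(v0 - coarse, 0.0), min(v0 + coarse, v_max)
    m = int((hi - lo) / fine)
    return max((lo + i * fine for i in range(m + 1)), key=p)

v_star = global_mpp(pv_power_shaded)   # lands near the higher peak (~17 V)
```

A pure hill-climber started near 8 V would stop at the lower local peak; the coarse sweep is what buys global convergence, at the cost of briefly operating away from the MPP.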
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound on the item sizes, in terms of some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
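The single-constraint construction described above can be demonstrated numerically: maximizing the Shannon entropy on a finite support subject only to a fixed average of the logarithm yields a power law p(x) ∝ x^(-alpha), with alpha the Lagrange multiplier. The support size and target value below are illustrative.

```python
import math

def maxent_log_constraint(n, target_log_mean):
    """Maximum entropy on support {1..n} with a fixed mean of ln(x).
    The solution is a power law p(k) ∝ k**(-alpha); the exponent alpha
    is the Lagrange multiplier, found here by bisection."""
    def log_mean(alpha):
        ws = [k ** (-alpha) for k in range(1, n + 1)]
        z = sum(ws)
        return sum(w * math.log(k) for w, k in zip(ws, range(1, n + 1))) / z
    lo, hi = -10.0, 10.0            # log_mean decreases as alpha grows
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if log_mean(mid) > target_log_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A uniform distribution (alpha = 0) on {1..1000} has a log-mean near
# ln(1000) - 1; demanding a smaller log-mean forces a decaying power
# law with alpha > 0.
alpha = maxent_log_constraint(1000, 1.0)
```

This is exactly Visser's point in miniature: the power-law form drops out of the ⟨ln x⟩ constraint alone, with no elaborate cost function required.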
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Önnberg, Anna; Söderquist, Bo; Persson, Katarina; Mölling, Paula
2014-11-01
The emergence of extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae is a major global concern. CTX-M is the dominating ESBL type worldwide, and CTX-M-15 is the most widespread CTX-M type. The dissemination of CTX-M appears to be due in part to global spread of the Escherichia coli clone O25b-ST131. However, the gene encoding CTX-M is mainly located on mobile genetic elements, such as plasmids, that also promote the horizontal dissemination of the CTX-M genes. In this study, 152 CTX-M-producing E. coli isolated in 1999-2008 in Örebro County, Sweden, were typed using a commercial repetitive sequence-based PCR (the DiversiLab system), and the prevalence of ST131 was investigated by pabB PCR. Real-time PCR-based plasmid replicon typing was performed on 82 CTX-M-15-producing E. coli isolates. In general, the CTX-M-producing E. coli population was genetically diverse; however, ST131 was highly prevalent (27%) and was the dominating clone in our area. The blaCTX-M-15 gene was mainly located on IncF plasmids (69%), but a relatively high proportion of IncI1 plasmids (29%) was also detected among E. coli with diverse rep-PCR patterns, indicating that horizontal transmission of IncI1 plasmids carrying blaCTX-M-15 may have occurred between different E. coli strains.
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this on Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Maximum Safety Regenerative Power Tracking for DC Traction Power Systems
Guifu Du
2017-02-01
Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential known as "rail potential" is generated between the rails and ground. Abnormal rises in rail potential occur on many railway lines during operation. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed to control the maximum absolute rail potential and the energy consumption of DC traction power systems during operation. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and the energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively while guaranteeing safety in the energy saving of DC traction power systems.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
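The Estrada index in the abstract is straightforward to evaluate numerically; a minimal sketch (the 4-cycle below is an illustrative choice, not a graph from the paper):

```python
import numpy as np

def estrada_index(adj):
    """EE(G) = sum_i exp(lambda_i), over the adjacency eigenvalues."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.exp(eigvals).sum())

# 4-cycle C4: adjacency eigenvalues are 2, 0, 0, -2
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(estrada_index(C4))  # e^2 + 2 + e^-2, about 9.5244
```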
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Reconstruction of the glacial maximum recorded in the central Cantabrian Mountains (N Iberia)
Rodríguez-Rodríguez, Laura; Jiménez-Sánchez, Montserrat; José Domínguez-Cuesta, María
2014-05-01
The Cantabrian Mountains are a coastal range up to 2648 m in altitude trending parallel to the northern edge of the Iberian Peninsula, at a maximum distance of 100 km inland (~43°N 5°W). Glacial sediments and landforms are generally well preserved at altitudes above 1600 m, evidencing the occurrence of former glaciations. Previous research supports a regional glacial maximum prior to ca 38 cal ka BP and an advanced state of deglaciation by the time of the global Last Glacial Maximum (Jiménez-Sánchez et al., 2013). A geomorphological database has been produced in ArcGIS (1:25,000 scale) for an area of about 800 km2 that partially covers the Redes Natural Reservation and Picos de Europa Regional Park. A reconstruction of the ice extent and flow pattern of the former glaciers is presented for this area, showing that an ice field developed over the study area during the local glacial maximum. The maximum length of the ice tongues that drained this ice field was remarkably asymmetric between the two slopes, from 1 to 6 km long on the northern slope and up to 19 km long on the southern one. The altitude difference between the glacier fronts of the two mountain slopes was ca 100 m. This asymmetry of the ice tongues is related to geologic and topo-climatic factors. Jiménez-Sánchez, M., Rodríguez-Rodríguez, L., García-Ruiz, J.M., Domínguez-Cuesta, M.J., Farias, P., Valero-Garcés, B., Moreno, A., Rico, M., Valcárcel, M., 2013. A review of glacial geomorphology and chronology in northern Spain: timing and regional variability during the last glacial cycle. Geomorphology 196, 50-64. Research funded by the CANDELA project (MINECO-CGL2012-31938). L. Rodríguez-Rodríguez is a PhD student with a grant from the Spanish national FPU Program (MECD).
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.
2017-06-01
This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
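The maximum entropy perturbation the abstract refers to can be illustrated with a toy reweighting: given simulation samples, find the minimally perturbed weights w_i ∝ exp(-λ x_i) that reproduce an experimental average. All names and numbers here are illustrative assumptions, not taken from the papers discussed.

```python
import math

def maxent_reweight(samples, target, lam_lo=-50.0, lam_hi=50.0, tol=1e-12):
    """Bisect for the Lagrange multiplier lam such that weights
    w_i proportional to exp(-lam * x_i) reproduce the target mean of x."""
    def weighted_mean(lam):
        w = [math.exp(-lam * x) for x in samples]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, samples)) / z
    lo, hi = lam_lo, lam_hi
    # the reweighted mean is strictly decreasing in lam (its derivative
    # is minus the reweighted variance), so plain bisection converges
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if weighted_mean(mid) > target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in samples]
    z = sum(w)
    return [wi / z for wi in w]

# pull the sample mean of [1, 2, 3, 4] from 2.5 down to 2.0
weights = maxent_reweight([1.0, 2.0, 3.0, 4.0], target=2.0)
```

Among all distributions matching the target average, the exponential tilt is the one of minimum relative entropy to the uniform prior, which is exactly the "minimal perturbation" rationale of the maximum entropy approach.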
A maximum feasible subset algorithm with application to radiation therapy
Sadegh, Payman
1999-01-01
Consider a set of linear one-sided or two-sided inequality constraints on a real vector X. The problem of interest is selection of X so as to maximize the number of constraints that are simultaneously satisfied, or equivalently, combinatorial selection of a maximum cardinality subset of feasible inequalities. Special classes of this problem are of interest in a variety of areas such as pattern recognition, machine learning, operations research, and medical treatment planning. This problem is generally solvable in exponential time. A heuristic polynomial time algorithm is presented in this paper...
Maximum likelihood Jukes-Cantor triplets: analytic solutions.
Chor, Benny; Hendy, Michael D; Snir, Sagi
2006-03-01
Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa, from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in a few cases are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood function and find the maximum points analytically. This approach has so far only been successful in the very simplest cases, of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in a computer algebra package (Maple), we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodents divergence time.
Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs
Italiano, Giuseppe F
2010-01-01
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent int...
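The O(n log log n) planar algorithms in the abstract rest on machinery well beyond a short sketch, but the classical baseline they improve on can be shown. Below is a generic Edmonds-Karp max-flow (whose value equals the s-t min cut by duality), with an undirected edge modeled as two opposite arcs of equal capacity; the example network is an illustrative assumption, not from the paper.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Generic max-flow in O(V * E^2). capacity[u][v] is the capacity
    of arc u -> v; model an undirected edge as two opposite arcs."""
    residual = {u: dict(arcs) for u, arcs in capacity.items()}
    for u in capacity:                     # ensure reverse arcs exist
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:   # BFS: shortest augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total                   # no augmenting path: done
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                  # push flow along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        total += bottleneck

# a small planar example network
cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2},
       'b': {'t': 3}, 't': {}}
print(edmonds_karp(cap, 's', 't'))  # → 5
```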
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been widely studied. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
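The Poisson maximum likelihood idea behind CORA can be sketched in a few lines: score candidate line amplitudes by the Poisson log-likelihood of the observed counts and keep the best. The Gaussian profile, flat background, and grid search below are illustrative assumptions, not CORA's actual model or interface.

```python
import numpy as np

def poisson_loglike(counts, expected):
    """Poisson log-likelihood up to the data-only term -sum(ln c_i!)."""
    return float(np.sum(counts * np.log(expected) - expected))

# illustrative line model: fixed Gaussian profile on a flat background
x = np.linspace(-5.0, 5.0, 101)
profile = np.exp(-0.5 * x ** 2)

def model(amplitude, background=0.5):
    return amplitude * profile + background

counts = model(3.0)                 # noiseless "data" for the sketch
grid = np.linspace(0.1, 10.0, 991)  # candidate line amplitudes
best = grid[np.argmax([poisson_loglike(counts, model(a)) for a in grid])]
```

Because the Poisson log-likelihood is concave in the amplitude for this linear model, the grid maximum sits at the amplitude that generated the data; with real noisy counts one would optimize the same criterion rather than least squares.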
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit, a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
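The tail statement of Zipf's law, that the count of firms larger than S scales as 1/S, is easy to check numerically. The sketch below (an illustration, not the authors' growth model) samples sizes from a Pareto distribution with unit tail exponent via inverse-transform sampling:

```python
import random

random.seed(42)

# Zipf's law: the number of firms with size greater than S goes as 1/S.
# A Pareto distribution with tail exponent 1 has P(size > S) = S_min / S,
# sampled by S = S_min / U with U uniform on (0, 1].
S_min = 1.0
sizes = [S_min / random.random() for _ in range(200_000)]

def count_above(threshold):
    return sum(s > threshold for s in sizes)

# Doubling the size threshold should roughly halve the number of firms.
for S in (10.0, 20.0, 40.0):
    print(S, count_above(S))

ratio = count_above(10.0) / count_above(20.0)
print(round(ratio, 2))  # close to 2
```

The paper's contribution is the balance condition under which Gibrat-style proportional growth produces exactly this exponent; the snippet only illustrates the distributional claim itself.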
Kinematic analysis of sprinting pickup acceleration versus maximum sprinting speed
S. MANZER
2016-10-01
Pickup acceleration and maximum sprinting speed are two essential phases of the 100-m sprint, with differing sprinting speed, step length, frequency and technique. The aim of the study was to describe and compare the kinematic parameters of both sprint variants. Hypothetically, it was assumed that differences would be found in sprinting speed, step length, flight and contact times as well as between the body angles of different key positions. From 8 female and 8 male (N=16) track and field junior athletes, a double stride of both sprint variants was filmed (200 Hz) from a sagittal position and the 10-m sprint time was measured using triple light barriers. Kinematic data for sprinting speed and angles of knee, hip and ankle were compared with an analysis of variance with repeated measures. The sprinting speed was 7.7 m/s and 8.0 m/s (female) and 8.4 m/s and 9.2 m/s (male), with significantly higher values of step length and flight time and shorter ground contact time during maximum sprinting speed. Because of the longer flight time, it is possible to place the foot closer to the body but with a more extended knee on the ground. These characteristics can be used as orientation for technique training.
Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons
Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J.; Mitter, Sanjoy K.
2014-10-01
We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state and operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamics arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.
Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs
Desjardins, Guillaume; Bengio, Yoshua
2010-01-01
Restricted Boltzmann Machines (RBMs) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing the average return time (a concept first proposed by [Katzgraber et al., 2006]), while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show on a synthetic dataset that this results in better likelihood ...
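The parallel tempering ingredient of SML-PT can be sketched in isolation. The toy below runs tempered Metropolis chains on a double-well energy (not an RBM, and not the paper's adaptive scheme); the temperature ladder is hand-picked here, which is exactly the tuning burden the paper proposes to automate:

```python
import math
import random

random.seed(0)

def energy(x):
    # Double-well potential with a barrier of height 8 at x = 0.
    return 8.0 * (x * x - 1.0) ** 2

betas = [0.1, 0.3, 1.0, 2.5, 5.0]   # hand-picked inverse-temperature ladder
xs = [1.0] * len(betas)             # all chains start in the right well
cold_samples = []

for _ in range(4000):
    # Metropolis update within each temperature.
    for i, beta in enumerate(betas):
        prop = xs[i] + random.gauss(0.0, 0.5)
        dE = energy(prop) - energy(xs[i])
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            xs[i] = prop
    # Propose state swaps between neighbouring temperatures,
    # accepted with probability min(1, exp((b_i - b_j)(E_i - E_j))).
    for i in range(len(betas) - 1):
        d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if d >= 0 or random.random() < math.exp(d):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    cold_samples.append(xs[-1])     # samples from the coldest (beta = 5) chain

# Without tempering the cold chain stays trapped in the starting well;
# with swaps it visits both modes.
print(min(cold_samples) < -0.5 and max(cold_samples) > 0.5)
```

The swap acceptance rule is the standard replica-exchange criterion; the paper's contribution (adaptive temperatures chosen by minimizing average return time) is not reproduced here.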
Prediction of Double Layer Grids' Maximum Deflection Using Neural Networks
Reza K. Moghadas
2008-01-01
Efficient neural network models are trained to predict the maximum deflection of two-way on two-way grids with variable geometrical parameters (span and height) as well as cross-sectional areas of the element groups. Backpropagation (BP) and Radial Basis Function (RBF) neural networks are employed for this purpose. The inputs of the neural networks are the length of the spans, L, the height, h, and the cross-sectional areas of all groups, A; the outputs are the maximum deflections of the corresponding double layer grids. The numerical results indicate that the RBF neural network is better than BP in terms of training time and generality of performance.
Space weather Preparing for the Maximum of the Solar Cycle
Shaltout, Mosalam
The Space Environments Group is preparing for the maximum of solar cycle 24, when the current plan envisages that the second national Earth research satellite, EgyptSat2, will be launched in 2012. Forecasting the solar activity for 2012 is therefore very important. The plan depends on long-term prediction using the Ottawa 10.7 cm flux data (1947-2008) and applying the fast Fourier transform (FFT) to this time series, as well as on artificial intelligence, predicting the maximum activity by fuzzy modeling. Short-term prediction of coronal mass ejections (CMEs) uses observations from the STEREO satellites, besides other satellites such as SOHO, Hinode, SDO, Solar Orbiter, Sentinels and Solar Probe, in collaboration with the Paris Observatory in Meudon, France.
Probabilistic maximum-value wind prediction for offshore environments
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
We use statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop the probabilistic forecasts of the maximum wind speed, and probabilistic forecasts result in greater value to the end-user. The models outperform traditional baseline forecast methods and achieve low predictive errors on the order of 1–2 m s−1. We show the results of their predictive accuracy for different lead times and different training methodologies.
Ziheng YANG
2004-01-01
Estimation of species divergence times is well known to be sensitive to violation of the molecular clock assumption (rate constancy over time). However, the molecular clock is almost always violated in comparisons of distantly related species, such as different orders of mammals. Thus it is important to take into account different rates among lineages when divergence times are estimated. The maximum likelihood method provides a framework for accommodating rate variation and can naturally accommodate heterogeneous datasets from multiple loci and fossil calibrations at multiple nodes. Previous implementations of the likelihood method require the researcher to assign branches to different rate classes. In this paper, I implement a heuristic rate-smoothing algorithm (the AHRS algorithm) to automate the assignment of branches to rate groups. The method combines features of previous likelihood, Bayesian and rate-smoothing methods. The likelihood algorithm is also improved to accommodate missing sequences at some loci in the combined analysis. The new method is applied to estimate the divergence times of the mouse lemurs of Madagascar, and the results are compared with those of previous likelihood and Bayesian analyses [Acta Zoologica Sinica 50(4): 645-656, 2004].
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unité EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d'Ornon Cedex (France)]
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge the thermodynamic hardness is proportional to T^{-1}(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Caroline Dellinghausen Borges
2004-09-01
Biopolymers are microbial polysaccharides. The biopolymer produced by Beijerinckia sp. 7070 has pseudoplastic behaviour and shows high viscosity at low deformation rates, giving the polymer excellent suspension characteristics. The objective of this work was to characterize the biopolymer produced at different culture times by Beijerinckia sp. 7070 with respect to total production, production of long- and short-fiber polymers, productivity, viscosity and chemical composition. The polymers produced in liquid YM medium were recovered at different culture times, dried and weighed to determine production and productivity. The type of fiber produced during cultivation was evaluated microscopically. Apparent viscosities of a 1% aqueous solution were determined at 6, 12, 30 and 60 rpm, at 25 °C, in a Brookfield viscometer. The composition of the biopolymer was determined by comparative thin-layer chromatography. The highest total productions were found at 30 and 72 h, the highest productivity at 48 h and the highest viscosity at 72 h. The long-fiber polymers showed a tendency to become longer with time. The viscosity of the long-fiber polymer was higher than that of the short-fiber one. All biopolymers presented the same components (glucose, galactose, fucose and glucuronic acid) but at different concentrations.
Siqueira Rodrigues, André Valentim
2006-10-01
The effects of the stimulant caffeine on sport performance have been widely investigated. Maximal oxygen uptake (VO2 max) has been used in recent research aiming to elucidate the mechanisms of caffeine during maximal effort. As a physiological measure to evaluate the effect of caffeine during effort and after it (recovery), plasma lactate is presented in many studies. In this context, the present study aimed to investigate the physiological changes produced by caffeine: VO2 max on an ergometric device (speed and grade on a treadmill), plasma lactate (L) and modification of cognitive and motor performance (Reaction Time Test - RTT). Five apparently healthy volunteers (26 ± 5 years; 67 ± 12.5 kg) were submitted twice to the following routine: plasma lactate at rest (L0), reaction time test at rest, RTT (R), maximum effort test on a treadmill, plasma lactate concentrations at minutes 1 (L1), 2 (L2) and 4 (L3) after effort, and RTT (1). They were given either one placebo capsule (400 mg corn starch) or caffeine (3 mg/kg of body weight). Two-way ANOVA with repetition was used to compare variables at the placebo (P) and caffeine (C) moments. The caffeine moment presented a non-significant reduction in RTT, a non-significant increase in plasma lactate and a non-significant modification in VO2 max when compared to the placebo moment. Thus, one can conclude that 3 mg/kg of body weight of caffeine, with 12 h of abstinence, had non-significant effects on maximal oxygen uptake, plasma lactate and simple reaction time.
Tayarani-Binazir, Kayhan A; Jackson, Michael J; Fisher, Ria; Zoubiane, Ghada; Rose, Sarah; Jenner, Peter
2010-06-10
Dopa decarboxylase inhibitors are routinely used to potentiate the effects of L-DOPA in the treatment of Parkinson's disease. However, neither in clinical use nor in experimental models of Parkinson's disease have the timing and dose of dopa decarboxylase inhibitors been thoroughly explored. We now report on the choice of dopa decarboxylase inhibitors, dose and the time of dosing relationships of carbidopa, benserazide and L-alpha-methyl dopa (L-AMD) in potentiating the effects of L-DOPA in the 1-methyl-4-phenyl-1,2,3,6 tetrahydropyridine (MPTP)-treated common marmoset. Pre-treatment with benserazide for up to 3h did not alter the motor response to L-DOPA compared to simultaneous administration with L-DOPA. There was some evidence of a relationship between carbidopa and benserazide dose and increased locomotor activity and the reversal of motor disability. But in general, commonly used dose levels of dopa decarboxylase inhibitors appeared to produce a maximal motor response to L-DOPA. In contrast, dyskinesia intensity and duration continued to increase with both carbidopa and benserazide dose. The novel dopa decarboxylase inhibitor, L-AMD, increased locomotor activity and improved motor disability to the same extent as carbidopa or benserazide but importantly this was accompanied by significantly less dyskinesia. This study shows that currently, dopa decarboxylase inhibitors may be routinely employed in the MPTP-treated primate at doses which are higher than those necessary to produce a maximal potentiation of the anti-parkinsonian effect of L-DOPA. This may lead to excessive expression of dyskinesia in this model of Parkinson's disease and attention should be given to the dose regimens currently employed.
Young, Bruce Kai Fong
1988-09-01
The determination of level populations and detailed population mechanisms in dense plasmas has become an increasingly important problem in atomic physics. In this work, the density variation of line intensities and level populations in aluminum K-shell and molybdenum and silver L-shell emission spectra has been measured from high-powered, laser-produced plasmas. For each case, the density dependence of the observed line emission is due to the effect of high frequency electron-ion collisions on metastable levels. The density dependent line intensities vary greatly in laser-produced plasmas and can be used to extract detailed information concerning the population kinetics and level populations of the ions. The laser plasmas had to be fully characterized in order to clearly compare the observed density dependence with atomic theory predictions. This has been achieved through the combined use of new diagnostic instruments and microdot targets which provided simultaneously space-, time-, and spectrally resolved data. The plasma temperatures were determined from the slope of the hydrogen-like recombination continuum. The time-resolved electron density profiles were measured using multiple frame holographic interferometry. Thus, the density dependence of K-shell spectral lines could be clearly examined, independent of assumptions concerning the dynamics of the plasma. In aluminum, the electron density dependence of various helium-like line intensity ratios was measured. Standard collisional radiative equilibrium models fail to account for the observed density dependence measured for the He-alpha/IC ratio. Instead, a quasi-steady state atomic model based on a purely recombining plasma is shown to accurately predict the measured density dependence. This same recombining plasma calculation successfully models the density dependence of the high-n He-gamma/He-beta and He-delta
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Orr, John L.
1997-01-01
In many ways, the typical approach to the handling of bibliographic material for generating review articles and similar manuscripts has changed little since the use of xerographic reproduction has become widespread. The basic approach is to collect reprints of the relevant material and place it in folders or stacks based on its dominant content. As the amount of information available increases with the passage of time, the viability of this mechanical approach to bibliographic management decreases. The personal computer revolution has changed the way we deal with many familiar tasks. For example, word processing on personal computers has supplanted the typewriter for many applications. Similarly, spreadsheets have not only replaced many routine uses of calculators but have also made possible new applications because the cost of calculation is extremely low. Objective The objective of this research was to use personal computer bibliographic software technology to support the determination of spacecraft maximum acceptable concentration (SMAC) values. Specific Aims The specific aims were to produce draft SMAC documents for hydrogen sulfide and tetrachloroethylene taking maximum advantage of the bibliographic software.
A maximum likelihood approach to estimating articulator positions from speech acoustics
Hogden, J.
1996-09-23
This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.
Comparison between two strength-training systems on the maximum muscular strength performance
W. Materko
2010-01-01
The goal of the present study was to compare traditional (ST) and pyramid (SP) strength training systems over eight weeks on maximum muscular strength performance. Eighteen men experienced in strength training were divided into two groups of nine volunteers. Four times a week, the ST group trained in three sets of eight repetitions (75% of 1RM) and the SP group in three sets of 10, eight and six repetitions (70%, 75% and 80% of 1RM, respectively). All subjects were submitted to an anthropometric evaluation, followed by a 1RM test in the bench press and squat exercises, which were repeated after eight weeks of training. The difference between the attained 1RM for each system was studied using the Mann-Whitney test, and the Wilcoxon paired test was applied to compare pre- and post-training. No significant differences were recorded between ST and SP in the bench press (125 ± 19 kg and 120 ± 17.0 kg) and squat (124 ± 18 kg and 120 ± 17 kg). Furthermore, no significant differences were found between the pre- and post-training periods. According to the present results, both training systems produced similar effects on maximum muscular strength performance.
Comparison between two strength training systems on the maximum muscular strength performance
Wollner Materko
2010-06-01
The goal of the present study was to compare traditional (ST) and pyramid (SP) strength training systems over eight weeks on maximum muscular strength performance. Eighteen men experienced in strength training were divided into two groups of nine volunteers. Four times a week, the ST group trained in three sets of eight repetitions (75% of 1RM) and the SP group in three sets of 10, eight and six repetitions (70%, 75% and 80% of 1RM, respectively). All subjects were submitted to an anthropometric evaluation, followed by a 1RM test in the bench press and squat exercises, which were repeated after eight weeks of training. The difference between the attained 1RM for each system was studied using the Mann-Whitney test, and the Wilcoxon paired test was applied to compare pre- and post-training. No significant differences were recorded between ST and SP in the bench press (125 ± 19 kg and 120 ± 17.0 kg) and squat (124 ± 18 kg and 120 ± 17 kg). Furthermore, no significant differences were found between the pre- and post-training periods. According to the present results, both training systems produced similar effects on maximum muscular strength performance.
Individual Module Maximum Power Point Tracking for a Thermoelectric Generator Systems
Vadstrup, Casper; Chen, Min; Schaltz, Erik
Thermoelectric generator (TEG) modules are often connected in a series and/or parallel system in order to match the TEG system voltage with the load voltage. However, in order to be able to control the power production of the TEG system, a DC/DC converter is inserted between the TEG system and the load. The DC/DC converter is under the control of a Maximum Power Point Tracker (MPPT) which ensures that the TEG system produces the maximum possible power to the load. However, if the conditions, e.g. temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. The result of the system MPPT is therefore the best compromise of all the TEG modules in the system. On the other hand, if each TEG module is controlled individually, each TEG module can be operated in its maximum power point and the TEG system output power will therefore be higher...
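The gain from per-module tracking can be illustrated with a Thevenin-equivalent sketch: each module is an open-circuit voltage behind an internal resistance, and a mismatched series string tracked by one MPPT delivers less than the sum of the per-module maxima (the numbers below are invented for illustration, not from the paper):

```python
# Each TEG module is modelled as an open-circuit voltage Voc (set by its
# temperature gradient) behind an internal resistance R; the maximum power
# for such a source is Voc**2 / (4*R), drawn at current Voc / (2*R).
modules = [
    {"voc": 4.0, "r": 1.0},   # hot module
    {"voc": 2.0, "r": 1.0},   # cooler module (mismatched conditions)
]

# One system-level MPPT: the series string carries a single current I, so
# P(I) = (sum Voc) * I - (sum R) * I**2, maximised at I = sum Voc / (2 * sum R).
voc_s = sum(m["voc"] for m in modules)
r_s = sum(m["r"] for m in modules)
p_system = voc_s ** 2 / (4 * r_s)

# Individual MPPT: every module runs at its own maximum power point.
p_individual = sum(m["voc"] ** 2 / (4 * m["r"]) for m in modules)

print(p_system, p_individual)  # 4.5 5.0
```

With identical modules the two strategies coincide; the individual-MPPT advantage appears only under mismatch, which is the point the abstract makes.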
Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion
Poljak, Nikola
2016-11-01
The problem of determining the angle θ at which a point mass launched from ground level with a given speed v0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v0^2/g, with g being the free-fall acceleration. Conceptually and calculationally more difficult problems have been suggested to improve student proficiency in projectile motion, with the most famous example being the Tarzan swing problem. The problem of determining the maximum distance of a point mass thrown from constant-speed circular motion is presented and analyzed in detail in this text. The calculational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing (underhand or overhand) that produce the maximum throw distance.
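The baseline ground-launch result quoted above, a maximum at θ = π/4 with range v0²/g, follows from R(θ) = v0² sin(2θ)/g and can be checked numerically (the values of v0 and g below are arbitrary choices):

```python
import math

def throw_range(theta, v0=10.0, g=9.81):
    """Range of a point mass launched from ground level at angle theta."""
    return v0 ** 2 * math.sin(2 * theta) / g

# Scan launch angles and confirm the optimum sits at theta = pi/4.
angles = [i * math.pi / 1000 for i in range(1, 500)]
best = max(angles, key=throw_range)
print(round(best, 4), round(math.pi / 4, 4))  # 0.7854 0.7854
print(round(throw_range(best), 4))            # v0^2/g = 10.1937
```

The circular-motion version analyzed in the paper adds the thrower's tangential velocity and release height, so its optimum deviates from π/4; that case is not reproduced here.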
Fernanda Vargas Ferreira
2012-04-01
PURPOSE: To check the findings on respiratory muscular strength (RMS), body posture (BP), vocal intensity (VI) and maximum phonation time (MPT) in patients with Parkinson's disease (PD) and control cases, according to gender, PD stage and level of physical activity (PA). METHODS: three men and two women with PD, between 36 and 63 years old (study cases - SC), and five subjects without neurological diseases, matched in age, gender and PA level (control cases - CC). We evaluated RMS, BP, VI and MPT. RESULTS: men: a more pronounced decrease of MPT, VI and RMS in Parkinson patients, plus postural alterations in the elderly; women with and without PD: similar postural alterations, positive relation between stage, PA level and the other measures. CONCLUSIONS: We observed impaired VI in women with PD, and deficits in MPT, VI and RMS in men with PD. We suggest further studies under an interdisciplinary bias.
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
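The MAXENT selection step can be illustrated with a minimal discrete example: maximizing entropy subject to a mean constraint yields a Gibbs-form distribution, whose Lagrange multiplier can be found by bisection. This is generic MaxEnt, not the paper's texture model; the levels and target mean are arbitrary:

```python
import math

# MAXENT over discrete levels with a fixed mean: the solution has the
# Gibbs form p_i proportional to exp(-lam * x_i), with lam chosen so the
# mean constraint holds (the constrained entropy maximizer).
levels = [0, 1, 2, 3, 4]
target_mean = 1.5

def gibbs(lam):
    w = [math.exp(-lam * x) for x in levels]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(x * pi for x, pi in zip(levels, p))

# mean(gibbs(lam)) decreases monotonically in lam, so bisection works.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean(gibbs(mid)) > target_mean:
        lo = mid
    else:
        hi = mid

p = gibbs(0.5 * (lo + hi))
print([round(pi, 4) for pi in p])
print(round(mean(p), 6))  # 1.5
```

Adding further constraints, as the abstract proposes for texture prediction, just adds multipliers to the same exponential form.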
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state - market equilibrium - is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (Bt)t≥0 and the equation of motion dXt = vt dt + 2dBt, we set St = max0≤s≤t Xs and consider the optimal control problem supv E(Sτ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying vt ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | Xt ∉ (ℓ0, ℓ1)}. The optimal control switches between μ0 and μ1 at the curve Xt = g∗(St), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and---to a lesser extent---the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250--370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
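The quantity at issue, spectral luminous efficacy K = 683 lm/W × ∫V(λ)S(λ)dλ / ∫S(λ)dλ, can be sketched numerically. The Gaussian stand-in for the CIE photopic curve V(λ) below is an assumption (the real curve is tabulated), so the result is only indicative of the 250--370 lm/W ballpark the abstract reports:

```python
import math

def photopic_V(lam_nm):
    # Gaussian stand-in for the CIE photopic luminosity function
    # (peak 1.0 at 555 nm, width ~42 nm; an assumption, not CIE data).
    return math.exp(-0.5 * ((lam_nm - 555.0) / 42.0) ** 2)

def luminous_efficacy(spectrum, lam_lo=360.0, lam_hi=830.0, n=4701):
    # K = 683 lm/W * integral(V*S) / integral(S), Riemann sum on a 0.1 nm grid.
    lams = [lam_lo + i * (lam_hi - lam_lo) / (n - 1) for i in range(n)]
    num = sum(photopic_V(l) * spectrum(l) for l in lams)
    den = sum(spectrum(l) for l in lams)
    return 683.0 * num / den

# Flat ("white") spectrum restricted to a 400-700 nm bandpass.
flat_400_700 = lambda l: 1.0 if 400.0 <= l <= 700.0 else 0.0
print(round(luminous_efficacy(flat_400_700)))
```

A narrow line at 555 nm recovers the defining 683 lm/W limit; narrowing the white bandpass raises the efficacy at the cost of color rendering, which is the trade-off the paper quantifies.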
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits a clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
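A pairwise maximum-entropy model over binary expansion/recession states is the Ising model from statistical physics, and for a small system it can be evaluated by exact enumeration. The fields h and couplings J below are illustrative placeholders, not fitted G7 values:

```python
import math
from itertools import product

# Illustrative fields (h) and pairwise couplings (J) for 3 binary
# "economies": s_i = +1 (expansion) or -1 (recession).
h = [0.1, -0.2, 0.0]
J = {(0, 1): 0.5, (0, 2): 0.3, (1, 2): -0.1}

def energy(s):
    e = -sum(h[i] * s[i] for i in range(len(s)))
    e -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

states = list(product([-1, 1], repeat=3))
Z = sum(math.exp(-energy(s)) for s in states)          # partition function
probs = {s: math.exp(-energy(s)) / Z for s in states}  # maxent distribution

# Model-implied synchronization (pairwise correlation) of economies 0 and 1.
corr01 = sum(p * s[0] * s[1] for s, p in probs.items())
print(round(corr01, 3))
```

Fitting such a model means tuning h and J so the model's means and pairwise correlations match the empirical ones; the paper's point is that for systems much larger than G7 this pairwise form stops being sufficient.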
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, although the objects of interest may be either moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Monteiro, Estêvão Rios; Škarabot, Jakob; Vigotsky, Andrew D.; Brown, Amanda Fernandes; Gomes, Thiago Matassoli; Novaes, Jefferson da Silva
2017-01-01
Background Foam rollers, or other similar devices, are a method for acutely increasing range of motion, but in contrast to static stretching, do not appear to have detrimental effects on neuromuscular performance. Purpose The purpose of this study was to investigate the effects of different volumes (60 and 120 seconds) of foam rolling of the hamstrings during the inter-set rest period on repetition performance of the knee extension exercise. Methods Twenty-five recreationally active females were recruited for the study (27.8 ± 3.6 years, 168.4 ± 7.2 cm, 69.1 ± 10.2 kg, 27.2 ± 2.1 kg/m2). Initially, subjects underwent ten-repetition maximum (10 RM) testing and retesting. Thereafter, the experiment involved three sets of knee extensions with a pre-determined 10 RM load to concentric failure with the goal of completing the maximum number of repetitions. During the inter-set rest period, either passive rest or foam rolling of different durations (60 and 120 seconds) in a randomized order was employed. Results Ninety-five percent confidence intervals revealed dose-dependent, detrimental effects, with more time spent foam rolling resulting in fewer repetitions (Cohen's d of 2.0 and 1.2 for 120 and 60 seconds, respectively, in comparison with passive rest). Conclusion The results of the present study suggest that more inter-set foam rolling applied to the antagonist muscle group is detrimental to the ability to continually produce force. The finding that inter-set foam rolling of the antagonist muscle group decreases maximum repetition performance has implications for foam rolling prescription and implementation, in both rehabilitation and athletic populations. Level of evidence 2b PMID:28217418
Identification of the feature that causes the I-band secondary maximum of a type Ia supernova
Jack, D; Hauschildt, P H
2015-01-01
We obtained a time series of spectra covering the secondary maximum in the I-band of the bright Type Ia supernova 2014J in M82 with the TIGRE telescope. Comparing the observations with theoretical models calculated with the time-dependent extension of the PHOENIX code, we identify the feature that causes the secondary maximum in the I-band light curve. Fe II 3d6(3D)4s-3d6(5D)4p and similar high-excitation transitions produce a blended feature at 7500 Å, which causes the rise of the light curve towards the secondary maximum. The series of observed spectra of SN 2014J and archival data of SN 2011fe confirm this conclusion. We further studied the plateau phase of the R-band light curve of SN 2014J and searched for features which contribute to the flux. The theoretical models do not clearly indicate a new feature that may cause the R-band plateau phase. However, Co II features in the range of 6500 - 7000 Å and the Fe II feature of the I-band are clearly seen in the theoretical spectra, but do not appear to ...
GROWTH ANALYSIS AND ASSESSMENT OF PIG’S BIOLOGICAL MAXIMUM
Dragutin Vincek
2010-06-01
The aim of this study was to determine a mathematical model which can be used to describe the growth of domestic animals in an attempt to predict the optimal time of slaughter/weight or the development of body parts or tissues and estimate the biological maximum. The study was conducted on 60 pigs (30 barrows and 30 gilts) in the interval between the age of 49 and 215 days. By applying the generalized logistic function, the growth of live weight and tissues was described. The observed gilts reached the inflection point in approximately 121 days (I = 70.7 kg). The point at which the interval of intensive growth starts was at the age of approximately 42 days (TB = 17.35 kg), and the saturation point the pigs reached at the age of 200.5 days (TC = 126.74 kg). The estimated biological maximum weight of gilts was 179.79 kg. The barrows reached the inflection point in approximately 149 days (I = 92.2 kg). The point at which the interval of intensive growth starts was estimated at the age of approximately 52 days (TB = 22.93 kg), and the saturation point the barrows reached at the age of 245 days (TC = 164.8 kg). The estimated biological maximum weight of barrows was 233.25 kg. Muscle tissue of gilts reached the inflection point (I = 28.46 kg) in approximately 110 days. The point at which the interval of intensive growth of muscle tissue starts (TB = 6.06 kg) was estimated at approximately 53 days, and the saturation point of growth (TC = 52.25 kg) the muscle tissue of gilts reached at the age of 162 days. The estimated maximum biological growth of muscle tissue in gilts was 75.79 kg. The muscle tissue of barrows reached the inflection point (I = 28.78 kg) in approximately 118 days, and the point at which the interval of intensive growth starts (TB = 6.36 kg) at the age of approximately 35 days. The saturation point of muscle tissue growth in barrows (TC = 52.51 kg) was reached at the age of 202 days. The estimated maximum biological growth of muscle tissue in barrows was 75.74 kg.
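A generalized logistic growth curve of the kind used here can be sketched with the Richards form. The exact parameterization and the rate constant k below are assumptions (the paper does not state them); the shape parameter ν is solved so the curve inflects at the gilt weight reported in the abstract:

```python
import math

# Reported values for gilts: asymptote (biological maximum), inflection-point
# weight and age; k is an illustrative rate constant, not a fitted value.
A, W_infl, t_infl, k = 179.79, 70.7, 121.0, 0.02

def inflection_fraction(nu):
    # For W(t) = A * (1 + nu*exp(-k*(t - t_infl)))**(-1/nu),
    # the inflection occurs at weight A * (1 + nu)**(-1/nu).
    return (1.0 + nu) ** (-1.0 / nu)

# Solve for the shape parameter nu by bisection so the curve inflects at W_infl
# (the fraction rises monotonically from 1/e toward 1/2 as nu goes 0 -> 1).
lo, hi = 1e-9, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if A * inflection_fraction(mid) < W_infl:
        lo = mid
    else:
        hi = mid
nu = 0.5 * (lo + hi)

def richards(t):
    return A * (1.0 + nu * math.exp(-k * (t - t_infl))) ** (-1.0 / nu)

print(round(richards(t_infl), 1))  # prints 70.7, the inflection weight
```

The flexibility matters here: a plain logistic would force the inflection weight to A/2 ≈ 89.9 kg, whereas the reported 70.7 kg requires the extra shape parameter.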
Boundary condition effects on maximum groundwater withdrawal in coastal aquifers.
Lu, Chunhui; Chen, Yiming; Luo, Jian
2012-01-01
Prevention of sea water intrusion in coastal aquifers subject to groundwater withdrawal requires optimization of well pumping rates to maximize the water supply while avoiding sea water intrusion. Boundary conditions and the aquifer domain size have significant influences on simulating flow and concentration fields and estimating maximum pumping rates. In this study, an analytical solution is derived based on potential-flow theory for evaluating maximum groundwater pumping rates in a domain with a constant-hydraulic-head landward boundary. An empirical correction factor, which was introduced by Pool and Carrera (2011) to account for mixing in the case with a constant-recharge-rate boundary condition, is found to be applicable also for the case with a constant-hydraulic-head boundary condition, and therefore greatly improves the usefulness of the sharp-interface analytical solution. Compared with the solution for a constant-recharge-rate boundary, we find that a constant-hydraulic-head boundary often yields larger estimates of the maximum pumping rate, and that when the domain size is five times greater than the distance between the well and the coastline, the effect of setting different landward boundary conditions becomes insignificant, with a relative difference between the two solutions of less than 2.5%. These findings can serve as preliminary guidance for conducting numerical simulations and designing tank-scale laboratory experiments for studying groundwater withdrawal problems in coastal aquifers with minimized boundary condition effects.
Maximum orbit plane change with heat-transfer-rate considerations
Lee, J. Y.; Hull, D. G.
1990-01-01
Two aerodynamic maneuvers are considered for maximizing the plane change of a circular orbit: gliding flight with a maximum thrust segment to regain lost energy (aeroglide) and constant altitude cruise with the thrust being used to cancel the drag and maintain a high energy level (aerocruise). In both cases, the stagnation heating rate is limited. For aeroglide, the controls are the angle of attack, the bank angle, the time at which the burn begins, and the length of the burn. For aerocruise, the maneuver is divided into three segments: descent, cruise, and ascent. During descent the thrust is zero, and the controls are the angle of attack and the bank angle. During cruise, the only control is the assumed-constant angle of attack. During ascent, a maximum thrust segment is used to restore lost energy, and the controls are the angle of attack and bank angle. The optimization problems are solved with a nonlinear programming code known as GRG2. Numerical results for the Maneuverable Re-entry Research Vehicle with a heating-rate limit of 100 Btu/ft(2)-s show that aerocruise gives a maximum plane change of 2 deg, which is only 1 deg larger than that of aeroglide. On the other hand, even though aerocruise requires two thrust levels, the cruise characteristics of constant altitude, velocity, thrust, and angle of attack are easy to control.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm, and 80 mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Fratamico, Pina M; Wasilenko, Jamie L; Garman, Bradley; Demarco, Daniel R; Varkey, Stephen; Jensen, Mark; Rhoden, Kyle; Tice, George
2014-02-01
The "top-six" non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) most frequently associated with outbreaks and cases of foodborne illnesses have been declared as adulterants in beef by the U.S. Department of Agriculture Food Safety and Inspection Service (FSIS). Regulatory testing in beef began in June 2012. The purpose of this study was to evaluate the DuPont BAX System method for detecting these top six STEC strains and strains of E. coli O157:H7. For STEC, the BAX System real-time STEC suite was evaluated, including a screening assay for the stx and eae virulence genes and two panel assays to identify the target serogroups: panel 1 detects O26, O111, and O121, and panel 2 detects O45, O103, and O145. For E. coli O157:H7, the BAX System real-time PCR assay for this specific serotype was used. Sensitivity of each assay for the PCR targets was ≥1.23 × 10^3 CFU/ml in pure culture. Each assay was 100% inclusive for the strains tested (20 to 50 per assay), and no cross-reactivity with closely related strains was observed in any of the assays. The performance of the BAX System methods was compared with that of the FSIS Microbiology Laboratory Guidebook (MLG) methods for detection of the top six STEC and E. coli O157:H7 strains in ground beef and beef trim. Generally, results of the BAX System method were similar to those of the MLG methods for detecting non-O157 STEC and E. coli O157:H7. Reducing or eliminating novobiocin in modified tryptic soy broth (mTSB) may improve the detection of STEC O111 strains; one beef trim sample inoculated with STEC O111 produced a negative result when enriched in mTSB with 8 mg/liter novobiocin but was positive when enriched in mTSB without novobiocin. The results of this study indicate the feasibility of deploying a panel of real-time PCR assay configurations for the detection and monitoring of the top six STEC and E. coli O157:H7 strains in beef. The approach could easily be adapted
Vigor and Seedling Time of Tobacco Seed Produced in High and Low Altitudes
宋碧清; 郑昀晔; 马文广; 牛永志; 索文龙; 李元君
2014-01-01
In order to investigate possible differences in seed vigor and seedling establishment between tobacco seed produced at high and low altitudes, seeds of MS Yunyan 85, MS Yunyan 87 and MS K326 produced at low altitude (Banna) and at high altitudes (Qujing and Zhaotong) were germinated at room temperature (26 °C) and low temperature (13 °C), respectively. Emergence time and seedling time were also compared in six tobacco-growing regions in 2012-2013. The results showed that germination speed and germination percentage were almost identical at room temperature. Under low temperature, the germination percentages of MS Yunyan 85 and MS Yunyan 87 seeds produced at low altitude were higher than those of seeds produced at high altitude. Seeds sown in the same tobacco-growing region in the same year emerged at almost the same time, and the seedling time was the same. Thus, the altitude of the seed production site does not affect the vigor or emergence rate of tobacco seed; seed vigor and the speed of emergence are instead related to the temperature during germination and emergence.
The maximum intelligible range of the human voice
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
Promoter recognition based on the maximum entropy hidden Markov model.
Zhao, Xiao-yu; Zhang, Jin; Chen, Yuan-yuan; Li, Qiang; Yang, Tao; Pian, Cong; Zhang, Liang-yun
2014-08-01
Since the rapid development of genome sequencing has produced large-scale data, the current work uses bioinformatics methods to recognize different gene regions, such as exons, introns and promoters, which play an important role in gene regulation. In this paper, we introduce a new method based on the maximum entropy Markov model (MEMM) to recognize the promoter, which conditions on the biological features of the promoter. However, it leads to a high false positive rate (FPR). In order to reduce the FPR, we provide another new method based on the maximum entropy hidden Markov model (ME-HMM) without the independence assumption, which can also accommodate the biological features effectively. To demonstrate the precision, the new methods are implemented in the R language, and the hidden Markov model (HMM) is introduced for comparison. The experimental results show that the new methods not only overcome the shortcomings of the HMM but also have their own advantages: the MEMM is excellent for identifying conserved signals, and the ME-HMM demonstrably improves the true positive rate.
Color Image Enhancement Based on Maximum Fuzzy Entropy
QU Yi; XU Li-hong; KANG Qi
2004-01-01
A color image enhancement approach based on maximum fuzzy entropy and a genetic algorithm is proposed in this paper. It enhances color images by stretching the contrast of the S and I components respectively in the HSI color representation. The image is transformed from the property domain to the fuzzy domain with the S-function. To preserve as much information as possible in the fuzzy domain, the fuzzy entropy function is used as the objective function in a genetic algorithm to optimize the three parameters of the S-function. The Sigmoid function is applied to intensify the membership values, and the results are transformed back to the property domain to produce the enhanced image. Experiments show the effectiveness of the approach.
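The two building blocks named in this abstract, the S-function membership mapping and a fuzzy entropy objective, can be sketched as follows. The genetic-algorithm search over the parameters a, b, c is omitted, and a Shannon-style (De Luca-Termini) entropy is assumed, since the abstract does not give the exact form:

```python
import math

def s_function(x, a, b, c):
    # Zadeh's S-function: maps a property value (e.g. saturation or
    # intensity) into a fuzzy membership in [0, 1]; here b = (a + c) / 2.
    if x <= a:
        return 0.0
    if x <= b:
        return 2.0 * ((x - a) / (c - a)) ** 2
    if x <= c:
        return 1.0 - 2.0 * ((x - c) / (c - a)) ** 2
    return 1.0

def fuzzy_entropy(values, a, b, c):
    # De Luca-Termini fuzzy entropy: maximal when memberships sit near
    # 0.5 (maximal fuzziness), zero when they are crisp (0 or 1).
    def h(mu):
        if mu <= 0.0 or mu >= 1.0:
            return 0.0
        return -mu * math.log2(mu) - (1.0 - mu) * math.log2(1.0 - mu)
    return sum(h(s_function(v, a, b, c)) for v in values) / len(values)

# Membership rises smoothly from 0 at a to 1 at c, crossing 0.5 at b.
print(s_function(5.0, 0.0, 5.0, 10.0))  # prints 0.5
```

In the full method, a genetic algorithm would evaluate fuzzy_entropy over a channel's pixel values for candidate (a, b, c) triples and keep the maximizer before applying the Sigmoid intensification.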
Maximum Flow in Planar Networks with Exponentially Distributed Arc Capacities.
1984-12-01
...avoid constructing the dual, are described in Itai and Shiloach [1979]. In this paper, we consider the maximum flow problem in (s,t) planar networks... use arc e and lies completely below P. If no such path exists we say P(e) = *. An algorithm to construct P(e) given P and e is described in Itai and... suggested in Ford and Fulkerson [1956], developed in Berge and Ghouila-Houri [1962], and its time complexity is reduced to O(|V| log |V|) by Itai and...
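For contrast with the planar "uppermost path" approach this abstract builds on, a generic maximum-flow computation (Edmonds-Karp, which does not exploit planarity) can be sketched as follows; the small (s,t) network is illustrative:

```python
from collections import deque

def max_flow(capacity, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest residual path (BFS).
    n = len(capacity)
    residual = [row[:] for row in capacity]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total  # no augmenting path left
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck

# A small planar (s,t) network: node 0 = s, node 3 = t.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # prints 4
```

The planar algorithms the paper discusses instead push flow along the geometrically uppermost s-t path, which is what makes the O(|V| log |V|) bound possible.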
Implementation of the Maximum Entropy Method for Analytic Continuation
Levy, Ryan; Gull, Emanuel
2016-01-01
We present $\\texttt{Maxent}$, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail.